2026-03-10T10:03:37.786 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T10:03:37.803 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T10:03:37.830 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/995
branch: squid
description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests}
email: null
first_in_suite: false
flavor: default
job_id: '995'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      client:
        debug ms: 1
      global:
        mon election default strategy: 3
        ms bind msgr1: false
        ms bind msgr2: true
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon warn on pool no app: false
      osd:
        debug ms: 1
        debug osd: 20
        osd class default list: '*'
        osd class load list: '*'
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - reached quota
    - but it is still running
    - overall HEALTH_
    - \(POOL_FULL\)
    - \(SMALLER_PGP_NUM\)
    - \(CACHE_POOL_NO_HIT_SET\)
    - \(CACHE_POOL_NEAR_FULL\)
    - \(POOL_APP_NOT_ENABLED\)
    - \(PG_AVAILABILITY\)
    - \(PG_DEGRADED\)
    - CEPHADM_STRAY_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: root
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm04.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAHoWoTdjYYlCigWkis0w8NEDCQRwXylk5NwC8L1TzgyzKoF0EUoUzhMwnt9+faFbRgsJvytFcUm7lBriEdihlk=
  vm07.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGHWPA+bf3T2JC2MusS656fdZJxHfgYOUShDuUSdX5kQQ7WbRxZxMXc9dLGXO5h4Frw3cKyzcSTskU+nWnkjWcc=
tasks:
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test.sh
      - rados/test_pool_quota.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T10:03:37.830 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T10:03:37.831 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T10:03:37.831 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T10:03:37.831 INFO:teuthology.task.internal:Checking packages...
2026-03-10T10:03:37.831 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T10:03:37.831 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T10:03:37.831 INFO:teuthology.packaging:ref: None
2026-03-10T10:03:37.831 INFO:teuthology.packaging:tag: None
2026-03-10T10:03:37.831 INFO:teuthology.packaging:branch: squid
2026-03-10T10:03:37.831 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T10:03:37.831 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-10T10:03:38.497 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-10T10:03:38.498 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T10:03:38.498 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T10:03:38.498 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T10:03:38.499 INFO:teuthology.task.internal:Saving configuration
2026-03-10T10:03:38.503 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T10:03:38.504 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T10:03:38.511 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm04.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/995', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 10:02:22.311264', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:04', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAHoWoTdjYYlCigWkis0w8NEDCQRwXylk5NwC8L1TzgyzKoF0EUoUzhMwnt9+faFbRgsJvytFcUm7lBriEdihlk='}
2026-03-10T10:03:38.517 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm07.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/995', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 10:02:22.311673', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:07', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGHWPA+bf3T2JC2MusS656fdZJxHfgYOUShDuUSdX5kQQ7WbRxZxMXc9dLGXO5h4Frw3cKyzcSTskU+nWnkjWcc='}
2026-03-10T10:03:38.518 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T10:03:38.518 INFO:teuthology.task.internal:roles: ubuntu@vm04.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a']
2026-03-10T10:03:38.518 INFO:teuthology.task.internal:roles: ubuntu@vm07.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a']
2026-03-10T10:03:38.518 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T10:03:38.525 DEBUG:teuthology.task.console_log:vm04 does not support IPMI; excluding
2026-03-10T10:03:38.532 DEBUG:teuthology.task.console_log:vm07 does not support IPMI; excluding
2026-03-10T10:03:38.532 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f4062c1e170>, signals=[15])
2026-03-10T10:03:38.532 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T10:03:38.533 INFO:teuthology.task.internal:Opening connections...
2026-03-10T10:03:38.533 DEBUG:teuthology.task.internal:connecting to ubuntu@vm04.local
2026-03-10T10:03:38.533 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T10:03:38.592 DEBUG:teuthology.task.internal:connecting to ubuntu@vm07.local
2026-03-10T10:03:38.624 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T10:03:38.685 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T10:03:38.686 DEBUG:teuthology.orchestra.run.vm04:> uname -m
2026-03-10T10:03:38.689 INFO:teuthology.orchestra.run.vm04.stdout:x86_64
2026-03-10T10:03:38.689 DEBUG:teuthology.orchestra.run.vm04:> cat /etc/os-release
2026-03-10T10:03:38.735 INFO:teuthology.orchestra.run.vm04.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T10:03:38.735 INFO:teuthology.orchestra.run.vm04.stdout:NAME="Ubuntu"
2026-03-10T10:03:38.736 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_ID="22.04"
2026-03-10T10:03:38.736 INFO:teuthology.orchestra.run.vm04.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T10:03:38.736 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_CODENAME=jammy
2026-03-10T10:03:38.736 INFO:teuthology.orchestra.run.vm04.stdout:ID=ubuntu
2026-03-10T10:03:38.736 INFO:teuthology.orchestra.run.vm04.stdout:ID_LIKE=debian
2026-03-10T10:03:38.736 INFO:teuthology.orchestra.run.vm04.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T10:03:38.736 INFO:teuthology.orchestra.run.vm04.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T10:03:38.736 INFO:teuthology.orchestra.run.vm04.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T10:03:38.736 INFO:teuthology.orchestra.run.vm04.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T10:03:38.736 INFO:teuthology.orchestra.run.vm04.stdout:UBUNTU_CODENAME=jammy
2026-03-10T10:03:38.736 INFO:teuthology.lock.ops:Updating vm04.local on lock server
2026-03-10T10:03:38.775 DEBUG:teuthology.orchestra.run.vm07:> uname -m
2026-03-10T10:03:38.798 INFO:teuthology.orchestra.run.vm07.stdout:x86_64
2026-03-10T10:03:38.804 DEBUG:teuthology.orchestra.run.vm07:> cat /etc/os-release
2026-03-10T10:03:38.842 INFO:teuthology.orchestra.run.vm07.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T10:03:38.842 INFO:teuthology.orchestra.run.vm07.stdout:NAME="Ubuntu"
2026-03-10T10:03:38.842 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_ID="22.04"
2026-03-10T10:03:38.842 INFO:teuthology.orchestra.run.vm07.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T10:03:38.842 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_CODENAME=jammy
2026-03-10T10:03:38.842 INFO:teuthology.orchestra.run.vm07.stdout:ID=ubuntu
2026-03-10T10:03:38.842 INFO:teuthology.orchestra.run.vm07.stdout:ID_LIKE=debian
2026-03-10T10:03:38.842 INFO:teuthology.orchestra.run.vm07.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T10:03:38.842 INFO:teuthology.orchestra.run.vm07.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T10:03:38.843 INFO:teuthology.orchestra.run.vm07.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T10:03:38.843 INFO:teuthology.orchestra.run.vm07.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T10:03:38.843 INFO:teuthology.orchestra.run.vm07.stdout:UBUNTU_CODENAME=jammy
2026-03-10T10:03:38.843 INFO:teuthology.lock.ops:Updating vm07.local on lock server
2026-03-10T10:03:38.859 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T10:03:38.861 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T10:03:38.862 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T10:03:38.862 DEBUG:teuthology.orchestra.run.vm04:> test '!' -e /home/ubuntu/cephtest
2026-03-10T10:03:38.863 DEBUG:teuthology.orchestra.run.vm07:> test '!' -e /home/ubuntu/cephtest
2026-03-10T10:03:38.886 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T10:03:38.887 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T10:03:38.887 DEBUG:teuthology.orchestra.run.vm04:> test -z $(ls -A /var/lib/ceph)
2026-03-10T10:03:38.909 DEBUG:teuthology.orchestra.run.vm07:> test -z $(ls -A /var/lib/ceph)
2026-03-10T10:03:38.911 INFO:teuthology.orchestra.run.vm04.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T10:03:38.931 INFO:teuthology.orchestra.run.vm07.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T10:03:38.931 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T10:03:38.939 DEBUG:teuthology.orchestra.run.vm04:> test -e /ceph-qa-ready
2026-03-10T10:03:38.955 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:03:39.194 DEBUG:teuthology.orchestra.run.vm07:> test -e /ceph-qa-ready
2026-03-10T10:03:39.197 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:03:39.455 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T10:03:39.456 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T10:03:39.456 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T10:03:39.457 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T10:03:39.459 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T10:03:39.461 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T10:03:39.462 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T10:03:39.462 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T10:03:39.501 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T10:03:39.507 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T10:03:39.508 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T10:03:39.508 DEBUG:teuthology.orchestra.run.vm04:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T10:03:39.546 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:03:39.546 DEBUG:teuthology.orchestra.run.vm07:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T10:03:39.549 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:03:39.549 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T10:03:39.589 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T10:03:39.597 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T10:03:39.599 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T10:03:39.602 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T10:03:39.604 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T10:03:39.605 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T10:03:39.606 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T10:03:39.606 DEBUG:teuthology.orchestra.run.vm04:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T10:03:39.649 DEBUG:teuthology.orchestra.run.vm07:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T10:03:39.657 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T10:03:39.659 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T10:03:39.659 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T10:03:39.701 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T10:03:39.704 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T10:03:39.747 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T10:03:39.791 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-10T10:03:39.791 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T10:03:39.842 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T10:03:39.844 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T10:03:39.893 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T10:03:39.893 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T10:03:39.943 DEBUG:teuthology.orchestra.run.vm04:> sudo service rsyslog restart
2026-03-10T10:03:39.944 DEBUG:teuthology.orchestra.run.vm07:> sudo service rsyslog restart
2026-03-10T10:03:40.000 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T10:03:40.002 INFO:teuthology.task.internal:Starting timer...
2026-03-10T10:03:40.002 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T10:03:40.005 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T10:03:40.007 INFO:teuthology.task.selinux:Excluding vm04: VMs are not yet supported
2026-03-10T10:03:40.007 INFO:teuthology.task.selinux:Excluding vm07: VMs are not yet supported
2026-03-10T10:03:40.007 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T10:03:40.007 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T10:03:40.007 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T10:03:40.007 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T10:03:40.008 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T10:03:40.009 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T10:03:40.030 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T10:03:40.509 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T10:03:40.514 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T10:03:40.514 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory154uibhp --limit vm04.local,vm07.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T10:05:58.152 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm04.local'), Remote(name='ubuntu@vm07.local')]
2026-03-10T10:05:58.153 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm04.local'
2026-03-10T10:05:58.153 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T10:05:58.216 DEBUG:teuthology.orchestra.run.vm04:> true
2026-03-10T10:05:58.432 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm04.local'
2026-03-10T10:05:58.433 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm07.local'
2026-03-10T10:05:58.433 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T10:05:58.492 DEBUG:teuthology.orchestra.run.vm07:> true
2026-03-10T10:05:58.700 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm07.local'
2026-03-10T10:05:58.701 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T10:05:58.703 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T10:05:58.703 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T10:05:58.703 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T10:05:58.705 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T10:05:58.705 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: Command line: ntpd -gq
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: ----------------------------------------------------
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: corporation. Support and training for ntp-4 are
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: available at https://www.nwtime.org/support
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: ----------------------------------------------------
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: proto: precision = 0.029 usec (-25)
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: basedate set to 2022-02-04
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: gps base set to 2022-02-06 (week 2196)
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stderr:10 Mar 10:05:58 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: Listen normally on 3 ens3 192.168.123.104:123
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: Listen normally on 4 lo [::1]:123
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:4%2]:123
2026-03-10T10:05:58.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:58 ntpd[16110]: Listening on routing socket on fd #22 for interface updates
2026-03-10T10:05:58.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T10:05:58.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: Command line: ntpd -gq
2026-03-10T10:05:58.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: ----------------------------------------------------
2026-03-10T10:05:58.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T10:05:58.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T10:05:58.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: corporation. Support and training for ntp-4 are
2026-03-10T10:05:58.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: available at https://www.nwtime.org/support
2026-03-10T10:05:58.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: ----------------------------------------------------
2026-03-10T10:05:58.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: proto: precision = 0.029 usec (-25)
2026-03-10T10:05:58.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: basedate set to 2022-02-04
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: gps base set to 2022-02-06 (week 2196)
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stderr:10 Mar 10:05:58 ntpd[16095]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: Listen normally on 3 ens3 192.168.123.107:123
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: Listen normally on 4 lo [::1]:123
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:7%2]:123
2026-03-10T10:05:58.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:58 ntpd[16095]: Listening on routing socket on fd #22 for interface updates
2026-03-10T10:05:59.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:05:59 ntpd[16110]: Soliciting pool server 85.215.189.120
2026-03-10T10:05:59.758 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:05:59 ntpd[16095]: Soliciting pool server 85.215.166.214
2026-03-10T10:06:00.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:00 ntpd[16110]: Soliciting pool server 93.241.86.156
2026-03-10T10:06:00.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:00 ntpd[16110]: Soliciting pool server 129.70.132.35
2026-03-10T10:06:00.756 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:00 ntpd[16095]: Soliciting pool server 85.215.189.120
2026-03-10T10:06:00.757 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:00 ntpd[16095]: Soliciting pool server 136.243.147.210
2026-03-10T10:06:01.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:01 ntpd[16110]: Soliciting pool server 162.159.200.123
2026-03-10T10:06:01.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:01 ntpd[16110]: Soliciting pool server 94.130.184.193
2026-03-10T10:06:01.756 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:01 ntpd[16095]: Soliciting pool server 129.70.132.35
2026-03-10T10:06:01.756 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:01 ntpd[16095]: Soliciting pool server 93.241.86.156
2026-03-10T10:06:01.756 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:01 ntpd[16095]: Soliciting pool server 94.130.184.193
2026-03-10T10:06:02.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:02 ntpd[16110]: Soliciting pool server 217.160.19.219
2026-03-10T10:06:02.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:02 ntpd[16110]: Soliciting pool server 85.215.166.214
2026-03-10T10:06:02.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:02 ntpd[16110]: Soliciting pool server 31.209.85.242
2026-03-10T10:06:02.756 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:02 ntpd[16095]: Soliciting pool server 217.160.19.219
2026-03-10T10:06:02.756 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:02 ntpd[16095]: Soliciting pool server 162.159.200.123
2026-03-10T10:06:02.756 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:02 ntpd[16095]: Soliciting pool server 178.215.228.24
2026-03-10T10:06:03.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:03 ntpd[16110]: Soliciting pool server 213.209.109.44
2026-03-10T10:06:03.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:03 ntpd[16110]: Soliciting pool server 212.132.108.186
2026-03-10T10:06:03.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:03 ntpd[16110]: Soliciting pool server 185.125.190.58
2026-03-10T10:06:03.755 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:03 ntpd[16095]: Soliciting pool server 31.209.85.242
2026-03-10T10:06:03.755 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:03 ntpd[16095]: Soliciting pool server 212.132.108.186
2026-03-10T10:06:03.755 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:03 ntpd[16095]: Soliciting pool server 185.125.190.58
2026-03-10T10:06:04.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:04 ntpd[16110]: Soliciting pool server 185.125.190.57
2026-03-10T10:06:04.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:04 ntpd[16110]: Soliciting pool server 172.104.134.72
2026-03-10T10:06:04.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:04 ntpd[16110]: Soliciting pool server 78.47.118.0
2026-03-10T10:06:04.754 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:04 ntpd[16095]: Soliciting pool server 185.125.190.57
2026-03-10T10:06:04.755 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:04 ntpd[16095]: Soliciting pool server 213.209.109.44
2026-03-10T10:06:04.755 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:04 ntpd[16095]: Soliciting pool server 78.47.118.0
2026-03-10T10:06:05.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:05 ntpd[16110]: Soliciting pool server 91.189.91.157
2026-03-10T10:06:05.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:05 ntpd[16110]: Soliciting pool server 178.215.228.24
2026-03-10T10:06:05.719 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:05 ntpd[16110]: Soliciting pool server 2a01:4f8:271:2dec::2
2026-03-10T10:06:05.754 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:05 ntpd[16095]: Soliciting pool server 91.189.91.157
2026-03-10T10:06:05.754 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:05 ntpd[16095]: Soliciting pool server 172.104.134.72
2026-03-10T10:06:05.754 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:05 ntpd[16095]: Soliciting pool server 2a01:4f8:1c1c:8183::
2026-03-10T10:06:06.718 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:06 ntpd[16110]: Soliciting pool server 185.125.190.56
2026-03-10T10:06:06.747 INFO:teuthology.orchestra.run.vm04.stdout:10 Mar 10:06:06 ntpd[16110]: ntpd: time slew -0.002478 s
2026-03-10T10:06:06.748 INFO:teuthology.orchestra.run.vm04.stdout:ntpd: time slew -0.002478s
2026-03-10T10:06:06.768 INFO:teuthology.orchestra.run.vm04.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T10:06:06.768 INFO:teuthology.orchestra.run.vm04.stdout:==============================================================================
2026-03-10T10:06:06.768 INFO:teuthology.orchestra.run.vm04.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T10:06:06.768 INFO:teuthology.orchestra.run.vm04.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T10:06:06.768 INFO:teuthology.orchestra.run.vm04.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T10:06:06.768 INFO:teuthology.orchestra.run.vm04.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T10:06:06.768 INFO:teuthology.orchestra.run.vm04.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T10:06:06.783 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 10:06:06 ntpd[16095]: ntpd: time slew +0.009607 s
2026-03-10T10:06:06.783 INFO:teuthology.orchestra.run.vm07.stdout:ntpd: time slew +0.009607s
2026-03-10T10:06:06.802 INFO:teuthology.orchestra.run.vm07.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T10:06:06.802 INFO:teuthology.orchestra.run.vm07.stdout:==============================================================================
2026-03-10T10:06:06.802 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T10:06:06.802 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T10:06:06.802 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T10:06:06.802 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T10:06:06.802 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T10:06:06.803 INFO:teuthology.run_tasks:Running task install...
2026-03-10T10:06:06.804 DEBUG:teuthology.task.install:project ceph
2026-03-10T10:06:06.804 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T10:06:06.805 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T10:06:06.805 INFO:teuthology.task.install:Using flavor: default
2026-03-10T10:06:06.807 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T10:06:06.807 INFO:teuthology.task.install:extra packages: []
2026-03-10T10:06:06.807 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-key list | grep Ceph
2026-03-10T10:06:06.807 DEBUG:teuthology.orchestra.run.vm07:> sudo apt-key list | grep Ceph
2026-03-10T10:06:06.847 INFO:teuthology.orchestra.run.vm04.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-10T10:06:06.866 INFO:teuthology.orchestra.run.vm04.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-10T10:06:06.867 INFO:teuthology.orchestra.run.vm04.stdout:uid [ unknown] Ceph.com (release key)
2026-03-10T10:06:06.867 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-10T10:06:06.867 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-10T10:06:06.867 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T10:06:06.881 INFO:teuthology.orchestra.run.vm07.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-10T10:06:06.899 INFO:teuthology.orchestra.run.vm07.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-10T10:06:06.899 INFO:teuthology.orchestra.run.vm07.stdout:uid [ unknown] Ceph.com (release key)
2026-03-10T10:06:06.899 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-10T10:06:06.899 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-10T10:06:06.900 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T10:06:07.477 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-10T10:06:07.477 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-10T10:06:07.568 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-10T10:06:07.568 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-10T10:06:08.047 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-10T10:06:08.047 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-10T10:06:08.054 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-get update
2026-03-10T10:06:08.085 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T10:06:08.085 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-10T10:06:08.092 DEBUG:teuthology.orchestra.run.vm07:> sudo apt-get update
2026-03-10T10:06:08.237 INFO:teuthology.orchestra.run.vm04.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T10:06:08.285 INFO:teuthology.orchestra.run.vm07.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T10:06:08.289 INFO:teuthology.orchestra.run.vm07.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T10:06:08.297 INFO:teuthology.orchestra.run.vm07.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T10:06:08.374 INFO:teuthology.orchestra.run.vm07.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T10:06:08.699 INFO:teuthology.orchestra.run.vm07.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-10T10:06:08.723 INFO:teuthology.orchestra.run.vm04.stdout:Ign:2 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-10T10:06:08.811 INFO:teuthology.orchestra.run.vm07.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-10T10:06:08.840 INFO:teuthology.orchestra.run.vm04.stdout:Get:3 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-10T10:06:08.848 INFO:teuthology.orchestra.run.vm04.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T10:06:08.924 INFO:teuthology.orchestra.run.vm07.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-10T10:06:08.947 INFO:teuthology.orchestra.run.vm04.stdout:Hit:5 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T10:06:08.957 INFO:teuthology.orchestra.run.vm04.stdout:Ign:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-10T10:06:09.036 INFO:teuthology.orchestra.run.vm07.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-10T10:06:09.046 INFO:teuthology.orchestra.run.vm04.stdout:Hit:7 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T10:06:09.075 INFO:teuthology.orchestra.run.vm04.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-10T10:06:09.107 INFO:teuthology.orchestra.run.vm07.stdout:Fetched 25.8 kB in 1s (29.7 kB/s)
2026-03-10T10:06:09.151 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 25.8 kB in 1s (27.1 kB/s)
2026-03-10T10:06:09.768 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T10:06:09.781 DEBUG:teuthology.orchestra.run.vm07:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-10T10:06:09.795 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:06:09.810 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-10T10:06:09.817 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T10:06:09.842 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:06:10.000 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:06:10.000 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:06:10.029 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-10T10:06:10.029 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-10T10:06:10.155 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:06:10.155 INFO:teuthology.orchestra.run.vm07.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T10:06:10.156 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T10:06:10.156 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:06:10.156 INFO:teuthology.orchestra.run.vm07.stdout:The following additional packages will be installed:
2026-03-10T10:06:10.156 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-10T10:06:10.156 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-10T10:06:10.156 INFO:teuthology.orchestra.run.vm07.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T10:06:10.156 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-10T10:06:10.157 INFO:teuthology.orchestra.run.vm07.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-10T10:06:10.158 INFO:teuthology.orchestra.run.vm07.stdout:Suggested packages:
2026-03-10T10:06:10.159 INFO:teuthology.orchestra.run.vm07.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-10T10:06:10.159 INFO:teuthology.orchestra.run.vm07.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-10T10:06:10.159 INFO:teuthology.orchestra.run.vm07.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-10T10:06:10.159 INFO:teuthology.orchestra.run.vm07.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-10T10:06:10.159 INFO:teuthology.orchestra.run.vm07.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-10T10:06:10.159 INFO:teuthology.orchestra.run.vm07.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-10T10:06:10.159 INFO:teuthology.orchestra.run.vm07.stdout: smart-notifier mailx | mailutils
2026-03-10T10:06:10.159 INFO:teuthology.orchestra.run.vm07.stdout:Recommended packages:
2026-03-10T10:06:10.159 INFO:teuthology.orchestra.run.vm07.stdout: btrfs-tools
2026-03-10T10:06:10.197 INFO:teuthology.orchestra.run.vm07.stdout:The following NEW packages will be installed:
2026-03-10T10:06:10.197 INFO:teuthology.orchestra.run.vm07.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-10T10:06:10.197 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-10T10:06:10.197 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-10T10:06:10.197 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-10T10:06:10.197 INFO:teuthology.orchestra.run.vm07.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-10T10:06:10.197 INFO:teuthology.orchestra.run.vm07.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: socat unzip xmlstarlet zip
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be upgraded:
2026-03-10T10:06:10.198 INFO:teuthology.orchestra.run.vm07.stdout: librados2 librbd1
2026-03-10T10:06:10.206 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:06:10.206 INFO:teuthology.orchestra.run.vm04.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T10:06:10.206 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T10:06:10.207 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:06:10.207 INFO:teuthology.orchestra.run.vm04.stdout:The following additional packages will be installed:
2026-03-10T10:06:10.207 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-10T10:06:10.207 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-10T10:06:10.207 INFO:teuthology.orchestra.run.vm04.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T10:06:10.207 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-10T10:06:10.207 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-10T10:06:10.208 INFO:teuthology.orchestra.run.vm04.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-10T10:06:10.209 INFO:teuthology.orchestra.run.vm04.stdout:Suggested packages:
2026-03-10T10:06:10.209 INFO:teuthology.orchestra.run.vm04.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-10T10:06:10.209 INFO:teuthology.orchestra.run.vm04.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-10T10:06:10.209 INFO:teuthology.orchestra.run.vm04.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-10T10:06:10.209 INFO:teuthology.orchestra.run.vm04.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-10T10:06:10.209 INFO:teuthology.orchestra.run.vm04.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-10T10:06:10.209 INFO:teuthology.orchestra.run.vm04.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-10T10:06:10.209 INFO:teuthology.orchestra.run.vm04.stdout: smart-notifier mailx | mailutils
2026-03-10T10:06:10.209 INFO:teuthology.orchestra.run.vm04.stdout:Recommended packages:
2026-03-10T10:06:10.209 INFO:teuthology.orchestra.run.vm04.stdout: btrfs-tools
2026-03-10T10:06:10.246 INFO:teuthology.orchestra.run.vm04.stdout:The following NEW packages will be installed:
2026-03-10T10:06:10.246 INFO:teuthology.orchestra.run.vm04.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-10T10:06:10.246 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-10T10:06:10.246 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-10T10:06:10.246 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-10T10:06:10.246 INFO:teuthology.orchestra.run.vm04.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-10T10:06:10.246 INFO:teuthology.orchestra.run.vm04.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: socat unzip xmlstarlet zip
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be upgraded:
2026-03-10T10:06:10.247 INFO:teuthology.orchestra.run.vm04.stdout: librados2 librbd1
2026-03-10T10:06:10.403 INFO:teuthology.orchestra.run.vm07.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:06:10.403 INFO:teuthology.orchestra.run.vm07.stdout:Need to get 178 MB of archives.
2026-03-10T10:06:10.403 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-10T10:06:10.403 INFO:teuthology.orchestra.run.vm07.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-10T10:06:10.462 INFO:teuthology.orchestra.run.vm04.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:06:10.462 INFO:teuthology.orchestra.run.vm04.stdout:Need to get 178 MB of archives.
2026-03-10T10:06:10.462 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-10T10:06:10.462 INFO:teuthology.orchestra.run.vm04.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-10T10:06:10.572 INFO:teuthology.orchestra.run.vm07.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-10T10:06:10.577 INFO:teuthology.orchestra.run.vm07.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-10T10:06:10.612 INFO:teuthology.orchestra.run.vm07.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-10T10:06:10.637 INFO:teuthology.orchestra.run.vm04.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-10T10:06:10.642 INFO:teuthology.orchestra.run.vm04.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-10T10:06:10.678 INFO:teuthology.orchestra.run.vm04.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-10T10:06:10.713 INFO:teuthology.orchestra.run.vm07.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-10T10:06:10.717 INFO:teuthology.orchestra.run.vm07.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-10T10:06:10.731 INFO:teuthology.orchestra.run.vm07.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-10T10:06:10.735 INFO:teuthology.orchestra.run.vm07.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-10T10:06:10.736 INFO:teuthology.orchestra.run.vm07.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-10T10:06:10.736 INFO:teuthology.orchestra.run.vm07.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-10T10:06:10.737 INFO:teuthology.orchestra.run.vm07.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-10T10:06:10.745 INFO:teuthology.orchestra.run.vm07.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-10T10:06:10.747 INFO:teuthology.orchestra.run.vm07.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-10T10:06:10.749 INFO:teuthology.orchestra.run.vm07.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-10T10:06:10.781 INFO:teuthology.orchestra.run.vm04.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-10T10:06:10.783 INFO:teuthology.orchestra.run.vm07.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-10T10:06:10.784 INFO:teuthology.orchestra.run.vm07.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-10T10:06:10.785 INFO:teuthology.orchestra.run.vm07.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-10T10:06:10.787 INFO:teuthology.orchestra.run.vm07.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-10T10:06:10.789 INFO:teuthology.orchestra.run.vm07.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-10T10:06:10.789 INFO:teuthology.orchestra.run.vm07.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-10T10:06:10.789 INFO:teuthology.orchestra.run.vm04.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-10T10:06:10.789 INFO:teuthology.orchestra.run.vm07.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-10T10:06:10.790 INFO:teuthology.orchestra.run.vm07.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-10T10:06:10.800 INFO:teuthology.orchestra.run.vm04.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-10T10:06:10.804 INFO:teuthology.orchestra.run.vm04.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-10T10:06:10.805 INFO:teuthology.orchestra.run.vm04.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-10T10:06:10.805 INFO:teuthology.orchestra.run.vm04.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-10T10:06:10.806 INFO:teuthology.orchestra.run.vm04.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-10T10:06:10.815 INFO:teuthology.orchestra.run.vm04.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-10T10:06:10.817 INFO:teuthology.orchestra.run.vm04.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-10T10:06:10.819 INFO:teuthology.orchestra.run.vm04.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-10T10:06:10.822 INFO:teuthology.orchestra.run.vm07.stdout:Get:23 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB]
2026-03-10T10:06:10.824 INFO:teuthology.orchestra.run.vm07.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-10T10:06:10.824 INFO:teuthology.orchestra.run.vm07.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-10T10:06:10.825 INFO:teuthology.orchestra.run.vm07.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-10T10:06:10.825 INFO:teuthology.orchestra.run.vm07.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-10T10:06:10.825 INFO:teuthology.orchestra.run.vm07.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-10T10:06:10.836 INFO:teuthology.orchestra.run.vm04.stdout:Get:15 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB]
2026-03-10T10:06:10.854 INFO:teuthology.orchestra.run.vm04.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-10T10:06:10.855 INFO:teuthology.orchestra.run.vm04.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-10T10:06:10.856 INFO:teuthology.orchestra.run.vm04.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-10T10:06:10.857 INFO:teuthology.orchestra.run.vm04.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-10T10:06:10.859 INFO:teuthology.orchestra.run.vm04.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-10T10:06:10.859 INFO:teuthology.orchestra.run.vm04.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-10T10:06:10.859 INFO:teuthology.orchestra.run.vm04.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-10T10:06:10.860 INFO:teuthology.orchestra.run.vm07.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-10T10:06:10.860 INFO:teuthology.orchestra.run.vm04.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-10T10:06:10.862 INFO:teuthology.orchestra.run.vm07.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-10T10:06:10.862 INFO:teuthology.orchestra.run.vm07.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB]
2026-03-10T10:06:10.862 INFO:teuthology.orchestra.run.vm07.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB]
2026-03-10T10:06:10.863 INFO:teuthology.orchestra.run.vm07.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB]
2026-03-10T10:06:10.896 INFO:teuthology.orchestra.run.vm04.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-10T10:06:10.896 INFO:teuthology.orchestra.run.vm07.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B]
2026-03-10T10:06:10.896 INFO:teuthology.orchestra.run.vm04.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-10T10:06:10.896 INFO:teuthology.orchestra.run.vm07.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB]
2026-03-10T10:06:10.896 INFO:teuthology.orchestra.run.vm04.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-10T10:06:10.897 INFO:teuthology.orchestra.run.vm04.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-10T10:06:10.897 INFO:teuthology.orchestra.run.vm07.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB]
2026-03-10T10:06:10.897 INFO:teuthology.orchestra.run.vm04.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-10T10:06:10.897 INFO:teuthology.orchestra.run.vm07.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB]
2026-03-10T10:06:10.898 INFO:teuthology.orchestra.run.vm07.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB]
2026-03-10T10:06:10.931 INFO:teuthology.orchestra.run.vm07.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B]
2026-03-10T10:06:10.932 INFO:teuthology.orchestra.run.vm07.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB]
2026-03-10T10:06:10.932 INFO:teuthology.orchestra.run.vm07.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB]
2026-03-10T10:06:10.932 INFO:teuthology.orchestra.run.vm07.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB]
2026-03-10T10:06:10.933 INFO:teuthology.orchestra.run.vm04.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-10T10:06:10.933 INFO:teuthology.orchestra.run.vm07.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB]
2026-03-10T10:06:10.936 INFO:teuthology.orchestra.run.vm04.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-10T10:06:10.937 INFO:teuthology.orchestra.run.vm04.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB]
2026-03-10T10:06:10.937 INFO:teuthology.orchestra.run.vm04.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB]
2026-03-10T10:06:10.938 INFO:teuthology.orchestra.run.vm04.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB]
2026-03-10T10:06:10.967 INFO:teuthology.orchestra.run.vm07.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB]
2026-03-10T10:06:10.968 INFO:teuthology.orchestra.run.vm07.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB]
2026-03-10T10:06:10.970 INFO:teuthology.orchestra.run.vm07.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB]
2026-03-10T10:06:10.971 INFO:teuthology.orchestra.run.vm07.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB]
2026-03-10T10:06:10.971 INFO:teuthology.orchestra.run.vm07.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB]
2026-03-10T10:06:10.973 INFO:teuthology.orchestra.run.vm04.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B]
2026-03-10T10:06:10.973 INFO:teuthology.orchestra.run.vm04.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB]
2026-03-10T10:06:10.975 INFO:teuthology.orchestra.run.vm04.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB]
2026-03-10T10:06:10.976 INFO:teuthology.orchestra.run.vm04.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB]
2026-03-10T10:06:10.976 INFO:teuthology.orchestra.run.vm04.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB]
2026-03-10T10:06:11.039 INFO:teuthology.orchestra.run.vm07.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB]
2026-03-10T10:06:11.040 INFO:teuthology.orchestra.run.vm07.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB]
2026-03-10T10:06:11.040 INFO:teuthology.orchestra.run.vm07.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB]
2026-03-10T10:06:11.053 INFO:teuthology.orchestra.run.vm04.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B]
2026-03-10T10:06:11.053 INFO:teuthology.orchestra.run.vm04.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB]
2026-03-10T10:06:11.054 INFO:teuthology.orchestra.run.vm04.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB]
2026-03-10T10:06:11.054 INFO:teuthology.orchestra.run.vm04.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB]
2026-03-10T10:06:11.055 INFO:teuthology.orchestra.run.vm04.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB]
2026-03-10T10:06:11.066 INFO:teuthology.orchestra.run.vm07.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B]
2026-03-10T10:06:11.066 INFO:teuthology.orchestra.run.vm07.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB]
2026-03-10T10:06:11.066 INFO:teuthology.orchestra.run.vm07.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB]
2026-03-10T10:06:11.067 INFO:teuthology.orchestra.run.vm07.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB]
2026-03-10T10:06:11.067 INFO:teuthology.orchestra.run.vm07.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB]
2026-03-10T10:06:11.068 INFO:teuthology.orchestra.run.vm07.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB]
2026-03-10T10:06:11.069 INFO:teuthology.orchestra.run.vm04.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB]
2026-03-10T10:06:11.070 INFO:teuthology.orchestra.run.vm04.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB]
2026-03-10T10:06:11.071 INFO:teuthology.orchestra.run.vm04.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB]
2026-03-10T10:06:11.071 INFO:teuthology.orchestra.run.vm04.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB]
2026-03-10T10:06:11.090 INFO:teuthology.orchestra.run.vm07.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB]
2026-03-10T10:06:11.090 INFO:teuthology.orchestra.run.vm04.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB]
2026-03-10T10:06:11.092 INFO:teuthology.orchestra.run.vm07.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB]
2026-03-10T10:06:11.094 INFO:teuthology.orchestra.run.vm07.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB]
2026-03-10T10:06:11.118 INFO:teuthology.orchestra.run.vm04.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB]
2026-03-10T10:06:11.120 INFO:teuthology.orchestra.run.vm04.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB]
2026-03-10T10:06:11.120 INFO:teuthology.orchestra.run.vm04.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB]
2026-03-10T10:06:11.126 INFO:teuthology.orchestra.run.vm07.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
2026-03-10T10:06:11.129 INFO:teuthology.orchestra.run.vm07.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB]
2026-03-10T10:06:11.132 INFO:teuthology.orchestra.run.vm07.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB]
2026-03-10T10:06:11.133 INFO:teuthology.orchestra.run.vm07.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB]
2026-03-10T10:06:11.133 INFO:teuthology.orchestra.run.vm07.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB]
2026-03-10T10:06:11.185 INFO:teuthology.orchestra.run.vm04.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B]
2026-03-10T10:06:11.185 INFO:teuthology.orchestra.run.vm04.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB]
2026-03-10T10:06:11.186 INFO:teuthology.orchestra.run.vm04.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB]
2026-03-10T10:06:11.186 INFO:teuthology.orchestra.run.vm04.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB]
2026-03-10T10:06:11.187 INFO:teuthology.orchestra.run.vm04.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB]
2026-03-10T10:06:11.187 INFO:teuthology.orchestra.run.vm04.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB]
2026-03-10T10:06:11.189 INFO:teuthology.orchestra.run.vm04.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB]
2026-03-10T10:06:11.190 INFO:teuthology.orchestra.run.vm04.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB]
2026-03-10T10:06:11.190 INFO:teuthology.orchestra.run.vm04.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB]
2026-03-10T10:06:11.199 INFO:teuthology.orchestra.run.vm07.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB]
2026-03-10T10:06:11.222 INFO:teuthology.orchestra.run.vm04.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
2026-03-10T10:06:11.228 INFO:teuthology.orchestra.run.vm04.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB]
2026-03-10T10:06:11.230 INFO:teuthology.orchestra.run.vm04.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB]
2026-03-10T10:06:11.231 INFO:teuthology.orchestra.run.vm04.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB]
2026-03-10T10:06:11.231 INFO:teuthology.orchestra.run.vm04.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB]
2026-03-10T10:06:11.235 INFO:teuthology.orchestra.run.vm04.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB]
2026-03-10T10:06:11.236 INFO:teuthology.orchestra.run.vm04.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB]
2026-03-10T10:06:11.238 INFO:teuthology.orchestra.run.vm07.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB]
2026-03-10T10:06:11.238 INFO:teuthology.orchestra.run.vm04.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B]
2026-03-10T10:06:11.259 INFO:teuthology.orchestra.run.vm04.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB]
2026-03-10T10:06:11.259 INFO:teuthology.orchestra.run.vm04.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB]
2026-03-10T10:06:11.274 INFO:teuthology.orchestra.run.vm07.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B]
2026-03-10T10:06:11.274 INFO:teuthology.orchestra.run.vm07.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB]
2026-03-10T10:06:11.275 INFO:teuthology.orchestra.run.vm07.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB]
2026-03-10T10:06:11.275 INFO:teuthology.orchestra.run.vm07.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB]
2026-03-10T10:06:11.276 INFO:teuthology.orchestra.run.vm07.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB]
2026-03-10T10:06:11.276 INFO:teuthology.orchestra.run.vm07.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB]
2026-03-10T10:06:11.277 INFO:teuthology.orchestra.run.vm04.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB]
2026-03-10T10:06:11.278 INFO:teuthology.orchestra.run.vm04.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB]
2026-03-10T10:06:11.279 INFO:teuthology.orchestra.run.vm04.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB]
2026-03-10T10:06:11.285 INFO:teuthology.orchestra.run.vm04.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB]
2026-03-10T10:06:11.285 INFO:teuthology.orchestra.run.vm04.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB]
2026-03-10T10:06:11.286 INFO:teuthology.orchestra.run.vm04.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB]
2026-03-10T10:06:11.287 INFO:teuthology.orchestra.run.vm04.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB]
2026-03-10T10:06:11.296 INFO:teuthology.orchestra.run.vm04.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB]
2026-03-10T10:06:11.318 INFO:teuthology.orchestra.run.vm07.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB]
2026-03-10T10:06:11.318 INFO:teuthology.orchestra.run.vm07.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB]
2026-03-10T10:06:11.319 INFO:teuthology.orchestra.run.vm07.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB]
2026-03-10T10:06:11.386 INFO:teuthology.orchestra.run.vm07.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB]
2026-03-10T10:06:11.386 INFO:teuthology.orchestra.run.vm07.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB]
2026-03-10T10:06:11.413 INFO:teuthology.orchestra.run.vm04.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB]
2026-03-10T10:06:11.419 INFO:teuthology.orchestra.run.vm07.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB]
2026-03-10T10:06:11.651 INFO:teuthology.orchestra.run.vm04.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB]
2026-03-10T10:06:11.769 INFO:teuthology.orchestra.run.vm04.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB]
2026-03-10T10:06:11.781 INFO:teuthology.orchestra.run.vm04.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB]
2026-03-10T10:06:11.787 INFO:teuthology.orchestra.run.vm04.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB]
2026-03-10T10:06:11.787 INFO:teuthology.orchestra.run.vm04.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB]
2026-03-10T10:06:11.790 INFO:teuthology.orchestra.run.vm04.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB]
2026-03-10T10:06:11.791 INFO:teuthology.orchestra.run.vm04.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB]
2026-03-10T10:06:11.800 INFO:teuthology.orchestra.run.vm04.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB]
2026-03-10T10:06:12.107 INFO:teuthology.orchestra.run.vm04.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB]
2026-03-10T10:06:12.108 INFO:teuthology.orchestra.run.vm04.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB]
2026-03-10T10:06:12.116 INFO:teuthology.orchestra.run.vm04.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB]
2026-03-10T10:06:12.411 INFO:teuthology.orchestra.run.vm07.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB]
2026-03-10T10:06:13.080 INFO:teuthology.orchestra.run.vm04.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-10T10:06:13.290 INFO:teuthology.orchestra.run.vm04.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-10T10:06:13.291 INFO:teuthology.orchestra.run.vm04.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-10T10:06:13.292 INFO:teuthology.orchestra.run.vm04.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-10T10:06:13.334 INFO:teuthology.orchestra.run.vm04.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-10T10:06:13.486 INFO:teuthology.orchestra.run.vm07.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB]
2026-03-10T10:06:13.596 INFO:teuthology.orchestra.run.vm04.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-10T10:06:13.727 INFO:teuthology.orchestra.run.vm07.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB]
2026-03-10T10:06:13.844 INFO:teuthology.orchestra.run.vm07.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB]
2026-03-10T10:06:13.845 INFO:teuthology.orchestra.run.vm07.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB]
2026-03-10T10:06:13.847 INFO:teuthology.orchestra.run.vm07.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB]
2026-03-10T10:06:13.961 INFO:teuthology.orchestra.run.vm07.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB]
2026-03-10T10:06:13.966 INFO:teuthology.orchestra.run.vm07.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB]
2026-03-10T10:06:14.436 INFO:teuthology.orchestra.run.vm04.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-10T10:06:14.437 INFO:teuthology.orchestra.run.vm04.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-10T10:06:14.469 INFO:teuthology.orchestra.run.vm04.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-10T10:06:14.561 INFO:teuthology.orchestra.run.vm04.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-10T10:06:14.587 INFO:teuthology.orchestra.run.vm04.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-10T10:06:14.589 INFO:teuthology.orchestra.run.vm04.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-10T10:06:14.685 INFO:teuthology.orchestra.run.vm04.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB]
2026-03-10T10:06:15.023 INFO:teuthology.orchestra.run.vm04.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB]
2026-03-10T10:06:15.023 INFO:teuthology.orchestra.run.vm04.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB]
2026-03-10T10:06:15.869 INFO:teuthology.orchestra.run.vm07.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB]
2026-03-10T10:06:15.870 INFO:teuthology.orchestra.run.vm07.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB]
2026-03-10T10:06:15.989 INFO:teuthology.orchestra.run.vm07.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB]
2026-03-10T10:06:17.637 INFO:teuthology.orchestra.run.vm04.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-10T10:06:17.637 INFO:teuthology.orchestra.run.vm04.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-10T10:06:17.637 INFO:teuthology.orchestra.run.vm04.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-10T10:06:18.315 INFO:teuthology.orchestra.run.vm04.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-10T10:06:18.579 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 178 MB in 8s (22.1 MB/s)
2026-03-10T10:06:18.708 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-10T10:06:18.733 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.)
2026-03-10T10:06:18.734 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-10T10:06:18.736 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T10:06:18.754 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-10T10:06:18.759 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-10T10:06:18.759 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T10:06:18.773 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-10T10:06:18.777 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-10T10:06:18.778 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T10:06:18.796 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-10T10:06:18.800 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T10:06:18.803 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:18.841 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-10T10:06:18.845 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T10:06:18.846 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:18.862 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-10T10:06:18.868 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T10:06:18.869 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:18.894 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-10T10:06:18.898 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-10T10:06:18.899 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T10:06:18.922 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:18.923 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T10:06:18.995 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:18.998 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T10:06:19.067 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libnbd0.
2026-03-10T10:06:19.069 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-10T10:06:19.070 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-10T10:06:19.084 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libcephfs2.
2026-03-10T10:06:19.088 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:19.089 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:19.114 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rados.
2026-03-10T10:06:19.119 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:19.120 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:19.138 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-10T10:06:19.143 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:19.143 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:19.155 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cephfs.
2026-03-10T10:06:19.160 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:19.161 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:19.175 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-10T10:06:19.180 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:19.181 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:19.198 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-10T10:06:19.202 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-10T10:06:19.203 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T10:06:19.219 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-prettytable.
2026-03-10T10:06:19.223 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-10T10:06:19.224 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-10T10:06:19.237 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rbd.
2026-03-10T10:06:19.242 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:19.242 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:19.261 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-10T10:06:19.266 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-10T10:06:19.267 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T10:06:19.287 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-10T10:06:19.293 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-10T10:06:19.293 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T10:06:19.311 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-10T10:06:19.316 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-10T10:06:19.317 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T10:06:19.336 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua5.1.
2026-03-10T10:06:19.341 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-10T10:06:19.342 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-10T10:06:19.360 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-any.
2026-03-10T10:06:19.365 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-10T10:06:19.366 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-10T10:06:19.378 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package zip.
2026-03-10T10:06:19.384 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-10T10:06:19.384 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking zip (3.0-12build2) ...
2026-03-10T10:06:19.401 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package unzip.
2026-03-10T10:06:19.406 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-10T10:06:19.407 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-10T10:06:19.425 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package luarocks.
2026-03-10T10:06:19.430 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-10T10:06:19.431 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-10T10:06:19.477 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package librgw2.
2026-03-10T10:06:19.482 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:19.483 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:19.594 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rgw.
2026-03-10T10:06:19.599 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:19.599 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:19.614 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-10T10:06:19.618 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-10T10:06:19.619 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T10:06:19.632 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libradosstriper1.
2026-03-10T10:06:19.637 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:19.638 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:19.658 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-common.
2026-03-10T10:06:19.663 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:19.663 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:20.041 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-base.
2026-03-10T10:06:20.046 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:20.050 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:20.149 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-10T10:06:20.154 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-10T10:06:20.154 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-10T10:06:20.168 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cheroot.
2026-03-10T10:06:20.173 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-10T10:06:20.174 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T10:06:20.193 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-10T10:06:20.198 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-10T10:06:20.199 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-10T10:06:20.213 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-10T10:06:20.218 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-10T10:06:20.219 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-10T10:06:20.234 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-10T10:06:20.238 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-10T10:06:20.239 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-10T10:06:20.253 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-tempora.
2026-03-10T10:06:20.257 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-10T10:06:20.258 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-10T10:06:20.273 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-portend.
2026-03-10T10:06:20.278 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-10T10:06:20.278 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-10T10:06:20.292 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-10T10:06:20.297 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-10T10:06:20.298 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-10T10:06:20.312 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-10T10:06:20.317 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-10T10:06:20.317 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-10T10:06:20.345 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-natsort.
2026-03-10T10:06:20.350 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-10T10:06:20.350 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-10T10:06:20.366 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-logutils.
2026-03-10T10:06:20.370 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-10T10:06:20.371 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-10T10:06:20.384 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-mako.
2026-03-10T10:06:20.388 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-10T10:06:20.389 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T10:06:20.406 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-10T10:06:20.411 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-10T10:06:20.412 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-10T10:06:20.425 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-10T10:06:20.430 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-10T10:06:20.446 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-10T10:06:20.460 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-webob.
2026-03-10T10:06:20.464 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-10T10:06:20.465 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T10:06:20.483 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-waitress.
2026-03-10T10:06:20.488 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-10T10:06:20.489 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-10T10:06:20.505 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-tempita.
2026-03-10T10:06:20.510 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-10T10:06:20.511 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T10:06:20.525 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-paste.
2026-03-10T10:06:20.529 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-10T10:06:20.530 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T10:06:20.562 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-10T10:06:20.566 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-10T10:06:20.567 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T10:06:20.580 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-10T10:06:20.585 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-10T10:06:20.586 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-10T10:06:20.600 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-webtest.
2026-03-10T10:06:20.606 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-10T10:06:20.606 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-10T10:06:20.621 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pecan.
2026-03-10T10:06:20.626 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-10T10:06:20.627 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T10:06:20.657 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-10T10:06:20.662 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-10T10:06:20.662 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T10:06:20.684 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-10T10:06:20.689 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:20.690 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:20.725 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-10T10:06:20.730 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:20.731 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:20.746 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr.
2026-03-10T10:06:20.751 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:20.752 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:20.780 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mon.
2026-03-10T10:06:20.784 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:20.785 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:20.875 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-10T10:06:20.880 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-10T10:06:20.881 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T10:06:20.897 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-osd.
2026-03-10T10:06:20.902 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:20.902 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:21.193 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph.
2026-03-10T10:06:21.198 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:21.199 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:21.213 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-fuse.
2026-03-10T10:06:21.217 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:21.218 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:21.249 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mds.
2026-03-10T10:06:21.254 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:21.255 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:21.302 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package cephadm.
2026-03-10T10:06:21.308 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:21.309 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:21.326 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-10T10:06:21.331 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-10T10:06:21.332 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T10:06:21.357 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-10T10:06:21.362 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:21.363 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:21.385 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-10T10:06:21.389 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-10T10:06:21.390 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-10T10:06:21.404 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-routes.
2026-03-10T10:06:21.409 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-10T10:06:21.410 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T10:06:21.433 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-10T10:06:21.438 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:21.439 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:21.823 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-10T10:06:21.825 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-10T10:06:21.826 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T10:06:21.887 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-joblib.
2026-03-10T10:06:21.891 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-10T10:06:21.892 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T10:06:21.926 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-10T10:06:21.932 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-10T10:06:21.932 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-10T10:06:21.948 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-sklearn.
2026-03-10T10:06:21.953 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-10T10:06:21.954 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T10:06:22.078 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-10T10:06:22.085 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:22.085 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:22.360 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cachetools.
2026-03-10T10:06:22.365 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-10T10:06:22.366 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-10T10:06:22.381 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rsa.
2026-03-10T10:06:22.386 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ...
2026-03-10T10:06:22.387 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-10T10:06:22.405 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-google-auth.
2026-03-10T10:06:22.410 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ...
2026-03-10T10:06:22.411 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-10T10:06:22.429 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-10T10:06:22.434 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-10T10:06:22.435 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T10:06:22.454 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-websocket.
2026-03-10T10:06:22.459 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ...
2026-03-10T10:06:22.460 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-10T10:06:22.483 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-10T10:06:22.487 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-10T10:06:22.500 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T10:06:22.648 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-10T10:06:22.654 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:22.654 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:22.669 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-10T10:06:22.674 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-10T10:06:22.675 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T10:06:22.692 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-10T10:06:22.697 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-10T10:06:22.698 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T10:06:22.713 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package jq.
2026-03-10T10:06:22.718 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-10T10:06:22.718 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-10T10:06:22.732 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package socat.
2026-03-10T10:06:22.737 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-10T10:06:22.738 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-10T10:06:22.761 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package xmlstarlet.
2026-03-10T10:06:22.766 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-10T10:06:22.767 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-10T10:06:22.811 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-test.
2026-03-10T10:06:22.816 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:22.817 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:23.638 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-volume.
2026-03-10T10:06:23.643 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:23.644 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:23.671 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-10T10:06:23.676 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:23.677 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:23.692 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-10T10:06:23.697 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-10T10:06:23.698 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T10:06:23.721 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-10T10:06:23.726 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-10T10:06:23.727 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-10T10:06:23.745 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package nvme-cli.
2026-03-10T10:06:23.750 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-10T10:06:23.751 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T10:06:23.791 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package pkg-config.
2026-03-10T10:06:23.796 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-10T10:06:23.797 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T10:06:23.812 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-10T10:06:23.817 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-10T10:06:23.819 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T10:06:23.865 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-10T10:06:23.870 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-10T10:06:23.871 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-10T10:06:23.884 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pastescript.
2026-03-10T10:06:23.889 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-10T10:06:23.890 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-10T10:06:23.909 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pluggy.
2026-03-10T10:06:23.914 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-10T10:06:23.915 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-10T10:06:23.930 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-psutil.
2026-03-10T10:06:23.935 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-10T10:06:23.936 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-10T10:06:23.955 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-py.
2026-03-10T10:06:23.960 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-10T10:06:23.961 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-10T10:06:23.981 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pygments.
2026-03-10T10:06:23.987 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-10T10:06:23.988 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-10T10:06:24.046 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-10T10:06:24.051 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-10T10:06:24.052 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-10T10:06:24.066 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-toml.
2026-03-10T10:06:24.071 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-10T10:06:24.072 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-10T10:06:24.087 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pytest.
2026-03-10T10:06:24.091 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-10T10:06:24.093 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-10T10:06:24.117 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-simplejson.
2026-03-10T10:06:24.120 INFO:teuthology.orchestra.run.vm07.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-10T10:06:24.123 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-10T10:06:24.124 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-10T10:06:24.141 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-10T10:06:24.147 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-10T10:06:24.149 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-10T10:06:24.253 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package radosgw.
2026-03-10T10:06:24.259 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:24.259 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:24.459 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package rbd-fuse.
2026-03-10T10:06:24.460 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:24.461 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:24.478 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package smartmontools.
2026-03-10T10:06:24.483 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-10T10:06:24.491 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T10:06:24.540 INFO:teuthology.orchestra.run.vm04.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T10:06:24.771 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-10T10:06:24.771 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-10T10:06:25.108 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-10T10:06:25.170 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T10:06:25.172 INFO:teuthology.orchestra.run.vm04.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T10:06:25.234 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T10:06:25.457 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-10T10:06:25.657 INFO:teuthology.orchestra.run.vm07.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-10T10:06:25.666 INFO:teuthology.orchestra.run.vm07.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-10T10:06:25.706 INFO:teuthology.orchestra.run.vm07.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-10T10:06:25.816 INFO:teuthology.orchestra.run.vm04.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-10T10:06:25.821 INFO:teuthology.orchestra.run.vm04.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-10T10:06:25.824 INFO:teuthology.orchestra.run.vm04.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:25.861 INFO:teuthology.orchestra.run.vm04.stdout:Adding system user cephadm....done
2026-03-10T10:06:25.869 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-10T10:06:25.937 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-10T10:06:25.994 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T10:06:25.996 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-10T10:06:26.022 INFO:teuthology.orchestra.run.vm07.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-10T10:06:26.054 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-10T10:06:26.114 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T10:06:26.116 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-10T10:06:26.197 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T10:06:26.309 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-10T10:06:26.369 INFO:teuthology.orchestra.run.vm04.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-10T10:06:26.377 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-10T10:06:26.438 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-10T10:06:26.498 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:26.560 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T10:06:26.562 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-10T10:06:26.565 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T10:06:26.567 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T10:06:26.569 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T10:06:26.571 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-10T10:06:26.575 INFO:teuthology.orchestra.run.vm04.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-10T10:06:26.577 INFO:teuthology.orchestra.run.vm04.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-10T10:06:26.578 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T10:06:26.582 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-10T10:06:26.692 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-10T10:06:26.755 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T10:06:26.820 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-10T10:06:26.891 INFO:teuthology.orchestra.run.vm04.stdout:Setting up zip (3.0-12build2) ...
2026-03-10T10:06:26.894 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-10T10:06:27.155 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T10:06:27.218 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T10:06:27.220 INFO:teuthology.orchestra.run.vm04.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-10T10:06:27.222 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T10:06:27.306 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T10:06:27.430 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T10:06:27.546 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T10:06:27.565 INFO:teuthology.orchestra.run.vm07.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-10T10:06:27.628 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T10:06:27.731 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-10T10:06:27.789 INFO:teuthology.orchestra.run.vm04.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-10T10:06:27.792 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:27.876 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T10:06:28.397 INFO:teuthology.orchestra.run.vm04.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T10:06:28.417 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:28.422 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-10T10:06:28.486 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T10:06:28.489 INFO:teuthology.orchestra.run.vm04.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-10T10:06:28.491 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-10T10:06:28.555 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-10T10:06:28.613 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:28.616 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-10T10:06:28.680 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-10T10:06:28.740 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-10T10:06:28.802 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-10T10:06:28.864 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-10T10:06:28.924 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-10T10:06:28.995 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T10:06:28.997 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-10T10:06:29.070 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T10:06:29.072 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T10:06:29.137 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T10:06:29.214 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T10:06:29.297 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-10T10:06:29.365 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T10:06:29.367 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-10T10:06:29.369 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T10:06:29.372 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-10T10:06:29.498 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-10T10:06:29.561 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-10T10:06:29.563 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-10T10:06:29.622 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:29.624 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-10T10:06:29.694 INFO:teuthology.orchestra.run.vm04.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-10T10:06:29.696 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-10T10:06:29.764 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-10T10:06:29.887 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-10T10:06:29.965 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T10:06:30.069 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T10:06:30.071 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:30.073 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:30.075 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T10:06:30.616 INFO:teuthology.orchestra.run.vm04.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-10T10:06:30.622 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:30.624 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:30.626 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:30.628 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:30.630 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:30.686 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-10T10:06:30.686 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-10T10:06:31.034 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:31.036 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:31.039 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:31.041 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:31.043 INFO:teuthology.orchestra.run.vm04.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:31.045 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:31.047 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:31.049 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:31.082 INFO:teuthology.orchestra.run.vm04.stdout:Adding group ceph....done
2026-03-10T10:06:31.117 INFO:teuthology.orchestra.run.vm04.stdout:Adding system user ceph....done
2026-03-10T10:06:31.124 INFO:teuthology.orchestra.run.vm04.stdout:Setting system user ceph properties....done
2026-03-10T10:06:31.128 INFO:teuthology.orchestra.run.vm04.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-10T10:06:31.189 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-10T10:06:31.381 INFO:teuthology.orchestra.run.vm07.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-10T10:06:31.381 INFO:teuthology.orchestra.run.vm07.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-10T10:06:31.392 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-10T10:06:31.502 INFO:teuthology.orchestra.run.vm07.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-10T10:06:31.699 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:31.701 INFO:teuthology.orchestra.run.vm04.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:31.744 INFO:teuthology.orchestra.run.vm07.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-10T10:06:31.861 INFO:teuthology.orchestra.run.vm07.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-10T10:06:31.862 INFO:teuthology.orchestra.run.vm07.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-10T10:06:31.911 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-10T10:06:31.911 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-10T10:06:32.102 INFO:teuthology.orchestra.run.vm07.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB]
2026-03-10T10:06:32.257 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:32.335 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-10T10:06:32.681 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:32.749 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-10T10:06:32.750 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-10T10:06:33.106 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:33.164 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-10T10:06:33.164 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-10T10:06:33.282 INFO:teuthology.orchestra.run.vm07.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB]
2026-03-10T10:06:33.282 INFO:teuthology.orchestra.run.vm07.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB]
2026-03-10T10:06:33.538 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:33.612 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-10T10:06:33.612 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-10T10:06:33.948 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:33.950 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:33.963 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:34.018 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-10T10:06:34.018 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-10T10:06:34.363 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:34.376 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:34.378 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:34.390 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:34.504 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-10T10:06:34.511 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T10:06:34.525 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T10:06:34.599 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-10T10:06:34.902 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:34.902 INFO:teuthology.orchestra.run.vm04.stdout:Running kernel seems to be up-to-date.
2026-03-10T10:06:34.902 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:34.902 INFO:teuthology.orchestra.run.vm04.stdout:Services to be restarted:
2026-03-10T10:06:34.907 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart packagekit.service
2026-03-10T10:06:34.910 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:34.910 INFO:teuthology.orchestra.run.vm04.stdout:Service restarts being deferred:
2026-03-10T10:06:34.910 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart networkd-dispatcher.service
2026-03-10T10:06:34.910 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart unattended-upgrades.service
2026-03-10T10:06:34.910 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:34.910 INFO:teuthology.orchestra.run.vm04.stdout:No containers need to be restarted.
2026-03-10T10:06:34.910 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:34.910 INFO:teuthology.orchestra.run.vm04.stdout:No user sessions are running outdated binaries.
2026-03-10T10:06:34.910 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:34.910 INFO:teuthology.orchestra.run.vm04.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-10T10:06:35.612 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:06:35.614 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath
2026-03-10T10:06:35.685 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:06:35.821 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-10T10:06:35.821 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-10T10:06:35.921 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:06:35.921 INFO:teuthology.orchestra.run.vm04.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T10:06:35.922 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T10:06:35.922 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:06:35.932 INFO:teuthology.orchestra.run.vm04.stdout:The following NEW packages will be installed:
2026-03-10T10:06:35.932 INFO:teuthology.orchestra.run.vm04.stdout: python3-jmespath python3-xmltodict
2026-03-10T10:06:36.016 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:06:36.016 INFO:teuthology.orchestra.run.vm04.stdout:Need to get 34.3 kB of archives.
2026-03-10T10:06:36.016 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 146 kB of additional disk space will be used.
2026-03-10T10:06:36.016 INFO:teuthology.orchestra.run.vm04.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-10T10:06:36.032 INFO:teuthology.orchestra.run.vm04.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-10T10:06:36.196 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 34.3 kB in 0s (350 kB/s)
2026-03-10T10:06:36.210 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jmespath.
2026-03-10T10:06:36.234 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.)
2026-03-10T10:06:36.235 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-10T10:06:36.236 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-10T10:06:36.250 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-10T10:06:36.255 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-10T10:06:36.256 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-10T10:06:36.281 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-10T10:06:36.340 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-10T10:06:36.652 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:36.652 INFO:teuthology.orchestra.run.vm04.stdout:Running kernel seems to be up-to-date.
2026-03-10T10:06:36.653 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:36.653 INFO:teuthology.orchestra.run.vm04.stdout:Services to be restarted:
2026-03-10T10:06:36.658 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart packagekit.service
2026-03-10T10:06:36.661 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:36.661 INFO:teuthology.orchestra.run.vm04.stdout:Service restarts being deferred:
2026-03-10T10:06:36.661 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart networkd-dispatcher.service
2026-03-10T10:06:36.661 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart unattended-upgrades.service
2026-03-10T10:06:36.661 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:36.661 INFO:teuthology.orchestra.run.vm04.stdout:No containers need to be restarted.
2026-03-10T10:06:36.661 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:36.662 INFO:teuthology.orchestra.run.vm04.stdout:No user sessions are running outdated binaries.
2026-03-10T10:06:36.662 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:06:36.662 INFO:teuthology.orchestra.run.vm04.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-10T10:06:37.298 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:06:37.301 DEBUG:teuthology.parallel:result is None
2026-03-10T10:06:43.204 INFO:teuthology.orchestra.run.vm07.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-10T10:06:43.250 INFO:teuthology.orchestra.run.vm07.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-10T10:06:43.278 INFO:teuthology.orchestra.run.vm07.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-10T10:06:46.241 INFO:teuthology.orchestra.run.vm07.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-10T10:06:46.509 INFO:teuthology.orchestra.run.vm07.stdout:Fetched 178 MB in 36s (4941 kB/s)
2026-03-10T10:06:46.626 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-10T10:06:46.650 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.)
2026-03-10T10:06:46.651 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-10T10:06:46.653 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T10:06:46.672 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-10T10:06:46.676 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-10T10:06:46.677 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T10:06:46.690 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-10T10:06:46.694 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-10T10:06:46.695 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T10:06:46.713 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-10T10:06:46.717 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T10:06:46.720 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:46.757 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-10T10:06:46.762 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T10:06:46.762 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:46.779 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-10T10:06:46.783 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T10:06:46.784 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:46.806 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-10T10:06:46.810 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-10T10:06:46.811 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T10:06:46.832 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:46.834 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T10:06:46.903 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:46.905 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T10:06:46.967 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libnbd0.
2026-03-10T10:06:46.971 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-10T10:06:46.972 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-10T10:06:46.984 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libcephfs2.
2026-03-10T10:06:46.989 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:46.990 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.013 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rados.
2026-03-10T10:06:47.018 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:47.019 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.036 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-10T10:06:47.041 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:47.041 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.053 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cephfs.
2026-03-10T10:06:47.057 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:47.058 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.072 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-10T10:06:47.077 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:47.077 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.094 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-10T10:06:47.098 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-10T10:06:47.099 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T10:06:47.114 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-prettytable.
2026-03-10T10:06:47.118 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-10T10:06:47.119 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-10T10:06:47.131 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rbd.
2026-03-10T10:06:47.135 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:47.136 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.153 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-10T10:06:47.157 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-10T10:06:47.158 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T10:06:47.176 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-10T10:06:47.181 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-10T10:06:47.181 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T10:06:47.197 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-10T10:06:47.201 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-10T10:06:47.202 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T10:06:47.220 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua5.1.
2026-03-10T10:06:47.224 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-10T10:06:47.225 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-10T10:06:47.241 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua-any.
2026-03-10T10:06:47.245 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-10T10:06:47.246 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-10T10:06:47.257 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package zip.
2026-03-10T10:06:47.261 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-10T10:06:47.262 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking zip (3.0-12build2) ...
2026-03-10T10:06:47.277 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package unzip.
2026-03-10T10:06:47.281 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-10T10:06:47.281 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-10T10:06:47.297 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package luarocks.
2026-03-10T10:06:47.301 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-10T10:06:47.302 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-10T10:06:47.344 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package librgw2.
2026-03-10T10:06:47.349 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:47.350 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.455 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rgw.
2026-03-10T10:06:47.460 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:47.461 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.475 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-10T10:06:47.480 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-10T10:06:47.481 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T10:06:47.493 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libradosstriper1.
2026-03-10T10:06:47.498 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:47.498 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.518 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-common.
2026-03-10T10:06:47.522 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:47.523 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.893 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-base.
2026-03-10T10:06:47.898 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:47.901 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:47.996 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-10T10:06:48.001 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-10T10:06:48.002 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-10T10:06:48.014 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cheroot.
2026-03-10T10:06:48.019 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-10T10:06:48.020 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T10:06:48.036 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-10T10:06:48.041 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-10T10:06:48.041 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-10T10:06:48.054 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-10T10:06:48.059 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-10T10:06:48.059 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-10T10:06:48.072 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-10T10:06:48.077 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-10T10:06:48.078 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-10T10:06:48.090 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-tempora.
2026-03-10T10:06:48.095 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-10T10:06:48.096 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-10T10:06:48.110 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-portend.
2026-03-10T10:06:48.115 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-10T10:06:48.116 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-10T10:06:48.128 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-10T10:06:48.133 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-10T10:06:48.134 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-10T10:06:48.150 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-10T10:06:48.156 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-10T10:06:48.156 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-10T10:06:48.186 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-natsort.
2026-03-10T10:06:48.192 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-10T10:06:48.193 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-10T10:06:48.210 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-logutils.
2026-03-10T10:06:48.215 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-10T10:06:48.216 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-10T10:06:48.232 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-mako.
2026-03-10T10:06:48.238 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-10T10:06:48.239 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T10:06:48.259 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-10T10:06:48.264 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-10T10:06:48.264 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-10T10:06:48.277 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-10T10:06:48.282 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-10T10:06:48.283 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-10T10:06:48.296 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-webob.
2026-03-10T10:06:48.301 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-10T10:06:48.301 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T10:06:48.318 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-waitress.
2026-03-10T10:06:48.322 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-10T10:06:48.324 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-10T10:06:48.338 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-tempita.
2026-03-10T10:06:48.343 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-10T10:06:48.343 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T10:06:48.356 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-paste.
2026-03-10T10:06:48.361 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-10T10:06:48.361 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T10:06:48.391 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-10T10:06:48.396 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-10T10:06:48.396 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T10:06:48.409 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-10T10:06:48.413 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-10T10:06:48.414 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-10T10:06:48.429 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-webtest.
2026-03-10T10:06:48.433 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-10T10:06:48.434 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-10T10:06:48.448 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pecan.
2026-03-10T10:06:48.453 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-10T10:06:48.454 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T10:06:48.483 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-10T10:06:48.488 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-10T10:06:48.488 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T10:06:48.509 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-10T10:06:48.514 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:48.515 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:48.550 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-10T10:06:48.555 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:48.555 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:48.570 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr.
2026-03-10T10:06:48.574 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:48.575 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:48.603 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mon.
2026-03-10T10:06:48.608 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:48.608 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:48.687 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-10T10:06:48.692 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-10T10:06:48.693 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T10:06:48.709 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-osd.
2026-03-10T10:06:48.713 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:48.714 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:48.985 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph.
2026-03-10T10:06:48.990 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:48.991 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:49.004 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-fuse.
2026-03-10T10:06:49.008 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:49.009 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:49.037 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mds.
2026-03-10T10:06:49.042 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:49.043 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:49.085 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package cephadm.
2026-03-10T10:06:49.090 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:49.091 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:49.106 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-10T10:06:49.111 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-10T10:06:49.112 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T10:06:49.135 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-10T10:06:49.140 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:49.140 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:49.162 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-10T10:06:49.166 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-10T10:06:49.167 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-10T10:06:49.180 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-routes.
2026-03-10T10:06:49.185 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-10T10:06:49.186 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T10:06:49.206 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-10T10:06:49.211 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:49.211 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:49.569 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-10T10:06:49.574 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-10T10:06:49.575 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T10:06:49.630 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-joblib.
2026-03-10T10:06:49.634 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-10T10:06:49.635 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T10:06:49.665 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-10T10:06:49.670 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-10T10:06:49.671 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-10T10:06:49.684 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-sklearn.
2026-03-10T10:06:49.689 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-10T10:06:49.689 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T10:06:49.808 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-10T10:06:49.813 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:49.814 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:50.075 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cachetools.
2026-03-10T10:06:50.080 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-10T10:06:50.081 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-10T10:06:50.093 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rsa.
2026-03-10T10:06:50.097 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ...
2026-03-10T10:06:50.098 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-10T10:06:50.114 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-google-auth.
2026-03-10T10:06:50.118 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ...
2026-03-10T10:06:50.119 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-10T10:06:50.135 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-10T10:06:50.140 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-10T10:06:50.140 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T10:06:50.154 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-websocket.
2026-03-10T10:06:50.159 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ...
2026-03-10T10:06:50.160 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-10T10:06:50.176 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-10T10:06:50.181 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-10T10:06:50.193 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T10:06:50.334 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-10T10:06:50.339 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:50.339 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:50.352 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-10T10:06:50.357 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-10T10:06:50.358 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T10:06:50.373 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-10T10:06:50.378 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-10T10:06:50.379 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T10:06:50.391 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package jq.
2026-03-10T10:06:50.396 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-10T10:06:50.397 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-10T10:06:50.408 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package socat.
2026-03-10T10:06:50.413 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-10T10:06:50.414 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-10T10:06:50.435 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package xmlstarlet.
2026-03-10T10:06:50.440 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-10T10:06:50.441 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-10T10:06:50.482 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-test.
2026-03-10T10:06:50.486 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:50.487 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:51.244 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-volume.
2026-03-10T10:06:51.249 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T10:06:51.250 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:51.274 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-10T10:06:51.279 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:51.280 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:51.293 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-10T10:06:51.298 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-10T10:06:51.298 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T10:06:51.320 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-10T10:06:51.325 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-10T10:06:51.326 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-10T10:06:51.342 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package nvme-cli.
2026-03-10T10:06:51.347 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-10T10:06:51.347 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T10:06:51.383 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package pkg-config.
2026-03-10T10:06:51.388 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-10T10:06:51.389 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T10:06:51.436 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-10T10:06:51.441 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-10T10:06:51.442 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T10:06:51.487 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-10T10:06:51.487 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-10T10:06:51.488 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-10T10:06:51.501 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pastescript.
2026-03-10T10:06:51.506 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-10T10:06:51.506 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-10T10:06:51.524 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pluggy.
2026-03-10T10:06:51.528 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-10T10:06:51.530 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-10T10:06:51.544 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-psutil.
2026-03-10T10:06:51.549 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-10T10:06:51.550 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-10T10:06:51.569 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-py.
2026-03-10T10:06:51.574 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-10T10:06:51.575 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-10T10:06:51.595 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pygments.
2026-03-10T10:06:51.600 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-10T10:06:51.601 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-10T10:06:51.655 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-10T10:06:51.660 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-10T10:06:51.661 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-10T10:06:51.675 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-toml.
2026-03-10T10:06:51.680 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-10T10:06:51.681 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-10T10:06:51.694 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pytest.
2026-03-10T10:06:51.699 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-10T10:06:51.699 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-10T10:06:51.724 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-simplejson.
2026-03-10T10:06:51.728 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-10T10:06:51.787 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-10T10:06:51.808 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-10T10:06:51.813 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-10T10:06:51.814 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-10T10:06:51.919 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package radosgw.
2026-03-10T10:06:51.924 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:51.925 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:52.110 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package rbd-fuse.
2026-03-10T10:06:52.115 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T10:06:52.116 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:52.130 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package smartmontools.
2026-03-10T10:06:52.135 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-10T10:06:52.141 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T10:06:52.180 INFO:teuthology.orchestra.run.vm07.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T10:06:52.438 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-10T10:06:52.438 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-10T10:06:52.789 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-10T10:06:52.846 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T10:06:52.848 INFO:teuthology.orchestra.run.vm07.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T10:06:52.907 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T10:06:53.115 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-10T10:06:53.428 INFO:teuthology.orchestra.run.vm07.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-10T10:06:53.433 INFO:teuthology.orchestra.run.vm07.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-10T10:06:53.435 INFO:teuthology.orchestra.run.vm07.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:53.471 INFO:teuthology.orchestra.run.vm07.stdout:Adding system user cephadm....done
2026-03-10T10:06:53.479 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-10T10:06:53.544 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-10T10:06:53.601 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T10:06:53.603 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-10T10:06:53.660 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-10T10:06:53.721 INFO:teuthology.orchestra.run.vm07.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T10:06:53.723 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-10T10:06:53.804 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T10:06:53.918 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-10T10:06:53.981 INFO:teuthology.orchestra.run.vm07.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-10T10:06:53.988 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-10T10:06:54.052 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-10T10:06:54.115 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:54.177 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T10:06:54.179 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-10T10:06:54.181 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T10:06:54.184 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T10:06:54.186 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T10:06:54.188 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-10T10:06:54.192 INFO:teuthology.orchestra.run.vm07.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-10T10:06:54.194 INFO:teuthology.orchestra.run.vm07.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-10T10:06:54.195 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T10:06:54.198 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-10T10:06:54.310 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-10T10:06:54.375 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T10:06:54.444 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-10T10:06:54.516 INFO:teuthology.orchestra.run.vm07.stdout:Setting up zip (3.0-12build2) ...
2026-03-10T10:06:54.518 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-10T10:06:54.770 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T10:06:54.833 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T10:06:54.835 INFO:teuthology.orchestra.run.vm07.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-10T10:06:54.838 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T10:06:54.921 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T10:06:55.043 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T10:06:55.161 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T10:06:55.238 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T10:06:55.341 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-10T10:06:55.398 INFO:teuthology.orchestra.run.vm07.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-10T10:06:55.400 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:55.487 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T10:06:55.996 INFO:teuthology.orchestra.run.vm07.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T10:06:56.015 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:56.019 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-10T10:06:56.081 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T10:06:56.084 INFO:teuthology.orchestra.run.vm07.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-10T10:06:56.086 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-10T10:06:56.150 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-10T10:06:56.212 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:56.214 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-10T10:06:56.279 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-10T10:06:56.337 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-10T10:06:56.399 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-10T10:06:56.458 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-10T10:06:56.516 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-10T10:06:56.579 INFO:teuthology.orchestra.run.vm07.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T10:06:56.582 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-10T10:06:56.651 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T10:06:56.653 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T10:06:56.715 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T10:06:56.791 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T10:06:56.872 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-10T10:06:56.932 INFO:teuthology.orchestra.run.vm07.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T10:06:56.933 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-10T10:06:56.936 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T10:06:56.938 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-10T10:06:57.070 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-10T10:06:57.134 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-10T10:06:57.137 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-10T10:06:57.196 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T10:06:57.198 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-10T10:06:57.267 INFO:teuthology.orchestra.run.vm07.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-10T10:06:57.269 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-10T10:06:57.334 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-10T10:06:57.451 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-10T10:06:57.525 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T10:06:57.625 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T10:06:57.627 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:57.629 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:57.631 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T10:06:58.159 INFO:teuthology.orchestra.run.vm07.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-10T10:06:58.166 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.169 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.171 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.173 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.175 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.229 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-10T10:06:58.229 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-10T10:06:58.557 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.559 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.562 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.564 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.566 INFO:teuthology.orchestra.run.vm07.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.568 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.571 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.573 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:58.602 INFO:teuthology.orchestra.run.vm07.stdout:Adding group ceph....done
2026-03-10T10:06:58.632 INFO:teuthology.orchestra.run.vm07.stdout:Adding system user ceph....done
2026-03-10T10:06:58.639 INFO:teuthology.orchestra.run.vm07.stdout:Setting system user ceph properties....done
2026-03-10T10:06:58.643 INFO:teuthology.orchestra.run.vm07.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-10T10:06:58.701 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-10T10:06:58.934 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-10T10:06:59.255 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:59.258 INFO:teuthology.orchestra.run.vm07.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:59.452 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-10T10:06:59.452 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-10T10:06:59.819 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:06:59.895 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-10T10:07:00.229 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:07:00.291 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-10T10:07:00.291 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-10T10:07:00.637 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:07:00.695 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-10T10:07:00.695 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-10T10:07:01.062 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:07:01.131 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-10T10:07:01.131 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-10T10:07:01.500 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:07:01.502 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:07:01.513 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:07:01.567 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-10T10:07:01.567 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-10T10:07:01.946 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:07:01.957 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:07:01.960 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:07:01.971 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:07:02.078 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-10T10:07:02.085 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T10:07:02.099 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T10:07:02.172 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-10T10:07:02.477 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T10:07:02.477 INFO:teuthology.orchestra.run.vm07.stdout:Running kernel seems to be up-to-date.
2026-03-10T10:07:02.477 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T10:07:02.477 INFO:teuthology.orchestra.run.vm07.stdout:Services to be restarted:
2026-03-10T10:07:02.483 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart packagekit.service
2026-03-10T10:07:02.485 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T10:07:02.485 INFO:teuthology.orchestra.run.vm07.stdout:Service restarts being deferred:
2026-03-10T10:07:02.485 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart networkd-dispatcher.service
2026-03-10T10:07:02.485 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart unattended-upgrades.service
2026-03-10T10:07:02.485 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T10:07:02.486 INFO:teuthology.orchestra.run.vm07.stdout:No containers need to be restarted.
2026-03-10T10:07:02.486 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T10:07:02.486 INFO:teuthology.orchestra.run.vm07.stdout:No user sessions are running outdated binaries.
2026-03-10T10:07:02.486 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T10:07:02.486 INFO:teuthology.orchestra.run.vm07.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-10T10:07:03.091 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:07:03.093 DEBUG:teuthology.orchestra.run.vm07:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath
2026-03-10T10:07:03.165 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T10:07:03.290 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:07:03.290 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:07:03.387 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:07:03.388 INFO:teuthology.orchestra.run.vm07.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T10:07:03.388 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T10:07:03.388 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:07:03.398 INFO:teuthology.orchestra.run.vm07.stdout:The following NEW packages will be installed:
2026-03-10T10:07:03.398 INFO:teuthology.orchestra.run.vm07.stdout: python3-jmespath python3-xmltodict
2026-03-10T10:07:03.480 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:07:03.481 INFO:teuthology.orchestra.run.vm07.stdout:Need to get 34.3 kB of archives.
2026-03-10T10:07:03.481 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 146 kB of additional disk space will be used.
2026-03-10T10:07:03.481 INFO:teuthology.orchestra.run.vm07.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-10T10:07:03.496 INFO:teuthology.orchestra.run.vm07.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-10T10:07:03.657 INFO:teuthology.orchestra.run.vm07.stdout:Fetched 34.3 kB in 0s (357 kB/s)
2026-03-10T10:07:03.683 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jmespath.
2026-03-10T10:07:03.704 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118577 files and directories currently installed.) 2026-03-10T10:07:03.705 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-10T10:07:03.706 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-10T10:07:03.720 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-10T10:07:03.725 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-10T10:07:03.725 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-10T10:07:03.748 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-10T10:07:03.806 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-10T10:07:04.108 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T10:07:04.109 INFO:teuthology.orchestra.run.vm07.stdout:Running kernel seems to be up-to-date. 2026-03-10T10:07:04.109 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T10:07:04.109 INFO:teuthology.orchestra.run.vm07.stdout:Services to be restarted: 2026-03-10T10:07:04.113 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart packagekit.service 2026-03-10T10:07:04.116 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T10:07:04.116 INFO:teuthology.orchestra.run.vm07.stdout:Service restarts being deferred: 2026-03-10T10:07:04.116 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T10:07:04.116 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart unattended-upgrades.service 2026-03-10T10:07:04.116 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T10:07:04.116 INFO:teuthology.orchestra.run.vm07.stdout:No containers need to be restarted. 2026-03-10T10:07:04.116 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T10:07:04.116 INFO:teuthology.orchestra.run.vm07.stdout:No user sessions are running outdated binaries. 2026-03-10T10:07:04.116 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T10:07:04.116 INFO:teuthology.orchestra.run.vm07.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T10:07:04.728 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T10:07:04.731 DEBUG:teuthology.parallel:result is None 2026-03-10T10:07:04.731 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:07:05.433 DEBUG:teuthology.orchestra.run.vm04:> dpkg-query -W -f '${Version}' ceph 2026-03-10T10:07:05.441 INFO:teuthology.orchestra.run.vm04.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T10:07:05.442 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T10:07:05.442 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-10T10:07:05.443 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:07:06.092 DEBUG:teuthology.orchestra.run.vm07:> dpkg-query -W -f '${Version}' ceph 2026-03-10T10:07:06.100 INFO:teuthology.orchestra.run.vm07.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T10:07:06.100 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T10:07:06.100 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-10T10:07:06.101 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T10:07:06.101 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:07:06.101 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T10:07:06.108 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T10:07:06.108 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T10:07:06.149 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-10T10:07:06.150 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:07:06.150 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T10:07:06.158 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T10:07:06.206 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T10:07:06.206 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T10:07:06.213 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T10:07:06.261 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-10T10:07:06.262 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:07:06.262 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T10:07:06.268 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T10:07:06.318 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T10:07:06.318 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T10:07:06.325 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T10:07:06.373 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 
2026-03-10T10:07:06.374 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:07:06.374 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T10:07:06.380 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T10:07:06.431 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T10:07:06.431 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T10:07:06.438 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T10:07:06.486 INFO:teuthology.run_tasks:Running task cephadm... 2026-03-10T10:07:06.527 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'client': {'debug ms': 1}, 'global': {'mon election default strategy': 3, 'ms bind msgr1': False, 'ms bind msgr2': True, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon warn on pool no app': False}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd class default list': '*', 'osd class load list': '*', 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'reached quota', 'but it is still running', 'overall HEALTH_', '\\(POOL_FULL\\)', '\\(SMALLER_PGP_NUM\\)', '\\(CACHE_POOL_NO_HIT_SET\\)', '\\(CACHE_POOL_NEAR_FULL\\)', '\\(POOL_APP_NOT_ENABLED\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', 'CEPHADM_STRAY_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'root'} 2026-03-10T10:07:06.527 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:07:06.527 INFO:tasks.cephadm:Cluster fsid is e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:07:06.527 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-10T10:07:06.527 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.104', 'mon.c': '[v2:192.168.123.104:3301,v1:192.168.123.104:6790]', 'mon.b': '192.168.123.107'} 2026-03-10T10:07:06.527 INFO:tasks.cephadm:First mon is mon.a on vm04 2026-03-10T10:07:06.527 INFO:tasks.cephadm:First mgr is y 2026-03-10T10:07:06.527 INFO:tasks.cephadm:Normalizing hostnames... 
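Two small patterns repeat across both hosts in the install steps above: the task confirms the installed package version with dpkg-query, and the helper scripts (daemon-helper, adjust-ulimits, stdin-killer) are shipped by piping their contents through `sudo dd` and then marking them executable; plain shell redirection would run unprivileged, which is presumably why dd is used. A minimal sketch of both steps, with the expected version taken from the log and a hypothetical helper body:

    import subprocess

    EXPECTED = "19.2.3-678-ge911bdeb-1jammy"  # version reported in the log

    # Confirm the installed ceph package matches the build under test.
    installed = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Version}", "ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    assert installed == EXPECTED, f"unexpected ceph version: {installed}"

    # Ship a helper by writing stdin through `sudo dd`: redirection in the
    # calling shell would not be privileged, dd's of= is. Body is hypothetical.
    script = b'#!/bin/sh\nexec "$@"\n'
    subprocess.run(["sudo", "dd", "of=/usr/bin/daemon-helper"],
                   input=script, check=True)
    subprocess.run(["sudo", "chmod", "a=rx", "--", "/usr/bin/daemon-helper"],
                   check=True)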
2026-03-10T10:07:06.528 DEBUG:teuthology.orchestra.run.vm04:> sudo hostname $(hostname -s) 2026-03-10T10:07:06.535 DEBUG:teuthology.orchestra.run.vm07:> sudo hostname $(hostname -s) 2026-03-10T10:07:06.543 INFO:tasks.cephadm:Downloading "compiled" cephadm from chacra 2026-03-10T10:07:06.543 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:07:07.146 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-10T10:07:07.737 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:07:07.738 INFO:tasks.cephadm:Discovered chacra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-10T10:07:07.738 INFO:tasks.cephadm:Downloading cephadm from url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-10T10:07:07.738 DEBUG:teuthology.orchestra.run.vm04:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T10:07:10.028 INFO:teuthology.orchestra.run.vm04.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 10:07 /home/ubuntu/cephtest/cephadm 2026-03-10T10:07:10.028 DEBUG:teuthology.orchestra.run.vm07:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T10:07:12.230 INFO:teuthology.orchestra.run.vm07.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 10:07 /home/ubuntu/cephtest/cephadm 2026-03-10T10:07:12.231 DEBUG:teuthology.orchestra.run.vm04:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T10:07:12.234 DEBUG:teuthology.orchestra.run.vm07:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T10:07:12.242 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 
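The compiled cephadm binary is fetched from chacra with curl and only marked executable after a size sanity check (non-empty and larger than 1000 bytes), presumably so that a short error body from the archive is not mistaken for a working binary. A sketch of the same download-and-verify step, reusing the URL and destination path from the log:

    import os
    import stat
    import urllib.request

    # URL as discovered in the log; substitute your own ref/sha1/distro.
    URL = ("https://1.chacra.ceph.com/binaries/ceph/squid/"
           "e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/"
           "flavors/default/cephadm")
    DEST = "/home/ubuntu/cephtest/cephadm"

    urllib.request.urlretrieve(URL, DEST)

    # Same sanity check as the logged shell command: refuse anything that
    # is empty or suspiciously small before making it executable.
    size = os.path.getsize(DEST)
    if size <= 1000:
        raise RuntimeError(f"cephadm download looks truncated ({size} bytes)")
    os.chmod(DEST, os.stat(DEST).st_mode
             | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)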
2026-03-10T10:07:12.242 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T10:07:12.275 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T10:07:12.358 INFO:teuthology.orchestra.run.vm04.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T10:07:12.361 INFO:teuthology.orchestra.run.vm07.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T10:08:05.635 INFO:teuthology.orchestra.run.vm04.stdout:{ 2026-03-10T10:08:05.635 INFO:teuthology.orchestra.run.vm04.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T10:08:05.635 INFO:teuthology.orchestra.run.vm04.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T10:08:05.635 INFO:teuthology.orchestra.run.vm04.stdout: "repo_digests": [ 2026-03-10T10:08:05.635 INFO:teuthology.orchestra.run.vm04.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T10:08:05.635 INFO:teuthology.orchestra.run.vm04.stdout: ] 2026-03-10T10:08:05.635 INFO:teuthology.orchestra.run.vm04.stdout:} 2026-03-10T10:08:06.591 INFO:teuthology.orchestra.run.vm07.stdout:{ 2026-03-10T10:08:06.591 INFO:teuthology.orchestra.run.vm07.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T10:08:06.591 INFO:teuthology.orchestra.run.vm07.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T10:08:06.591 INFO:teuthology.orchestra.run.vm07.stdout: "repo_digests": [ 2026-03-10T10:08:06.591 INFO:teuthology.orchestra.run.vm07.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T10:08:06.591 INFO:teuthology.orchestra.run.vm07.stdout: ] 2026-03-10T10:08:06.591 INFO:teuthology.orchestra.run.vm07.stdout:} 2026-03-10T10:08:06.606 DEBUG:teuthology.orchestra.run.vm04:> sudo mkdir -p /etc/ceph 2026-03-10T10:08:06.614 DEBUG:teuthology.orchestra.run.vm07:> sudo mkdir -p /etc/ceph 2026-03-10T10:08:06.621 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 777 /etc/ceph 2026-03-10T10:08:06.661 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 777 /etc/ceph 2026-03-10T10:08:06.669 INFO:tasks.cephadm:Writing seed config... 
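On success, `cephadm ... pull` prints a JSON summary (ceph_version, image_id, repo_digests) on stdout while progress goes to stderr, as seen on both hosts above. A sketch of consuming that output, assuming stdout carries only the JSON document; pinning follow-up operations to the repo digest is a suggestion here, not something the task itself does:

    import json
    import subprocess

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"

    out = subprocess.run(
        ["sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE, "pull"],
        check=True, capture_output=True, text=True,
    ).stdout
    info = json.loads(out)

    # Prefer the repo digest: unlike the tag, it cannot move after the pull.
    digest = info["repo_digests"][0]
    print(info["ceph_version"], "->", digest)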
2026-03-10T10:08:06.669 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-10T10:08:06.669 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-10T10:08:06.669 INFO:tasks.cephadm: override: [client] debug ms = 1 2026-03-10T10:08:06.669 INFO:tasks.cephadm: override: [global] mon election default strategy = 3 2026-03-10T10:08:06.669 INFO:tasks.cephadm: override: [global] ms bind msgr1 = False 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [global] ms bind msgr2 = True 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [global] ms type = async 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [mon] mon warn on pool no app = False 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [osd] osd class default list = * 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [osd] osd class load list = * 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-10T10:08:06.670 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True 2026-03-10T10:08:06.670 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:08:06.670 DEBUG:teuthology.orchestra.run.vm04:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-10T10:08:06.706 DEBUG:tasks.cephadm:Final config: [global] # make logging friendly to teuthology log_to_file = true log_to_stderr = false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon warn on osd down out interval zero = false mon warn on too few osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = e4c1c9d6-1c68-11f1-a9bd-116050875839 mon election default strategy = 3 ms bind msgr1 = False ms bind msgr2 = True ms type = async [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = True bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd 
= 20 osd class default list = * osd class load list = * osd mclock iops capacity threshold hdd = 49000 [mgr] mon reweight min pgs per osd = 4 mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug mgr = 20 debug ms = 1 [mon] mon data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 mon warn on pool no app = False [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true [client] debug ms = 1 2026-03-10T10:08:06.706 DEBUG:teuthology.orchestra.run.vm04:mon.a> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.a.service 2026-03-10T10:08:06.747 DEBUG:teuthology.orchestra.run.vm04:mgr.y> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mgr.y.service 2026-03-10T10:08:06.791 INFO:tasks.cephadm:Bootstrapping... 2026-03-10T10:08:06.791 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.104 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-10T10:08:06.918 INFO:teuthology.orchestra.run.vm04.stdout:-------------------------------------------------------------------------------- 2026-03-10T10:08:06.918 INFO:teuthology.orchestra.run.vm04.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'e4c1c9d6-1c68-11f1-a9bd-116050875839', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.104', '--skip-admin-label'] 2026-03-10T10:08:06.918 INFO:teuthology.orchestra.run.vm04.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-10T10:08:06.918 INFO:teuthology.orchestra.run.vm04.stdout:Verifying podman|docker is present... 2026-03-10T10:08:06.918 INFO:teuthology.orchestra.run.vm04.stdout:Verifying lvm2 is present... 2026-03-10T10:08:06.918 INFO:teuthology.orchestra.run.vm04.stdout:Verifying time synchronization is in place... 
2026-03-10T10:08:06.921 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T10:08:06.922 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T10:08:06.924 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T10:08:06.924 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-10T10:08:06.926 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-10T10:08:06.926 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-10T10:08:06.928 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-10T10:08:06.928 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-10T10:08:06.930 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-10T10:08:06.930 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout masked 2026-03-10T10:08:06.932 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-10T10:08:06.932 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-10T10:08:06.934 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-10T10:08:06.934 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-10T10:08:06.936 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-10T10:08:06.936 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-10T10:08:06.939 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout enabled 2026-03-10T10:08:06.941 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout active 2026-03-10T10:08:06.941 INFO:teuthology.orchestra.run.vm04.stdout:Unit ntp.service is enabled and running 2026-03-10T10:08:06.941 INFO:teuthology.orchestra.run.vm04.stdout:Repeating the final host check... 
2026-03-10T10:08:06.941 INFO:teuthology.orchestra.run.vm04.stdout:docker (/usr/bin/docker) is present 2026-03-10T10:08:06.941 INFO:teuthology.orchestra.run.vm04.stdout:systemctl is present 2026-03-10T10:08:06.941 INFO:teuthology.orchestra.run.vm04.stdout:lvcreate is present 2026-03-10T10:08:06.943 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T10:08:06.943 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T10:08:06.946 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T10:08:06.946 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-10T10:08:06.948 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-10T10:08:06.948 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-10T10:08:06.950 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-10T10:08:06.950 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-10T10:08:06.952 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-10T10:08:06.952 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout masked 2026-03-10T10:08:06.955 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-10T10:08:06.955 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-10T10:08:06.957 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-10T10:08:06.957 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-10T10:08:06.959 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-10T10:08:06.959 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout inactive 2026-03-10T10:08:06.962 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout enabled 2026-03-10T10:08:06.964 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stdout active 2026-03-10T10:08:06.964 INFO:teuthology.orchestra.run.vm04.stdout:Unit ntp.service is enabled and running 2026-03-10T10:08:06.964 INFO:teuthology.orchestra.run.vm04.stdout:Host looks OK 2026-03-10T10:08:06.964 INFO:teuthology.orchestra.run.vm04.stdout:Cluster fsid: e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:08:06.964 INFO:teuthology.orchestra.run.vm04.stdout:Acquiring lock 140339525945904 on /run/cephadm/e4c1c9d6-1c68-11f1-a9bd-116050875839.lock 2026-03-10T10:08:06.964 INFO:teuthology.orchestra.run.vm04.stdout:Lock 140339525945904 acquired on /run/cephadm/e4c1c9d6-1c68-11f1-a9bd-116050875839.lock 2026-03-10T10:08:06.965 INFO:teuthology.orchestra.run.vm04.stdout:Verifying IP 192.168.123.104 port 3300 ... 2026-03-10T10:08:06.965 INFO:teuthology.orchestra.run.vm04.stdout:Verifying IP 192.168.123.104 port 6789 ... 
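The time-synchronization check above walks a fixed list of candidate units and treats the non-zero systemctl exits as "not this one" rather than as errors: `is-enabled` exits 1 for a missing or masked unit, `is-active` exits 3 when inactive, and only ntp.service reports enabled and active on this host. A minimal sketch of the same probe; the candidate list follows the order visible in the log:

    import subprocess

    CANDIDATES = ["chrony.service", "chronyd.service",
                  "systemd-timesyncd.service", "ntpd.service", "ntp.service"]

    def unit_state(verb: str, unit: str) -> str:
        # systemctl prints the state on stdout and signals "missing" or
        # "inactive" via its exit code, so the return code is ignored here.
        r = subprocess.run(["systemctl", verb, unit],
                           capture_output=True, text=True)
        return r.stdout.strip()

    for unit in CANDIDATES:
        if unit_state("is-enabled", unit) == "enabled" and \
           unit_state("is-active", unit) == "active":
            print(f"Unit {unit} is enabled and running")
            break
    else:
        raise RuntimeError("no time-synchronization service found")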
2026-03-10T10:08:06.965 INFO:teuthology.orchestra.run.vm04.stdout:Base mon IP(s) is [192.168.123.104:3300, 192.168.123.104:6789], mon addrv is [v2:192.168.123.104:3300,v1:192.168.123.104:6789] 2026-03-10T10:08:06.966 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.104 metric 100 2026-03-10T10:08:06.966 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 2026-03-10T10:08:06.966 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.104 metric 100 2026-03-10T10:08:06.966 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.104 metric 100 2026-03-10T10:08:06.968 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-10T10:08:06.968 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:4/64 scope link 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:Mon IP `192.168.123.104` is in CIDR network `192.168.123.0/24` 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:Mon IP `192.168.123.104` is in CIDR network `192.168.123.0/24` 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:Mon IP `192.168.123.104` is in CIDR network `192.168.123.1/32` 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:Mon IP `192.168.123.104` is in CIDR network `192.168.123.1/32` 2026-03-10T10:08:06.969 INFO:teuthology.orchestra.run.vm04.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32'] 2026-03-10T10:08:06.970 INFO:teuthology.orchestra.run.vm04.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-10T10:08:06.970 INFO:teuthology.orchestra.run.vm04.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
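The inferred public CIDR list above includes the DHCP host route 192.168.123.1/32 even though that network cannot contain .104, so bootstrap apparently matches routes by more than strict containment (the corresponding `ip route` line carries `src 192.168.123.104`). A strict-containment sketch with the ipaddress module, which would keep only the /24; the route destinations are copied from the log:

    import ipaddress

    MON_IP = ipaddress.ip_address("192.168.123.104")
    ROUTES = ["192.168.123.0/24", "172.17.0.0/16", "192.168.123.1/32"]

    # Keep only networks that actually contain the mon IP.
    matches = [net for net in ROUTES
               if MON_IP in ipaddress.ip_network(net)]
    print(matches)  # ['192.168.123.0/24'] -- the /32 host route is excluded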
2026-03-10T10:08:07.922 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph 2026-03-10T10:08:07.922 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T10:08:07.922 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:08:07.922 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T10:08:08.061 INFO:teuthology.orchestra.run.vm04.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T10:08:08.061 INFO:teuthology.orchestra.run.vm04.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T10:08:08.061 INFO:teuthology.orchestra.run.vm04.stdout:Extracting ceph user uid/gid from container image... 2026-03-10T10:08:08.154 INFO:teuthology.orchestra.run.vm04.stdout:stat: stdout 167 167 2026-03-10T10:08:08.154 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial keys... 2026-03-10T10:08:08.255 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQCI7a9pOiutDRAA57MNYbBJrwrWIpi7qlOmBQ== 2026-03-10T10:08:08.350 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQCI7a9prBxlExAAIc3K6W1K/7yWFLz132q7jw== 2026-03-10T10:08:08.455 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-authtool: stdout AQCI7a9pfKasGRAAtgA60JuuwfZRnt1aai5Mog== 2026-03-10T10:08:08.455 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial monmap... 2026-03-10T10:08:08.556 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T10:08:08.556 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-10T10:08:08.556 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:08:08.556 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T10:08:08.556 INFO:teuthology.orchestra.run.vm04.stdout:monmaptool for a [v2:192.168.123.104:3300,v1:192.168.123.104:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T10:08:08.556 INFO:teuthology.orchestra.run.vm04.stdout:setting min_mon_release = quincy 2026-03-10T10:08:08.556 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: set fsid to e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:08:08.556 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T10:08:08.556 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:08:08.556 INFO:teuthology.orchestra.run.vm04.stdout:Creating mon... 
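Bootstrap generates the three initial keys with ceph-authtool and seeds an epoch-0 monmap carrying mon.a's v2/v1 address vector before creating the monitor. A hedged sketch of the monmap half using monmaptool's --create/--fsid/--addv flags; the real run also pins min_mon_release (quincy above), which is omitted here, and /tmp/monmap is simply the path from the log:

    import subprocess

    FSID = "e4c1c9d6-1c68-11f1-a9bd-116050875839"
    ADDRV = "[v2:192.168.123.104:3300,v1:192.168.123.104:6789]"

    # Seed an epoch-0 monmap for mon.a with the address vector seen in the
    # log; --addv registers both the msgr2 (3300) and msgr1 (6789) endpoints.
    subprocess.run(
        ["monmaptool", "--create", "--fsid", FSID,
         "--addv", "a", ADDRV, "/tmp/monmap"],
        check=True,
    )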
2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.624+0000 7f9089b58d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.624+0000 7f9089b58d80 1 imported monmap: 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr epoch 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-10T10:08:08.532327+0000 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr created 2026-03-10T10:08:08.532327+0000 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy) 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.624+0000 7f9089b58d80 0 /usr/bin/ceph-mon: set fsid to e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Git sha 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: DB SUMMARY 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: DB Session ID: F9QBMLAK8KJTJU0Z6CNO 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files: 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: 
stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.create_if_missing: 1 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.env: 0x560ca02a1dc0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.info_log: 0x560cbe20ada0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: 
Options.use_direct_reads: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.db_log_dir: 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.wal_dir: 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T10:08:08.672 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.write_buffer_manager: 0x560cbe2015e0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: 
Options.wal_recovery_mode: 2 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.row_cache: None 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.wal_filter: None 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T10:08:08.673 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: 
Options.max_open_files: -1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Compression algorithms supported: 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: kZSTD supported: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T10:08:08.673 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 2026-03-10T10:08:08.674 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.merge_operator: 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560cbe1fd520) 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr index_type: 0 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr checksum: 4 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x560cbe223350 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache 2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr block_cache_options: 2026-03-10T10:08:08.674 
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil)
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil)
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr block_size: 4096
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr format_version: 5
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr block_align: 0
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.674 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compression: NoCompression
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.num_levels: 7
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T10:08:08.677 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.bloom_locality: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.ttl: 2592000
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.enable_blob_files: false
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.min_blob_size: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.628+0000 7f9089b58d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.632+0000 7f9089b58d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.678 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.632+0000 7f9089b58d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.632+0000 7f9089b58d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 11491799-4724-44be-8bb5-e880f6a6a6aa
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.632+0000 7f9089b58d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.632+0000 7f9089b58d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560cbe224e00
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.632+0000 7f9089b58d80 4 rocksdb: DB pointer 0x560cbe308000
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.632+0000 7f90812e2640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.632+0000 7f90812e2640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr ** DB Stats **
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x560cbe223350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6e-06 secs_since: 0
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] **
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.636+0000 7f9089b58d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.636+0000 7f9089b58d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T10:08:08.636+0000 7f9089b58d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a
2026-03-10T10:08:08.679 INFO:teuthology.orchestra.run.vm04.stdout:create mon.a on
2026-03-10T10:08:08.855 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target.
2026-03-10T10:08:09.021 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T10:08:09.184 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839.target → /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839.target.
2026-03-10T10:08:09.184 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839.target → /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839.target.
2026-03-10T10:08:09.347 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.a
2026-03-10T10:08:09.347 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to reset failed state of unit ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.a.service: Unit ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.a.service not loaded.
2026-03-10T10:08:09.502 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839.target.wants/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.a.service → /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.
2026-03-10T10:08:09.511 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present
2026-03-10T10:08:09.511 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T10:08:09.511 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mon to start...
2026-03-10T10:08:09.511 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mon...
2026-03-10T10:08:09.569 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:09 vm04 systemd[1]: Started Ceph mon.a for e4c1c9d6-1c68-11f1-a9bd-116050875839.
2026-03-10T10:08:09.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:09 vm04 bash[20253]: cluster 2026-03-10T10:08:09.664020+0000 mon.a (mon.0) 0 : cluster [INF] mkfs e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:08:09.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:09 vm04 bash[20253]: cluster 2026-03-10T10:08:09.664020+0000 mon.a (mon.0) 0 : cluster [INF] mkfs e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:08:09.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:09 vm04 bash[20253]: cluster 2026-03-10T10:08:09.659529+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T10:08:09.901 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:09 vm04 bash[20253]: cluster 2026-03-10T10:08:09.659529+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout cluster:
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout id: e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout services:
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.230749s)
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout data:
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout pgs:
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.652+0000 7efeb3577640 1 Processor -- start
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.652+0000 7efeb3577640 1 -- start start
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.652+0000 7efeb3577640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac005970 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.652+0000 7efeb3577640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7efeac005f40 con 0x7efeac005570
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.652+0000 7efeb2575640 1 -- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 msgr2=0x7efeac005970 unknown :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed to v2:192.168.123.104:3300/0
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.652+0000 7efeb2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac005970 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac005970 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac005970 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:34966/0 (socket says 192.168.123.104:34966)
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb2575640 1 -- 192.168.123.104:0/2245135346 learned_addr learned my addr 192.168.123.104:0/2245135346 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb2575640 1 -- 192.168.123.104:0/2245135346 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efeac0067c0 con 0x7efeac005570
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb2575640 1 --2- 192.168.123.104:0/2245135346 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac005970 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7efea40060c0 tx=0x7efea40312c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=e5084295ee79be6d server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb1573640 1 -- 192.168.123.104:0/2245135346 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7efea4031e00 con 0x7efeac005570
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb1573640 1 -- 192.168.123.104:0/2245135346 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7efea4003ab0 con 0x7efeac005570
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb1573640 1 -- 192.168.123.104:0/2245135346 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7efea4003db0 con 0x7efeac005570
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb3577640 1 -- 192.168.123.104:0/2245135346 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 msgr2=0x7efeac005970 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb3577640 1 --2- 192.168.123.104:0/2245135346 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac005970 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7efea40060c0 tx=0x7efea40312c0 comp rx=0 tx=0).stop
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb3577640 1 -- 192.168.123.104:0/2245135346 shutdown_connections
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb3577640 1 --2- 192.168.123.104:0/2245135346 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac005970 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb3577640 1 -- 192.168.123.104:0/2245135346 >> 192.168.123.104:0/2245135346 conn(0x7efeac09fc30 msgr2=0x7efeac0a2090 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb3577640 1 -- 192.168.123.104:0/2245135346 shutdown_connections
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.852+0000 7efeb3577640 1 -- 192.168.123.104:0/2245135346 wait complete.
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb3577640 1 Processor -- start
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb3577640 1 -- start start
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb3577640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac00b290 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb3577640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7efeac006c90 con 0x7efeac005570
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac00b290 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac00b290 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:34968/0 (socket says 192.168.123.104:34968)
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb2575640 1 -- 192.168.123.104:0/1204767688 learned_addr learned my addr 192.168.123.104:0/1204767688 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb2575640 1 -- 192.168.123.104:0/1204767688 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efeac00b7d0 con 0x7efeac005570
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb2575640 1 --2- 192.168.123.104:0/1204767688 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac00b290 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7efea4002410 tx=0x7efea4005b50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efe9b7fe640 1 -- 192.168.123.104:0/1204767688 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7efea4047020 con 0x7efeac005570
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efe9b7fe640 1 -- 192.168.123.104:0/1204767688 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7efea4042570 con 0x7efeac005570
2026-03-10T10:08:09.928 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efe9b7fe640 1 -- 192.168.123.104:0/1204767688 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7efea4042850 con 0x7efeac005570
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb3577640 1 -- 192.168.123.104:0/1204767688 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7efeac00ba60 con 0x7efeac005570
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb3577640 1 -- 192.168.123.104:0/1204767688 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7efeac008020 con 0x7efeac005570
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efe9b7fe640 1 -- 192.168.123.104:0/1204767688 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7efea400b040 con 0x7efeac005570
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efeb3577640 1 -- 192.168.123.104:0/1204767688 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7efe80005180 con 0x7efeac005570
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efe9b7fe640 1 -- 192.168.123.104:0/1204767688 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7efea404c400 con 0x7efeac005570
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.856+0000 7efe9b7fe640 1 -- 192.168.123.104:0/1204767688 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7efea4009020 con 0x7efeac005570
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.888+0000 7efeb3577640 1 -- 192.168.123.104:0/1204767688 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "status"} v 0) -- 0x7efe80005740 con 0x7efeac005570
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.888+0000 7efe9b7fe640 1 -- 192.168.123.104:0/1204767688 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "status"}]=0 v0) ==== 54+0+317 (secure 0 0 0) 0x7efea4003a80 con 0x7efeac005570
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.888+0000 7efeb3577640 1 -- 192.168.123.104:0/1204767688 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 msgr2=0x7efeac00b290 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.888+0000 7efeb3577640 1 --2- 192.168.123.104:0/1204767688 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac00b290 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7efea4002410 tx=0x7efea4005b50 comp rx=0 tx=0).stop
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.888+0000 7efeb3577640 1 -- 192.168.123.104:0/1204767688 shutdown_connections
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.888+0000 7efeb3577640 1 --2- 192.168.123.104:0/1204767688 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efeac005570 0x7efeac00b290 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.888+0000 7efeb3577640 1 -- 192.168.123.104:0/1204767688 >> 192.168.123.104:0/1204767688 conn(0x7efeac09fc30 msgr2=0x7efeac00f120 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.888+0000 7efeb3577640 1 -- 192.168.123.104:0/1204767688 shutdown_connections
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:09.888+0000 7efeb3577640 1 -- 192.168.123.104:0/1204767688 wait complete.
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:mon is available
2026-03-10T10:08:09.929 INFO:teuthology.orchestra.run.vm04.stdout:Assimilating anything we can from ceph.conf...
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout fsid = e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.104:3300,v1:192.168.123.104:6789]
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.028+0000 7fac6f517640 1 Processor -- start
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.028+0000 7fac6f517640 1 -- start start
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac68108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fac68109540 con 0x7fac68108b70
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6d28c640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac68108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6d28c640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac68108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:34978/0 (socket says 192.168.123.104:34978)
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6d28c640 1 -- 192.168.123.104:0/66385697 learned_addr learned my addr 192.168.123.104:0/66385697 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6d28c640 1 -- 192.168.123.104:0/66385697 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fac68109d70 con 0x7fac68108b70
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6d28c640 1 --2- 192.168.123.104:0/66385697 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac68108f70 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fac5c009920 tx=0x7fac5c02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=65327a0315087dea server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac57fff640 1 -- 192.168.123.104:0/66385697 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fac5c03c070 con 0x7fac68108b70
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac57fff640 1 -- 192.168.123.104:0/66385697 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fac5c02fae0 con 0x7fac68108b70
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 -- 192.168.123.104:0/66385697 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 msgr2=0x7fac68108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 --2- 192.168.123.104:0/66385697 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac68108f70 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fac5c009920 tx=0x7fac5c02ef20 comp rx=0 tx=0).stop
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 -- 192.168.123.104:0/66385697 shutdown_connections
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 --2- 192.168.123.104:0/66385697 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac68108f70 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 -- 192.168.123.104:0/66385697 >> 192.168.123.104:0/66385697 conn(0x7fac6807c040 msgr2=0x7fac6807c450 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 -- 192.168.123.104:0/66385697 shutdown_connections
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 -- 192.168.123.104:0/66385697 wait complete.
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 Processor -- start
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 -- start start
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac680804f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fac6810a2a0 con 0x7fac68108b70
2026-03-10T10:08:10.108 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6d28c640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac680804f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6d28c640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac680804f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:34984/0 (socket says 192.168.123.104:34984)
2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6d28c640 1 -- 192.168.123.104:0/2513369673 learned_addr learned my addr 192.168.123.104:0/2513369673 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6d28c640 1 -- 192.168.123.104:0/2513369673 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fac68080a30 con 0x7fac68108b70
2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6d28c640 1 --2- 192.168.123.104:0/2513369673 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac680804f0 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fac5c02f450 tx=0x7fac5c0047c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac567fc640 1 -- 192.168.123.104:0/2513369673 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fac5c047020 con 0x7fac68108b70
2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 -- 192.168.123.104:0/2513369673 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fac68080cc0 con 0x7fac68108b70
2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac6f517640 1 -- 192.168.123.104:0/2513369673 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fac6807d030 con 0x7fac68108b70
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac567fc640 1 -- 192.168.123.104:0/2513369673 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fac5c0425a0 con 0x7fac68108b70 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac567fc640 1 -- 192.168.123.104:0/2513369673 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fac5c03c040 con 0x7fac68108b70 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac567fc640 1 -- 192.168.123.104:0/2513369673 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7fac5c054050 con 0x7fac68108b70 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.032+0000 7fac567fc640 1 -- 192.168.123.104:0/2513369673 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fac5c043a10 con 0x7fac68108b70 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.036+0000 7fac6f517640 1 -- 192.168.123.104:0/2513369673 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fac30005180 con 0x7fac68108b70 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.036+0000 7fac567fc640 1 -- 192.168.123.104:0/2513369673 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7fac5c042d40 con 0x7fac68108b70 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.064+0000 7fac6f517640 1 -- 192.168.123.104:0/2513369673 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7fac30003c00 con 0x7fac68108b70 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.068+0000 7fac567fc640 1 -- 192.168.123.104:0/2513369673 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v2) ==== 70+0+380 (secure 0 0 0) 0x7fac5c043420 con 0x7fac68108b70 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.068+0000 7fac567fc640 1 -- 192.168.123.104:0/2513369673 <== mon.0 v2:192.168.123.104:3300/0 8 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fac5c05c070 con 0x7fac68108b70 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.068+0000 7fac6f517640 1 -- 192.168.123.104:0/2513369673 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 msgr2=0x7fac680804f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.068+0000 7fac6f517640 1 --2- 192.168.123.104:0/2513369673 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac680804f0 secure :-1 s=READY pgs=4 cs=0 l=1 rev1=1 crypto rx=0x7fac5c02f450 tx=0x7fac5c0047c0 comp rx=0 tx=0).stop 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.068+0000 7fac6f517640 1 -- 192.168.123.104:0/2513369673 
shutdown_connections 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.068+0000 7fac6f517640 1 --2- 192.168.123.104:0/2513369673 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fac68108b70 0x7fac680804f0 unknown :-1 s=CLOSED pgs=4 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.068+0000 7fac6f517640 1 -- 192.168.123.104:0/2513369673 >> 192.168.123.104:0/2513369673 conn(0x7fac6807c040 msgr2=0x7fac681924f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.068+0000 7fac6f517640 1 -- 192.168.123.104:0/2513369673 shutdown_connections 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.072+0000 7fac6f517640 1 -- 192.168.123.104:0/2513369673 wait complete. 2026-03-10T10:08:10.109 INFO:teuthology.orchestra.run.vm04.stdout:Generating new minimal ceph.conf... 2026-03-10T10:08:10.289 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 Processor -- start 2026-03-10T10:08:10.289 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 -- start start 2026-03-10T10:08:10.289 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 0x7f8604108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:10.289 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f8604109540 con 0x7f8604108b70 2026-03-10T10:08:10.289 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f86090eb640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 0x7f8604108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:10.289 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f86090eb640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 0x7f8604108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:34998/0 (socket says 192.168.123.104:34998) 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f86090eb640 1 -- 192.168.123.104:0/1071507714 learned_addr learned my addr 192.168.123.104:0/1071507714 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f86090eb640 1 -- 192.168.123.104:0/1071507714 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8604109d70 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f86090eb640 1 --2- 192.168.123.104:0/1071507714 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] 
conn(0x7f8604108b70 0x7f8604108f70 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f85f8009920 tx=0x7f85f802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=15210ffe06beb5c server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f85f3fff640 1 -- 192.168.123.104:0/1071507714 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f85f803c070 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f85f3fff640 1 -- 192.168.123.104:0/1071507714 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f85f8037440 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 -- 192.168.123.104:0/1071507714 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 msgr2=0x7f8604108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 --2- 192.168.123.104:0/1071507714 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 0x7f8604108f70 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f85f8009920 tx=0x7f85f802ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 -- 192.168.123.104:0/1071507714 shutdown_connections 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 --2- 192.168.123.104:0/1071507714 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 0x7f8604108f70 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 -- 192.168.123.104:0/1071507714 >> 192.168.123.104:0/1071507714 conn(0x7f860407c040 msgr2=0x7f860407c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 -- 192.168.123.104:0/1071507714 shutdown_connections 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 -- 192.168.123.104:0/1071507714 wait complete. 
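This short-lived session (monmap plus config(25 keys), then immediate teardown) is the bootstrap config fetch of the next CLI run, matching the "Generating new minimal ceph.conf..." line: the preceding run issued "config assimilate-conf" (whose effect shows as the config reply growing from 0 to 25 keys), and the following one issues "config generate-minimal-conf". A sketch of the equivalent manual sequence; the subcommands are the ones in the mon_command payloads above, while the file paths are illustrative:

    import subprocess

    # Push options from a local conf into the monitors' config database.
    subprocess.run(['ceph', 'config', 'assimilate-conf',
                    '-i', '/etc/ceph/ceph.conf'], check=True)

    # Ask the monitors for a minimal conf and write it back out,
    # as cephadm bootstrap does at this point in the log.
    minimal = subprocess.run(['ceph', 'config', 'generate-minimal-conf'],
                             check=True, capture_output=True, text=True).stdout
    with open('/etc/ceph/ceph.conf', 'w') as f:  # illustrative target path
        f.write(minimal)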
2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 Processor -- start 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 -- start start 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 0x7f860419ea30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f860b376640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f860410cfa0 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f86090eb640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 0x7f860419ea30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f86090eb640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 0x7f860419ea30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:35004/0 (socket says 192.168.123.104:35004) 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.216+0000 7f86090eb640 1 -- 192.168.123.104:0/1489080002 learned_addr learned my addr 192.168.123.104:0/1489080002 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f86090eb640 1 -- 192.168.123.104:0/1489080002 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f860419ef70 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f86090eb640 1 --2- 192.168.123.104:0/1489080002 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 0x7f860419ea30 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f85f8035f40 tx=0x7f85f8035f70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f85f27fc640 1 -- 192.168.123.104:0/1489080002 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f85f8045070 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f85f27fc640 1 -- 192.168.123.104:0/1489080002 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f85f8037c50 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f85f27fc640 1 -- 192.168.123.104:0/1489080002 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f85f803c070 con 0x7f8604108b70 2026-03-10T10:08:10.290 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f860b376640 1 -- 192.168.123.104:0/1489080002 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f860419f200 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f860b376640 1 -- 192.168.123.104:0/1489080002 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f86041a1ef0 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f85f27fc640 1 -- 192.168.123.104:0/1489080002 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f85f804a430 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f85f27fc640 1 -- 192.168.123.104:0/1489080002 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f85f80499b0 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f860b376640 1 -- 192.168.123.104:0/1489080002 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f85cc005180 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.220+0000 7f85f27fc640 1 -- 192.168.123.104:0/1489080002 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f85f8040450 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.248+0000 7f860b376640 1 -- 192.168.123.104:0/1489080002 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7f85cc005740 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.248+0000 7f85f27fc640 1 -- 192.168.123.104:0/1489080002 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v2) ==== 76+0+181 (secure 0 0 0) 0x7f85f8049c00 con 0x7f8604108b70 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.252+0000 7f860b376640 1 -- 192.168.123.104:0/1489080002 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 msgr2=0x7f860419ea30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.252+0000 7f860b376640 1 --2- 192.168.123.104:0/1489080002 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 0x7f860419ea30 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f85f8035f40 tx=0x7f85f8035f70 comp rx=0 tx=0).stop 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.252+0000 7f860b376640 1 -- 192.168.123.104:0/1489080002 shutdown_connections 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.252+0000 7f860b376640 1 --2- 192.168.123.104:0/1489080002 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8604108b70 
0x7f860419ea30 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.252+0000 7f860b376640 1 -- 192.168.123.104:0/1489080002 >> 192.168.123.104:0/1489080002 conn(0x7f860407c040 msgr2=0x7f86041060d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.252+0000 7f860b376640 1 -- 192.168.123.104:0/1489080002 shutdown_connections 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.252+0000 7f860b376640 1 -- 192.168.123.104:0/1489080002 wait complete. 2026-03-10T10:08:10.290 INFO:teuthology.orchestra.run.vm04.stdout:Restarting the monitor... 2026-03-10T10:08:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 systemd[1]: Stopping Ceph mon.a for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:08:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20253]: debug 2026-03-10T10:08:10.320+0000 7f576583b640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T10:08:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20253]: debug 2026-03-10T10:08:10.320+0000 7f576583b640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T10:08:10.540 INFO:teuthology.orchestra.run.vm04.stdout:Setting public_network to 192.168.123.0/24,192.168.123.1/32 in mon config section 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20639]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-mon-a 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.a.service: Deactivated successfully. 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 systemd[1]: Stopped Ceph mon.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 systemd[1]: Started Ceph mon.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 
2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.656+0000 7fe7e16f1d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.656+0000 7fe7e16f1d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.656+0000 7fe7e16f1d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 0 load: jerasure load: lrc 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Git sha 0 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: DB SUMMARY 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: DB Session ID: N86NVVW77BYHW6IYEEY4 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 78581 ; 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.env: 0x559ddf511dc0 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T10:08:10.733 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.info_log: 0x559e0460cd00 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T10:08:10.734 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.db_log_dir: 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.wal_dir: 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.write_buffer_manager: 0x559e04611900 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 
rocksdb: Options.enable_pipelined_write: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.row_cache: None 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.wal_filter: None 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: 
debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T10:08:10.734 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: 
Options.bytes_per_sync: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Compression algorithms supported: 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: kZSTD supported: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 
2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.merge_operator: 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559e0460c480) 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cache_index_and_filter_blocks: 1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: pin_top_level_index_and_filter: 1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: index_type: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: data_block_index_type: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: index_shortening: 1 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: checksum: 4 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: no_block_cache: 0 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: block_cache: 0x559e04633350 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: block_cache_name: BinnedLRUCache 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: block_cache_options: 2026-03-10T10:08:10.735 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: capacity : 536870912 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: num_shard_bits : 4 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: strict_capacity_limit : 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: high_pri_pool_ratio: 0.000 2026-03-10T10:08:10.736 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: block_cache_compressed: (nil) 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: persistent_cache: (nil) 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: block_size: 4096 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: block_size_deviation: 10 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: block_restart_interval: 16 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: index_block_restart_interval: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: metadata_block_size: 4096 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: partition_filters: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: use_delta_encoding: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: filter_policy: bloomfilter 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: whole_key_filtering: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: verify_compression: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: read_amp_bytes_per_bit: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: format_version: 5 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: enable_index_compression: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: block_align: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: max_auto_readahead_size: 262144 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: prepopulate_block_cache: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: initial_auto_readahead_size: 8192 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: num_file_reads_for_auto_readahead: 2 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compression: NoCompression 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.num_levels: 7 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T10:08:10.736 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 
7fe7e16f1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T10:08:10.736 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 
rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T10:08:10.737 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.660+0000 7fe7e16f1d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.664+0000 7fe7e16f1d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.664+0000 7fe7e16f1d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.664+0000 7fe7e16f1d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 11491799-4724-44be-8bb5-e880f6a6a6aa 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.664+0000 7fe7e16f1d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773137290668325, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.664+0000 7fe7e16f1d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T10:08:10.737 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.664+0000 7fe7e16f1d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773137290670447, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 75319, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 239, "table_properties": {"data_size": 73526, "index_size": 182, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 10492, "raw_average_key_size": 49, "raw_value_size": 67703, "raw_average_value_size": 322, "num_data_blocks": 8, "num_entries": 210, "num_filter_entries": 210, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773137290, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11491799-4724-44be-8bb5-e880f6a6a6aa", "db_session_id": "N86NVVW77BYHW6IYEEY4", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.664+0000 7fe7e16f1d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773137290670499, "job": 1, "event": "recovery_finished"} 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.664+0000 7fe7e16f1d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.668+0000 7fe7e16f1d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.668+0000 7fe7e16f1d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559e04634e00 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.668+0000 7fe7e16f1d80 4 rocksdb: DB pointer 0x559e04740000 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.668+0000 7fe7e16f1d80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] at bind addrs [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.668+0000 7fe7e16f1d80 1 mon.a@-1(???) 
e1 preinit fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.672+0000 7fe7e16f1d80 0 mon.a@-1(???).mds e1 new map 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.672+0000 7fe7e16f1d80 0 mon.a@-1(???).mds e1 print_map 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: e1 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: btime 2026-03-10T10:08:09:663579+0000 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: legacy client fscid: -1 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: No filesystems configured 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.672+0000 7fe7e16f1d80 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.672+0000 7fe7e16f1d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.672+0000 7fe7e16f1d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.672+0000 7fe7e16f1d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T10:08:10.737 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: debug 2026-03-10T10:08:10.672+0000 7fe7e16f1d80 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T10:08:10.942 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.660+0000 7f2161174640 1 Processor -- start 2026-03-10T10:08:10.942 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.660+0000 7f2161174640 1 -- start start 2026-03-10T10:08:10.942 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.660+0000 7f2161174640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c074910 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:10.942 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.660+0000 7f2161174640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f215c074ee0 con 0x7f215c074510 
2026-03-10T10:08:10.942 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.660+0000 7f215bfff640 1 -- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 msgr2=0x7f215c074910 unknown :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed to v2:192.168.123.104:3300/0 2026-03-10T10:08:10.942 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.660+0000 7f215bfff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c074910 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:08:10.942 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215bfff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c074910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:10.942 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215bfff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c074910 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:35030/0 (socket says 192.168.123.104:35030) 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215bfff640 1 -- 192.168.123.104:0/1209441093 learned_addr learned my addr 192.168.123.104:0/1209441093 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215bfff640 1 -- 192.168.123.104:0/1209441093 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f215c075060 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215bfff640 1 --2- 192.168.123.104:0/1209441093 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c074910 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f214c007550 tx=0x7f214c030870 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7964cafcab03e8e6 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215affd640 1 -- 192.168.123.104:0/1209441093 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f214c003ca0 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215affd640 1 -- 192.168.123.104:0/1209441093 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f214c003e40 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215affd640 1 -- 192.168.123.104:0/1209441093 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f214c0377c0 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 -- 192.168.123.104:0/1209441093 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] 
conn(0x7f215c074510 msgr2=0x7f215c074910 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 --2- 192.168.123.104:0/1209441093 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c074910 secure :-1 s=READY pgs=1 cs=0 l=1 rev1=1 crypto rx=0x7f214c007550 tx=0x7f214c030870 comp rx=0 tx=0).stop 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 -- 192.168.123.104:0/1209441093 shutdown_connections 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 --2- 192.168.123.104:0/1209441093 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c074910 unknown :-1 s=CLOSED pgs=1 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 -- 192.168.123.104:0/1209441093 >> 192.168.123.104:0/1209441093 conn(0x7f215c06fe30 msgr2=0x7f215c0722b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 -- 192.168.123.104:0/1209441093 shutdown_connections 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 -- 192.168.123.104:0/1209441093 wait complete. 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 Processor -- start 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 -- start start 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c086d90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f2161174640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f215c079cd0 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215bfff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c086d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215bfff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c086d90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:35034/0 (socket says 192.168.123.104:35034) 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215bfff640 1 -- 192.168.123.104:0/3141641644 learned_addr learned my addr 192.168.123.104:0/3141641644 (peer_addr_for_me v2:192.168.123.104:0/0) 
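The "reconnect failed to v2:192.168.123.104:3300/0" and "_fault waiting 0.200000" entries above are the ceph CLI's msgr2 client racing the monitor's startup: the first connect attempt fails while mon.a is still coming up, the messenger backs off for 200 ms, and the banner/hello exchange at 10:08:10.864 then succeeds. A minimal sketch of that connect-with-backoff pattern, assuming placeholder retry parameters (illustrative only, not Ceph's actual messenger code):

    import socket
    import time

    def connect_with_backoff(host, port, delay=0.2, attempts=5):
        # Retry a TCP connect with a short pause between failures,
        # roughly the behaviour behind the "_fault waiting 0.200000"
        # log entries above.
        for attempt in range(1, attempts + 1):
            try:
                return socket.create_connection((host, port), timeout=5)
            except OSError:
                if attempt == attempts:
                    raise
                time.sleep(delay)

    # e.g. the mon's msgr2 address from the log (placeholder usage):
    # sock = connect_with_backoff("192.168.123.104", 3300)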
2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.864+0000 7f215bfff640 1 -- 192.168.123.104:0/3141641644 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f215c0872d0 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.868+0000 7f215bfff640 1 --2- 192.168.123.104:0/3141641644 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c086d90 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f214c007c40 tx=0x7f214c0052e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.868+0000 7f21597fa640 1 -- 192.168.123.104:0/3141641644 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f214c03d040 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.868+0000 7f21597fa640 1 -- 192.168.123.104:0/3141641644 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f214c047070 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.868+0000 7f21597fa640 1 -- 192.168.123.104:0/3141641644 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f214c0387f0 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.868+0000 7f2161174640 1 -- 192.168.123.104:0/3141641644 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f215c087560 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.868+0000 7f2161174640 1 -- 192.168.123.104:0/3141641644 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f215c1b7600 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.868+0000 7f213affd640 1 -- 192.168.123.104:0/3141641644 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2124005180 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.868+0000 7f21597fa640 1 -- 192.168.123.104:0/3141641644 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f214c0376e0 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.868+0000 7f21597fa640 1 -- 192.168.123.104:0/3141641644 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f214c009070 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.868+0000 7f21597fa640 1 -- 192.168.123.104:0/3141641644 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f214c037a00 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.896+0000 7f213affd640 1 -- 192.168.123.104:0/3141641644 
--> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command([{prefix=config set, name=public_network}] v 0) -- 0x7f2124005470 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.900+0000 7f21597fa640 1 -- 192.168.123.104:0/3141641644 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{prefix=config set, name=public_network}]=0 v3) ==== 144+0+0 (secure 0 0 0) 0x7f214c037c30 con 0x7f215c074510 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.900+0000 7f213affd640 1 -- 192.168.123.104:0/3141641644 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 msgr2=0x7f215c086d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.900+0000 7f213affd640 1 --2- 192.168.123.104:0/3141641644 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c086d90 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7f214c007c40 tx=0x7f214c0052e0 comp rx=0 tx=0).stop 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.900+0000 7f213affd640 1 -- 192.168.123.104:0/3141641644 shutdown_connections 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.900+0000 7f213affd640 1 --2- 192.168.123.104:0/3141641644 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f215c074510 0x7f215c086d90 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.900+0000 7f213affd640 1 -- 192.168.123.104:0/3141641644 >> 192.168.123.104:0/3141641644 conn(0x7f215c06fe30 msgr2=0x7f215c070930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.904+0000 7f213affd640 1 -- 192.168.123.104:0/3141641644 shutdown_connections 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:10.904+0000 7f213affd640 1 -- 192.168.123.104:0/3141641644 wait complete. 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:Creating mgr... 2026-03-10T10:08:10.943 INFO:teuthology.orchestra.run.vm04.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T10:08:10.944 INFO:teuthology.orchestra.run.vm04.stdout:Verifying port 0.0.0.0:8765 ... 
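At this point the bootstrap has written /etc/ceph/ceph.conf and the admin keyring and moves on to creating mgr.y; "Verifying port 0.0.0.0:9283 ..." and "... 8765 ..." are cephadm checking that the mgr's ports are not already taken (9283 is the mgr prometheus module's default; 8765 appears to be the cephadm service-discovery endpoint). A hedged sketch of that kind of pre-flight port check (not cephadm's actual implementation):

    import socket

    def port_is_free(port, host="0.0.0.0"):
        # Try to bind the port; if bind() raises, something already owns it.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((host, port))
                return True
            except OSError:
                return False

    for port in (9283, 8765):  # the two ports verified in the log above
        print(port, "free" if port_is_free(port) else "in use")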
2026-03-10T10:08:11.041 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680214+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680214+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680238+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680238+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680243+0000 mon.a (mon.0) 3 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680243+0000 mon.a (mon.0) 3 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680245+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T10:08:08.532327+0000 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680245+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T10:08:08.532327+0000 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680253+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680253+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680256+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680256+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680259+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680259+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680261+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680261+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680467+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680467+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680479+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680479+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680836+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T10:08:11.042 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:10 vm04 bash[20742]: cluster 2026-03-10T10:08:10.680836+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T10:08:11.111 INFO:teuthology.orchestra.run.vm04.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mgr.y 2026-03-10T10:08:11.111 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Failed to reset failed state of unit ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mgr.y.service: Unit ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mgr.y.service not loaded. 2026-03-10T10:08:11.274 INFO:teuthology.orchestra.run.vm04.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839.target.wants/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mgr.y.service → /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service. 2026-03-10T10:08:11.281 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present 2026-03-10T10:08:11.281 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T10:08:11.281 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present 2026-03-10T10:08:11.281 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T10:08:11.281 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr to start... 2026-03-10T10:08:11.281 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr... 2026-03-10T10:08:11.295 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:11 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:08:11.295 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:11 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
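Two warnings above are expected noise rather than failures: systemd flags the generated ceph-<fsid>@.service template for its deprecated KillMode=none setting, and cephadm probes for firewalld before enabling services or opening ports, simply skipping both steps when the unit is absent (as on this Ubuntu 22.04 image). A rough sketch of such a firewalld probe, assuming a systemctl-based check (cephadm's real detection differs in detail):

    import shutil
    import subprocess

    def firewalld_present():
        # Probe for firewalld the way a deploy tool might: is the
        # firewall-cmd binary installed, and is the unit active?
        if shutil.which("firewall-cmd") is None:
            return False
        result = subprocess.run(["systemctl", "is-active", "firewalld"],
                                capture_output=True, text=True)
        return result.stdout.strip() == "active"

    if not firewalld_present():
        print("firewalld does not appear to be present")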
2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "e4c1c9d6-1c68-11f1-a9bd-116050875839", 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T10:08:11.529 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 
"num_objects": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T10:08:09:663579+0000", 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T10:08:09.664216+0000", 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f836423a640 1 Processor -- start 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f836423a640 1 -- start start 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f836423a640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c0745d0 0x7f835c0749d0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:11.530 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f836423a640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f835c074fa0 con 0x7f835c0745d0 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f8361faf640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c0745d0 0x7f835c0749d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f8361faf640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c0745d0 0x7f835c0749d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:35060/0 (socket says 192.168.123.104:35060) 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f8361faf640 1 -- 192.168.123.104:0/2838150194 learned_addr learned my addr 192.168.123.104:0/2838150194 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f8361faf640 1 -- 192.168.123.104:0/2838150194 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f835c075120 con 0x7f835c0745d0 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f8361faf640 1 --2- 192.168.123.104:0/2838150194 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c0745d0 0x7f835c0749d0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f835800a990 tx=0x7f83580334e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=8cf62344333459ca server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f8360fad640 1 -- 192.168.123.104:0/2838150194 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8358041070 con 0x7f835c0745d0 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f8360fad640 1 -- 192.168.123.104:0/2838150194 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f8358039a00 con 0x7f835c0745d0 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f836423a640 1 -- 192.168.123.104:0/2838150194 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c0745d0 msgr2=0x7f835c0749d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f836423a640 1 --2- 192.168.123.104:0/2838150194 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c0745d0 0x7f835c0749d0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7f835800a990 tx=0x7f83580334e0 comp rx=0 tx=0).stop 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f836423a640 1 -- 192.168.123.104:0/2838150194 shutdown_connections 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:08:11.428+0000 7f836423a640 1 --2- 192.168.123.104:0/2838150194 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c0745d0 0x7f835c0749d0 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:11.530 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.428+0000 7f836423a640 1 -- 192.168.123.104:0/2838150194 >> 192.168.123.104:0/2838150194 conn(0x7f835c06fe80 msgr2=0x7f835c0722e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f836423a640 1 -- 192.168.123.104:0/2838150194 shutdown_connections 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f836423a640 1 -- 192.168.123.104:0/2838150194 wait complete. 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f836423a640 1 Processor -- start 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f836423a640 1 -- start start 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f836423a640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c086c90 0x7f835c0870b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f836423a640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f835c079ce0 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f8361faf640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c086c90 0x7f835c0870b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f8361faf640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c086c90 0x7f835c0870b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:35064/0 (socket says 192.168.123.104:35064) 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f8361faf640 1 -- 192.168.123.104:0/947440216 learned_addr learned my addr 192.168.123.104:0/947440216 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f8361faf640 1 -- 192.168.123.104:0/947440216 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f835c08a190 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f8361faf640 1 --2- 192.168.123.104:0/947440216 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c086c90 0x7f835c0870b0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f835803f040 tx=0x7f83580045e0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 
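Each short-lived ceph CLI invocation above opens a fresh msgr2 connection and tears it down when the command completes, so the stderr trace cycles through the same states every time: NONE at connect, BANNER_CONNECTING, HELLO_CONNECTING, READY once the secure session is up, then mark_down/stop and CLOSED. When skimming long runs it can help to extract just those state tokens; a small hypothetical helper (the regex is an assumption based on the conn(...) lines above):

    import re
    import sys

    # Matches the "s=<STATE>" field of msgr conn(...) log lines, e.g.
    # "conn(0x7f835c086c90 0x7f835c0870b0 secure :-1 s=READY pgs=6 ...)".
    STATE_RE = re.compile(r"conn\(.*?\bs=(\S+)")

    def connection_states(lines):
        # Yield the connection-state token of every conn(...) line seen.
        for line in lines:
            match = STATE_RE.search(line)
            if match:
                yield match.group(1)

    if __name__ == "__main__":
        for state in connection_states(sys.stdin):
            print(state)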
2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f8352ffd640 1 -- 192.168.123.104:0/947440216 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f835804b020 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f836423a640 1 -- 192.168.123.104:0/947440216 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f835c0875f0 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f836423a640 1 -- 192.168.123.104:0/947440216 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f835c1b7680 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f8352ffd640 1 -- 192.168.123.104:0/947440216 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f8358003c90 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f8352ffd640 1 -- 192.168.123.104:0/947440216 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8358041040 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f8352ffd640 1 -- 192.168.123.104:0/947440216 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f8358058020 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f8352ffd640 1 -- 192.168.123.104:0/947440216 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f8358039070 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.432+0000 7f836423a640 1 -- 192.168.123.104:0/947440216 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f832c005180 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.440+0000 7f8352ffd640 1 -- 192.168.123.104:0/947440216 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f8358039e80 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.468+0000 7f836423a640 1 -- 192.168.123.104:0/947440216 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f832c005740 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.468+0000 7f8352ffd640 1 -- 192.168.123.104:0/947440216 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (secure 0 0 0) 0x7f8358047420 con 0x7f835c086c90 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.468+0000 7f8350ff9640 1 -- 192.168.123.104:0/947440216 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] 
conn(0x7f835c086c90 msgr2=0x7f835c0870b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.468+0000 7f8350ff9640 1 --2- 192.168.123.104:0/947440216 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c086c90 0x7f835c0870b0 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f835803f040 tx=0x7f83580045e0 comp rx=0 tx=0).stop 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.472+0000 7f8350ff9640 1 -- 192.168.123.104:0/947440216 shutdown_connections 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.472+0000 7f8350ff9640 1 --2- 192.168.123.104:0/947440216 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f835c086c90 0x7f835c0870b0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.472+0000 7f8350ff9640 1 -- 192.168.123.104:0/947440216 >> 192.168.123.104:0/947440216 conn(0x7f835c06fe80 msgr2=0x7f835c0707e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.472+0000 7f8350ff9640 1 -- 192.168.123.104:0/947440216 shutdown_connections 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:11.472+0000 7f8350ff9640 1 -- 192.168.123.104:0/947440216 wait complete. 2026-03-10T10:08:11.531 INFO:teuthology.orchestra.run.vm04.stdout:mgr not available, waiting (1/15)... 2026-03-10T10:08:11.631 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:11 vm04 bash[20997]: debug 2026-03-10T10:08:11.516+0000 7f0f83ff7140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T10:08:11.631 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:11 vm04 bash[20997]: debug 2026-03-10T10:08:11.624+0000 7f0f83ff7140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T10:08:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:11 vm04 bash[20742]: audit 2026-03-10T10:08:10.906149+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.104:0/3141641644' entity='client.admin' 2026-03-10T10:08:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:11 vm04 bash[20742]: audit 2026-03-10T10:08:10.906149+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.104:0/3141641644' entity='client.admin' 2026-03-10T10:08:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:11 vm04 bash[20742]: audit 2026-03-10T10:08:11.474847+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.104:0/947440216' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T10:08:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:11 vm04 bash[20742]: audit 2026-03-10T10:08:11.474847+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.104:0/947440216' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T10:08:12.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:11 vm04 bash[20997]: debug 2026-03-10T10:08:11.892+0000 7f0f83ff7140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T10:08:12.607 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: debug 2026-03-10T10:08:12.296+0000 7f0f83ff7140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T10:08:12.607 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: debug 2026-03-10T10:08:12.368+0000 7f0f83ff7140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T10:08:12.607 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T10:08:12.607 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T10:08:12.607 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: from numpy import show_config as show_numpy_config 2026-03-10T10:08:12.607 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: debug 2026-03-10T10:08:12.480+0000 7f0f83ff7140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T10:08:12.607 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: debug 2026-03-10T10:08:12.600+0000 7f0f83ff7140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T10:08:12.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: debug 2026-03-10T10:08:12.636+0000 7f0f83ff7140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T10:08:12.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: debug 2026-03-10T10:08:12.668+0000 7f0f83ff7140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T10:08:12.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: debug 2026-03-10T10:08:12.708+0000 7f0f83ff7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T10:08:12.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:12 vm04 bash[20997]: debug 2026-03-10T10:08:12.752+0000 7f0f83ff7140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T10:08:13.398 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.132+0000 7f0f83ff7140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T10:08:13.398 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.164+0000 7f0f83ff7140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T10:08:13.398 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.196+0000 7f0f83ff7140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T10:08:13.398 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.324+0000 
7f0f83ff7140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T10:08:13.398 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.360+0000 7f0f83ff7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T10:08:13.398 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.392+0000 7f0f83ff7140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T10:08:13.667 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.496+0000 7f0f83ff7140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "e4c1c9d6-1c68-11f1-a9bd-116050875839", 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T10:08:13.755 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 
2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T10:08:13.756 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T10:08:09:663579+0000", 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T10:08:09.664216+0000", 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T10:08:13.757 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 Processor -- start 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 -- start start 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d4074fd0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f61d40755a0 con 0x7f61d4074bd0 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d37fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d4074fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d37fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d4074fd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:35070/0 (socket says 192.168.123.104:35070) 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d37fe640 1 -- 192.168.123.104:0/4279008137 learned_addr learned my addr 192.168.123.104:0/4279008137 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d37fe640 1 -- 192.168.123.104:0/4279008137 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f61d410e4c0 con 0x7f61d4074bd0 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d37fe640 1 --2- 192.168.123.104:0/4279008137 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d4074fd0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f61c400a9c0 tx=0x7f61c4033650 comp rx=0 tx=0).ready entity=mon.0 client_cookie=5c66f95be75f8df9 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d27fc640 1 -- 192.168.123.104:0/4279008137 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f61c4037580 con 0x7f61d4074bd0 2026-03-10T10:08:13.757 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d27fc640 1 -- 192.168.123.104:0/4279008137 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f61c4037b70 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d27fc640 1 -- 192.168.123.104:0/4279008137 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 
0) 0x7f61c403cb90 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 -- 192.168.123.104:0/4279008137 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 msgr2=0x7f61d4074fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 --2- 192.168.123.104:0/4279008137 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d4074fd0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f61c400a9c0 tx=0x7f61c4033650 comp rx=0 tx=0).stop 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 -- 192.168.123.104:0/4279008137 shutdown_connections 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 --2- 192.168.123.104:0/4279008137 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d4074fd0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 -- 192.168.123.104:0/4279008137 >> 192.168.123.104:0/4279008137 conn(0x7f61d406fe50 msgr2=0x7f61d40722b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 -- 192.168.123.104:0/4279008137 shutdown_connections 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 -- 192.168.123.104:0/4279008137 wait complete. 
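The status JSON above reports "available": false under "mgrmap", which is why the client keeps re-running `ceph status --format json-pretty` and prints the "mgr not available, waiting (n/15)..." counter seen a few entries below. A minimal sketch of that readiness poll, assuming the shape this output suggests — the wait_for_mgr name, the 15-attempt budget, and the delay are illustrative guesses inferred from the log, not cephadm's actual code:

    import json
    import subprocess
    import time

    # Illustrative readiness poll: keep asking `ceph status` until the
    # mgrmap reports an available active mgr. Attempt budget and delay are
    # assumptions taken from the "(n/15)" counter in this log.
    def wait_for_mgr(attempts=15, delay=2.0):
        for n in range(1, attempts + 1):
            out = subprocess.run(
                ["ceph", "status", "--format", "json-pretty"],
                capture_output=True, text=True, check=True,
            ).stdout
            if json.loads(out)["mgrmap"]["available"]:
                print("mgr is available")
                return True
            print(f"mgr not available, waiting ({n}/{attempts})...")
            time.sleep(delay)
        return False

Each probe is a fresh CLI process, which is why every iteration replays the full connect/teardown sequence in the stderr around it.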
2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 Processor -- start 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 -- start start 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d41ab5f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d8e81640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f61d410ef90 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d37fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d41ab5f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d37fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d41ab5f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:35072/0 (socket says 192.168.123.104:35072) 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d37fe640 1 -- 192.168.123.104:0/2122167615 learned_addr learned my addr 192.168.123.104:0/2122167615 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.664+0000 7f61d37fe640 1 -- 192.168.123.104:0/2122167615 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f61d41abb30 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.668+0000 7f61d37fe640 1 --2- 192.168.123.104:0/2122167615 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d41ab5f0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f61c40081a0 tx=0x7f61c4037750 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.668+0000 7f61d0ff9640 1 -- 192.168.123.104:0/2122167615 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f61c4043070 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.668+0000 7f61d8e81640 1 -- 192.168.123.104:0/2122167615 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f61d41abdc0 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.668+0000 7f61d8e81640 1 -- 192.168.123.104:0/2122167615 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f61d41aeab0 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.668+0000 7f61d0ff9640 1 -- 192.168.123.104:0/2122167615 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f61c4010370 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.668+0000 7f61d0ff9640 1 -- 192.168.123.104:0/2122167615 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f61c4047600 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.668+0000 7f61d0ff9640 1 -- 192.168.123.104:0/2122167615 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 1) ==== 811+0+0 (secure 0 0 0) 0x7f61c403b070 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.668+0000 7f61d0ff9640 1 -- 192.168.123.104:0/2122167615 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f61c4051890 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.668+0000 7f61d8e81640 1 -- 192.168.123.104:0/2122167615 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f61a0005180 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.668+0000 7f61d0ff9640 1 -- 192.168.123.104:0/2122167615 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+75931 (secure 0 0 0) 0x7f61c4051430 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.700+0000 7f61d8e81640 1 -- 192.168.123.104:0/2122167615 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f61a0005740 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.700+0000 7f61d0ff9640 1 -- 192.168.123.104:0/2122167615 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1291 (secure 0 0 0) 0x7f61c403da90 con 0x7f61d4074bd0 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.708+0000 7f61ca7fc640 1 -- 192.168.123.104:0/2122167615 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 msgr2=0x7f61d41ab5f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.708+0000 7f61ca7fc640 1 --2- 192.168.123.104:0/2122167615 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 0x7f61d41ab5f0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f61c40081a0 tx=0x7f61c4037750 comp rx=0 tx=0).stop 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.708+0000 7f61ca7fc640 1 -- 192.168.123.104:0/2122167615 shutdown_connections 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.708+0000 7f61ca7fc640 1 --2- 192.168.123.104:0/2122167615 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f61d4074bd0 
0x7f61d41ab5f0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.708+0000 7f61ca7fc640 1 -- 192.168.123.104:0/2122167615 >> 192.168.123.104:0/2122167615 conn(0x7f61d406fe50 msgr2=0x7f61d40709b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.708+0000 7f61ca7fc640 1 -- 192.168.123.104:0/2122167615 shutdown_connections 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:13.708+0000 7f61ca7fc640 1 -- 192.168.123.104:0/2122167615 wait complete. 2026-03-10T10:08:13.758 INFO:teuthology.orchestra.run.vm04.stdout:mgr not available, waiting (2/15)... 2026-03-10T10:08:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:13 vm04 bash[20742]: audit 2026-03-10T10:08:13.706685+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.104:0/2122167615' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T10:08:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:13 vm04 bash[20742]: audit 2026-03-10T10:08:13.706685+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.104:0/2122167615' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T10:08:13.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.660+0000 7f0f83ff7140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T10:08:13.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.840+0000 7f0f83ff7140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T10:08:13.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.872+0000 7f0f83ff7140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T10:08:13.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:13 vm04 bash[20997]: debug 2026-03-10T10:08:13.908+0000 7f0f83ff7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T10:08:14.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:14 vm04 bash[20997]: debug 2026-03-10T10:08:14.052+0000 7f0f83ff7140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T10:08:14.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:14 vm04 bash[20997]: debug 2026-03-10T10:08:14.264+0000 7f0f83ff7140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: cluster 2026-03-10T10:08:14.269965+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: cluster 2026-03-10T10:08:14.269965+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: cluster 2026-03-10T10:08:14.273720+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00383463s) 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: cluster 2026-03-10T10:08:14.273720+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00383463s) 2026-03-10T10:08:15.204 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.277013+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.277013+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.277453+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.277453+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.277836+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.277836+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.278523+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.278523+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.278924+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.278924+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: cluster 2026-03-10T10:08:14.283610+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: cluster 2026-03-10T10:08:14.283610+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.292305+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.292305+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.293364+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.293364+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.295226+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.295226+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.296438+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.296438+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.298870+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' 2026-03-10T10:08:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:14 vm04 bash[20742]: audit 2026-03-10T10:08:14.298870+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' 2026-03-10T10:08:16.049 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsid": "e4c1c9d6-1c68-11f1-a9bd-116050875839", 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 0 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:08:16.050 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T10:08:09:663579+0000", 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: 
stdout "mgrmap": { 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T10:08:16.050 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ], 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T10:08:09.664216+0000", 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }, 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bc4b68640 1 Processor -- start 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bc4b68640 1 -- start start 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bc4b68640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc0108f50 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bc4b68640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f3bc0109520 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bbe575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc0108f50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bbe575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc0108f50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55102/0 (socket says 192.168.123.104:55102) 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bbe575640 1 -- 192.168.123.104:0/2860701123 learned_addr learned my addr 192.168.123.104:0/2860701123 (peer_addr_for_me 
v2:192.168.123.104:0/0) 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bbe575640 1 -- 192.168.123.104:0/2860701123 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3bc0109d50 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bbe575640 1 --2- 192.168.123.104:0/2860701123 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc0108f50 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f3bb4009b80 tx=0x7f3bb402f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=8b4880305c9039ef server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bbd573640 1 -- 192.168.123.104:0/2860701123 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3bb403c070 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bbd573640 1 -- 192.168.123.104:0/2860701123 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f3bb4037440 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bbd573640 1 -- 192.168.123.104:0/2860701123 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3bb40353a0 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2860701123 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 msgr2=0x7f3bc0108f50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.872+0000 7f3bc4b68640 1 --2- 192.168.123.104:0/2860701123 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc0108f50 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f3bb4009b80 tx=0x7f3bb402f190 comp rx=0 tx=0).stop 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2860701123 shutdown_connections 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 --2- 192.168.123.104:0/2860701123 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc0108f50 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2860701123 >> 192.168.123.104:0/2860701123 conn(0x7f3bc007c030 msgr2=0x7f3bc007c440 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2860701123 shutdown_connections 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2860701123 wait complete. 
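Each of these short-lived CLI sessions walks the msgr2 handshake visible in the stderr: conn(...) starts at s=NONE on .connect, passes through BANNER_CONNECTING and HELLO_CONNECTING while banners and learned addresses are exchanged, reaches s=READY once the secure session is up, and tears down via mark_down / CLOSED / shutdown_connections, ending in "wait complete.". A throwaway tracer for those transitions, keyed to the `conn(0x...)`/`s=STATE` fields exactly as they appear above (purely illustrative, not a teuthology or Ceph utility):

    import re
    from collections import defaultdict

    # Pull `s=<STATE>` out of each `conn(0x...)` record and collect the
    # state sequence per connection pointer, collapsing adjacent repeats.
    CONN_RE = re.compile(r"conn\((0x[0-9a-f]+) .*?s=([A-Z_]+)")

    def conn_states(log_text):
        states = defaultdict(list)
        for line in log_text.splitlines():
            m = CONN_RE.search(line)
            if m:
                ptr, state = m.groups()
                if not states[ptr] or states[ptr][-1] != state:
                    states[ptr].append(state)
        return dict(states)

    # Caveat: pointer values are heap addresses and get reused across the
    # back-to-back sessions in this log, so traces are only meaningful
    # within one session.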
2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 Processor -- start 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 -- start start 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc019e9b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bbe575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc019e9b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bbe575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc019e9b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55106/0 (socket says 192.168.123.104:55106) 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bbe575640 1 -- 192.168.123.104:0/2656217666 learned_addr learned my addr 192.168.123.104:0/2656217666 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f3bc010cf80 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bbe575640 1 -- 192.168.123.104:0/2656217666 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3bc019eef0 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bbe575640 1 --2- 192.168.123.104:0/2656217666 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc019e9b0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f3bb403a040 tx=0x7f3bb4035480 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3ba77fe640 1 -- 192.168.123.104:0/2656217666 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3bb403c070 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3ba77fe640 1 -- 192.168.123.104:0/2656217666 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f3bb4044070 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3ba77fe640 1 -- 192.168.123.104:0/2656217666 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3bb4035920 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2656217666 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3bc019f180 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2656217666 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3bc019f5a0 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3ba77fe640 1 -- 192.168.123.104:0/2656217666 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 3) ==== 50095+0+0 (secure 0 0 0) 0x7f3bb4035ac0 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2656217666 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3b84005180 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3ba77fe640 1 --2- 192.168.123.104:0/2656217666 >> v2:192.168.123.104:6800/887024688 conn(0x7f3b9403da90 0x7f3b9403ff50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.876+0000 7f3ba77fe640 1 -- 192.168.123.104:0/2656217666 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f3bb4076270 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.880+0000 7f3bbdd74640 1 --2- 192.168.123.104:0/2656217666 >> v2:192.168.123.104:6800/887024688 conn(0x7f3b9403da90 0x7f3b9403ff50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.880+0000 7f3bbdd74640 1 --2- 192.168.123.104:0/2656217666 >> v2:192.168.123.104:6800/887024688 conn(0x7f3b9403da90 0x7f3b9403ff50 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f3ba80099c0 tx=0x7f3ba8006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:15.880+0000 7f3ba77fe640 1 -- 192.168.123.104:0/2656217666 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3bb404e1f0 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.008+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2656217666 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "status", "format": "json-pretty"} v 0) -- 0x7f3b84005470 con 0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.008+0000 7f3ba77fe640 1 -- 192.168.123.104:0/2656217666 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "status", "format": "json-pretty"}]=0 v0) ==== 79+0+1290 (secure 0 0 0) 0x7f3bb4035de0 con 
0x7f3bc0108b50 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.008+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2656217666 >> v2:192.168.123.104:6800/887024688 conn(0x7f3b9403da90 msgr2=0x7f3b9403ff50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.008+0000 7f3bc4b68640 1 --2- 192.168.123.104:0/2656217666 >> v2:192.168.123.104:6800/887024688 conn(0x7f3b9403da90 0x7f3b9403ff50 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f3ba80099c0 tx=0x7f3ba8006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.008+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2656217666 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 msgr2=0x7f3bc019e9b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.008+0000 7f3bc4b68640 1 --2- 192.168.123.104:0/2656217666 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc019e9b0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f3bb403a040 tx=0x7f3bb4035480 comp rx=0 tx=0).stop 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.012+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2656217666 shutdown_connections 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.012+0000 7f3bc4b68640 1 --2- 192.168.123.104:0/2656217666 >> v2:192.168.123.104:6800/887024688 conn(0x7f3b9403da90 0x7f3b9403ff50 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.012+0000 7f3bc4b68640 1 --2- 192.168.123.104:0/2656217666 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3bc0108b50 0x7f3bc019e9b0 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:16.051 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.012+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2656217666 >> 192.168.123.104:0/2656217666 conn(0x7f3bc007c030 msgr2=0x7f3bc0106080 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:16.052 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.012+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2656217666 shutdown_connections 2026-03-10T10:08:16.052 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.012+0000 7f3bc4b68640 1 -- 192.168.123.104:0/2656217666 wait complete. 
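One detail worth noticing across these probes: the `get_command_descriptions` reply grows from 75931 data bytes (while the mon was the only command source) to 195034 once mgr.y is active, consistent with the active mgr contributing its module commands to the description table. The `==== a+b+c ====` triplet on dispatched and received messages reads as the front/middle/data section lengths in bytes — an interpretation of the messenger log format, not documented output. A tiny extractor keyed to that framing, for what it's worth:

    import re

    # Interpret the `==== a+b+c ====` triplet as (front, middle, data)
    # section lengths in bytes; returns None for lines without it.
    FRAME_RE = re.compile(r"==== (\d+)\+(\d+)\+(\d+) ")

    def payload_sizes(line):
        m = FRAME_RE.search(line)
        return tuple(int(x) for x in m.groups()) if m else None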
2026-03-10T10:08:16.052 INFO:teuthology.orchestra.run.vm04.stdout:mgr is available 2026-03-10T10:08:16.296 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T10:08:16.296 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T10:08:16.296 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout fsid = e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.104:3300,v1:192.168.123.104:6789] 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.148+0000 7fa7d2d0f640 1 Processor -- start 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.148+0000 7fa7d2d0f640 1 -- start start 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc07ac30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fa7cc07b170 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d1d0d640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc07ac30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d1d0d640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc07ac30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55114/0 (socket says 192.168.123.104:55114) 2026-03-10T10:08:16.297 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d1d0d640 1 -- 192.168.123.104:0/721363772 learned_addr learned my addr 192.168.123.104:0/721363772 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d1d0d640 1 -- 192.168.123.104:0/721363772 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa7cc07b2f0 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d1d0d640 1 --2- 192.168.123.104:0/721363772 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc07ac30 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fa7c0009920 tx=0x7fa7c002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=1d44428363c4e42f server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d0d0b640 1 -- 192.168.123.104:0/721363772 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa7c003c070 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d0d0b640 1 -- 192.168.123.104:0/721363772 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fa7c0037440 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/721363772 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 msgr2=0x7fa7cc07ac30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 --2- 192.168.123.104:0/721363772 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc07ac30 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7fa7c0009920 tx=0x7fa7c002ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/721363772 shutdown_connections 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 --2- 192.168.123.104:0/721363772 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc07ac30 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/721363772 >> 192.168.123.104:0/721363772 conn(0x7fa7cc101ed0 msgr2=0x7fa7cc1042f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/721363772 shutdown_connections 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/721363772 wait complete. 
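The ini fragment on stdout above ([global]/[mgr]/[osd]) is consistent with the output of `ceph config assimilate-conf`, which the next session issues as a mon command: the mon absorbs every option it recognizes into its config database and hands back a minimal conf holding the residue — fsid and mon_host, which have to stay in the file, plus options it apparently declined or did not recognize. Driving the same step by hand might look like the following sketch; the sample option is arbitrary, not taken from this job:

    import subprocess
    import tempfile

    # Feed an ini-style conf to `ceph config assimilate-conf`; options the
    # mon accepts move into its config store, and the command prints back
    # whatever it could not take.
    conf = "[osd]\nosd_sloppy_crc = true\n"
    with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
        f.write(conf)
        path = f.name

    residue = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", path],
        capture_output=True, text=True, check=True,
    ).stdout
    print(residue or "everything assimilated")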
2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 Processor -- start 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 -- start start 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc1a2d50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fa7cc1086e0 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d1d0d640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc1a2d50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d1d0d640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc1a2d50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55126/0 (socket says 192.168.123.104:55126) 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d1d0d640 1 -- 192.168.123.104:0/1778658827 learned_addr learned my addr 192.168.123.104:0/1778658827 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d1d0d640 1 -- 192.168.123.104:0/1778658827 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa7cc1a3290 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d1d0d640 1 --2- 192.168.123.104:0/1778658827 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc1a2d50 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fa7c0035ed0 tx=0x7fa7c0035f00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7baffd640 1 -- 192.168.123.104:0/1778658827 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa7c0045070 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7baffd640 1 -- 192.168.123.104:0/1778658827 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fa7c0037ca0 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/1778658827 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa7cc1a3520 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.152+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/1778658827 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fa7cc1a6210 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.156+0000 7fa7baffd640 1 -- 192.168.123.104:0/1778658827 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa7c003c070 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.156+0000 7fa7baffd640 1 -- 192.168.123.104:0/1778658827 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 3) ==== 50095+0+0 (secure 0 0 0) 0x7fa7c004a430 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.156+0000 7fa7baffd640 1 --2- 192.168.123.104:0/1778658827 >> v2:192.168.123.104:6800/887024688 conn(0x7fa7a803d720 0x7fa7a803fbe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.156+0000 7fa7baffd640 1 -- 192.168.123.104:0/1778658827 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fa7c0075f60 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.156+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/1778658827 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa7cc07ac30 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.160+0000 7fa7baffd640 1 -- 192.168.123.104:0/1778658827 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa7c00a69f0 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.160+0000 7fa7d150c640 1 --2- 192.168.123.104:0/1778658827 >> v2:192.168.123.104:6800/887024688 conn(0x7fa7a803d720 0x7fa7a803fbe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.160+0000 7fa7d150c640 1 --2- 192.168.123.104:0/1778658827 >> v2:192.168.123.104:6800/887024688 conn(0x7fa7a803d720 0x7fa7a803fbe0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fa7bc009a10 tx=0x7fa7bc006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.248+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/1778658827 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "config assimilate-conf"} v 0) -- 0x7fa7cc19bee0 con 0x7fa7cc07c7d0 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.248+0000 7fa7baffd640 1 -- 192.168.123.104:0/1778658827 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "config assimilate-conf"}]=0 v3) ==== 70+0+380 (secure 0 0 0) 0x7fa7c0033200 con 0x7fa7cc07c7d0 
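The mon_command({"prefix": "config assimilate-conf"}) acked above is the step where bootstrap folds the minimal ceph.conf printed earlier (the [global], [mgr] and [osd] sections) into the monitors' central config database. A sketch of the equivalent manual step; the conf path is an assumption for illustration:

    import subprocess

    # Feed a local ceph.conf into the mon config store. Options the
    # cluster absorbs are stored centrally; anything it cannot place
    # is echoed back as a residual conf on stdout.
    subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
        check=True,
    )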
2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.252+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/1778658827 >> v2:192.168.123.104:6800/887024688 conn(0x7fa7a803d720 msgr2=0x7fa7a803fbe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.252+0000 7fa7d2d0f640 1 --2- 192.168.123.104:0/1778658827 >> v2:192.168.123.104:6800/887024688 conn(0x7fa7a803d720 0x7fa7a803fbe0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fa7bc009a10 tx=0x7fa7bc006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.252+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/1778658827 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 msgr2=0x7fa7cc1a2d50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.252+0000 7fa7d2d0f640 1 --2- 192.168.123.104:0/1778658827 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc1a2d50 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7fa7c0035ed0 tx=0x7fa7c0035f00 comp rx=0 tx=0).stop 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.252+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/1778658827 shutdown_connections 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.252+0000 7fa7d2d0f640 1 --2- 192.168.123.104:0/1778658827 >> v2:192.168.123.104:6800/887024688 conn(0x7fa7a803d720 0x7fa7a803fbe0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.252+0000 7fa7d2d0f640 1 --2- 192.168.123.104:0/1778658827 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa7cc07c7d0 0x7fa7cc1a2d50 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.252+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/1778658827 >> 192.168.123.104:0/1778658827 conn(0x7fa7cc101ed0 msgr2=0x7fa7cc1042c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:16.297 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.252+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/1778658827 shutdown_connections 2026-03-10T10:08:16.298 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.252+0000 7fa7d2d0f640 1 -- 192.168.123.104:0/1778658827 wait complete. 2026-03-10T10:08:16.298 INFO:teuthology.orchestra.run.vm04.stdout:Enabling cephadm module... 2026-03-10T10:08:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:16 vm04 bash[20742]: cluster 2026-03-10T10:08:15.278903+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.00902s) 2026-03-10T10:08:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:16 vm04 bash[20742]: cluster 2026-03-10T10:08:15.278903+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.00902s) 2026-03-10T10:08:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:16 vm04 bash[20742]: audit 2026-03-10T10:08:16.013180+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 
192.168.123.104:0/2656217666' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T10:08:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:16 vm04 bash[20742]: audit 2026-03-10T10:08:16.013180+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.104:0/2656217666' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T10:08:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:16 vm04 bash[20742]: audit 2026-03-10T10:08:16.253402+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.104:0/1778658827' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T10:08:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:16 vm04 bash[20742]: audit 2026-03-10T10:08:16.253402+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.104:0/1778658827' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 Processor -- start 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 -- start start 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb8108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fbbb8109540 con 0x7fbbb8108b70 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbd95f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb8108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbd95f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb8108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55136/0 (socket says 192.168.123.104:55136) 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbd95f640 1 -- 192.168.123.104:0/76609053 learned_addr learned my addr 192.168.123.104:0/76609053 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbd95f640 1 -- 192.168.123.104:0/76609053 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbbb8109d70 con 0x7fbbb8108b70 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbd95f640 1 --2- 192.168.123.104:0/76609053 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb8108f70 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fbbac009920 tx=0x7fbbac02ef20 comp rx=0 
tx=0).ready entity=mon.0 client_cookie=5724624c4ed2ef99 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbc95d640 1 -- 192.168.123.104:0/76609053 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbbac03c070 con 0x7fbbb8108b70 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbc95d640 1 -- 192.168.123.104:0/76609053 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fbbac037440 con 0x7fbbb8108b70 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 -- 192.168.123.104:0/76609053 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 msgr2=0x7fbbb8108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 --2- 192.168.123.104:0/76609053 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb8108f70 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fbbac009920 tx=0x7fbbac02ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 -- 192.168.123.104:0/76609053 shutdown_connections 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 --2- 192.168.123.104:0/76609053 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb8108f70 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 -- 192.168.123.104:0/76609053 >> 192.168.123.104:0/76609053 conn(0x7fbbb807c040 msgr2=0x7fbbb807c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 -- 192.168.123.104:0/76609053 shutdown_connections 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.404+0000 7fbbbfbea640 1 -- 192.168.123.104:0/76609053 wait complete. 
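Each of these connect/subscribe/tear-down cycles is one fresh /usr/bin/ceph process doing the standard client handshake: connect to a mon, mon_subscribe to config and monmap, fetch get_command_descriptions, and, for mgr-targeted commands, learn the mgrmap and dial the active mgr at v2:192.168.123.104:6800. Roughly the same sequence can be driven from the python3-rados binding, sketched here assuming a readable /etc/ceph/ceph.conf and client.admin keyring:

    import json
    import rados  # python3-rados

    # What each short-lived ceph CLI process does under the hood:
    # read the conf, connect to a mon (subscribing to config+monmap),
    # issue one command, then shut the connection down again.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "mgr stat", "format": "json"}), b""
    )
    print(ret, outbuf.decode())
    cluster.shutdown()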
2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbfbea640 1 Processor -- start 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbfbea640 1 -- start start 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbfbea640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb819ec10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbfbea640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fbbb810cfa0 con 0x7fbbb8108b70 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbd95f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb819ec10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:17.386 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbd95f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb819ec10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55142/0 (socket says 192.168.123.104:55142) 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbd95f640 1 -- 192.168.123.104:0/756455820 learned_addr learned my addr 192.168.123.104:0/756455820 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbd95f640 1 -- 192.168.123.104:0/756455820 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbbb819f150 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbd95f640 1 --2- 192.168.123.104:0/756455820 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb819ec10 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fbbac0379f0 tx=0x7fbbac037a20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbba6ffd640 1 -- 192.168.123.104:0/756455820 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbbac045070 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbba6ffd640 1 -- 192.168.123.104:0/756455820 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fbbac0409b0 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbba6ffd640 1 -- 192.168.123.104:0/756455820 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fbbac03c070 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbfbea640 1 -- 192.168.123.104:0/756455820 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fbbb819f3e0 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbfbea640 1 -- 192.168.123.104:0/756455820 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fbbb81a20d0 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbba6ffd640 1 -- 192.168.123.104:0/756455820 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 4) ==== 50201+0+0 (secure 0 0 0) 0x7fbbac040560 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.408+0000 7fbbbfbea640 1 -- 192.168.123.104:0/756455820 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbb80005180 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.412+0000 7fbba6ffd640 1 --2- 192.168.123.104:0/756455820 >> v2:192.168.123.104:6800/887024688 conn(0x7fbb9403d840 0x7fbb9403fd00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.412+0000 7fbba6ffd640 1 -- 192.168.123.104:0/756455820 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7fbbac075ef0 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.412+0000 7fbba6ffd640 1 -- 192.168.123.104:0/756455820 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fbbac03c210 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.412+0000 7fbbbd15e640 1 --2- 192.168.123.104:0/756455820 >> v2:192.168.123.104:6800/887024688 conn(0x7fbb9403d840 0x7fbb9403fd00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.412+0000 7fbbbd15e640 1 --2- 192.168.123.104:0/756455820 >> v2:192.168.123.104:6800/887024688 conn(0x7fbb9403d840 0x7fbb9403fd00 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fbba8009a10 tx=0x7fbba8006eb0 comp rx=0 tx=0).ready entity=mgr.14100 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:16.524+0000 7fbbbfbea640 1 -- 192.168.123.104:0/756455820 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) -- 0x7fbb80005470 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.320+0000 7fbba6ffd640 1 -- 192.168.123.104:0/756455820 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "cephadm"}]=0 v5) ==== 86+0+0 (secure 0 0 0) 0x7fbbac048c60 con 
0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbba6ffd640 1 -- 192.168.123.104:0/756455820 <== mon.0 v2:192.168.123.104:3300/0 8 ==== mgrmap(e 5) ==== 50212+0+0 (secure 0 0 0) 0x7fbbac03fc90 con 0x7fbbb8108b70 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbbbfbea640 1 -- 192.168.123.104:0/756455820 >> v2:192.168.123.104:6800/887024688 conn(0x7fbb9403d840 msgr2=0x7fbb9403fd00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbbbfbea640 1 --2- 192.168.123.104:0/756455820 >> v2:192.168.123.104:6800/887024688 conn(0x7fbb9403d840 0x7fbb9403fd00 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7fbba8009a10 tx=0x7fbba8006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbbbfbea640 1 -- 192.168.123.104:0/756455820 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 msgr2=0x7fbbb819ec10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbbbfbea640 1 --2- 192.168.123.104:0/756455820 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb819ec10 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fbbac0379f0 tx=0x7fbbac037a20 comp rx=0 tx=0).stop 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbbbfbea640 1 -- 192.168.123.104:0/756455820 shutdown_connections 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbbbfbea640 1 --2- 192.168.123.104:0/756455820 >> v2:192.168.123.104:6800/887024688 conn(0x7fbb9403d840 0x7fbb9403fd00 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbbbfbea640 1 --2- 192.168.123.104:0/756455820 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbbb8108b70 0x7fbbb819ec10 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbbbfbea640 1 -- 192.168.123.104:0/756455820 >> 192.168.123.104:0/756455820 conn(0x7fbbb807c040 msgr2=0x7fbbb8106340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbbbfbea640 1 -- 192.168.123.104:0/756455820 shutdown_connections 2026-03-10T10:08:17.387 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.324+0000 7fbbbfbea640 1 -- 192.168.123.104:0/756455820 wait complete. 
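The "Enabling cephadm module..." step resolves here: mon_command({"prefix": "mgr module enable", "module": "cephadm"}) is acked at v5 and a new mgrmap (e 5) is published, which is what forces the active mgr to respawn with the module loaded. In CLI terms the bootstrap is doing the equivalent of the following (the 'orch set backend' follow-up is the usual next step in a cephadm deployment, included as an assumption):

    import subprocess

    # Enable the cephadm mgr module, then select it as the orchestrator
    # backend; both are standard ceph commands.
    subprocess.run(["ceph", "mgr", "module", "enable", "cephadm"], check=True)
    subprocess.run(["ceph", "orch", "set", "backend", "cephadm"], check=True)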
2026-03-10T10:08:17.604 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:17 vm04 bash[20742]: cluster 2026-03-10T10:08:16.284952+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-10T10:08:17.604 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:17 vm04 bash[20742]: cluster 2026-03-10T10:08:16.284952+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-10T10:08:17.604 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:17 vm04 bash[20742]: audit 2026-03-10T10:08:16.529292+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.104:0/756455820' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T10:08:17.605 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:17 vm04 bash[20742]: audit 2026-03-10T10:08:16.529292+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.104:0/756455820' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T10:08:17.605 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:17 vm04 bash[20997]: ignoring --setuser ceph since I am not root 2026-03-10T10:08:17.605 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:17 vm04 bash[20997]: ignoring --setgroup ceph since I am not root 2026-03-10T10:08:17.605 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:17 vm04 bash[20997]: debug 2026-03-10T10:08:17.440+0000 7fcecf06b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T10:08:17.605 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:17 vm04 bash[20997]: debug 2026-03-10T10:08:17.484+0000 7fcecf06b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 Processor -- start 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 -- start start 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f8220075000 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f82200755d0 con 0x7f8220074c00 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821e575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f8220075000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:17.703 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821e575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f8220075000 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55162/0 (socket says 192.168.123.104:55162) 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821e575640 1 -- 192.168.123.104:0/1268885422 learned_addr learned my addr 192.168.123.104:0/1268885422 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821e575640 1 -- 192.168.123.104:0/1268885422 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f822010e4f0 con 0x7f8220074c00 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821e575640 1 --2- 192.168.123.104:0/1268885422 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f8220075000 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f82100089a0 tx=0x7f8210031440 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d7e964b4d8032948 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821dd74640 1 -- 192.168.123.104:0/1268885422 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f821003c480 con 0x7f8220074c00 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821dd74640 1 -- 192.168.123.104:0/1268885422 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f821003ca70 con 0x7f8220074c00 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 -- 192.168.123.104:0/1268885422 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 msgr2=0x7f8220075000 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 --2- 192.168.123.104:0/1268885422 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f8220075000 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f82100089a0 tx=0x7f8210031440 comp rx=0 tx=0).stop 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 -- 192.168.123.104:0/1268885422 shutdown_connections 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 --2- 192.168.123.104:0/1268885422 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f8220075000 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 -- 192.168.123.104:0/1268885422 >> 192.168.123.104:0/1268885422 conn(0x7f822006fe80 msgr2=0x7f82200722e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 -- 192.168.123.104:0/1268885422 shutdown_connections 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 -- 192.168.123.104:0/1268885422 wait complete. 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 Processor -- start 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 -- start start 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f82201a2e10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f8224df4640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f822010efc0 con 0x7f8220074c00 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821e575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f82201a2e10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821e575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f82201a2e10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55170/0 (socket says 192.168.123.104:55170) 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821e575640 1 -- 192.168.123.104:0/1685368523 learned_addr learned my addr 192.168.123.104:0/1685368523 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.516+0000 7f821e575640 1 -- 192.168.123.104:0/1685368523 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f82201a3350 con 0x7f8220074c00 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f821e575640 1 --2- 192.168.123.104:0/1685368523 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f82201a2e10 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f8210009c80 tx=0x7f8210009cb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f82177fe640 1 -- 192.168.123.104:0/1685368523 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f821003c690 con 0x7f8220074c00 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f8224df4640 1 -- 192.168.123.104:0/1685368523 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f82201a35e0 con 0x7f8220074c00 2026-03-10T10:08:17.703 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f8224df4640 1 -- 192.168.123.104:0/1685368523 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f82201a42c0 con 0x7f8220074c00 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f82177fe640 1 -- 192.168.123.104:0/1685368523 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f8210031e50 con 0x7f8220074c00 2026-03-10T10:08:17.703 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f8224df4640 1 -- 192.168.123.104:0/1685368523 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8220075000 con 0x7f8220074c00 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f82177fe640 1 -- 192.168.123.104:0/1685368523 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f821000b240 con 0x7f8220074c00 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f82177fe640 1 -- 192.168.123.104:0/1685368523 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 5) ==== 50212+0+0 (secure 0 0 0) 0x7f821000bce0 con 0x7f8220074c00 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f82177fe640 1 --2- 192.168.123.104:0/1685368523 >> v2:192.168.123.104:6800/887024688 conn(0x7f81fc03dc00 0x7f81fc0400c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f8217fff640 1 -- 192.168.123.104:0/1685368523 >> v2:192.168.123.104:6800/887024688 conn(0x7f81fc03dc00 msgr2=0x7f81fc0400c0 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.104:6800/887024688 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f8217fff640 1 --2- 192.168.123.104:0/1685368523 >> v2:192.168.123.104:6800/887024688 conn(0x7f81fc03dc00 0x7f81fc0400c0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.520+0000 7f82177fe640 1 -- 192.168.123.104:0/1685368523 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7f8210076910 con 0x7f8220074c00 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.528+0000 7f82177fe640 1 -- 192.168.123.104:0/1685368523 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f821003ca10 con 0x7f8220074c00 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.636+0000 7f8224df4640 1 -- 192.168.123.104:0/1685368523 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7f82201a4870 con 0x7f8220074c00 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.636+0000 7f82177fe640 1 -- 192.168.123.104:0/1685368523 <== mon.0 
v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v5) ==== 56+0+88 (secure 0 0 0) 0x7f821000bac0 con 0x7f8220074c00 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.640+0000 7f82157fa640 1 -- 192.168.123.104:0/1685368523 >> v2:192.168.123.104:6800/887024688 conn(0x7f81fc03dc00 msgr2=0x7f81fc0400c0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.640+0000 7f82157fa640 1 --2- 192.168.123.104:0/1685368523 >> v2:192.168.123.104:6800/887024688 conn(0x7f81fc03dc00 0x7f81fc0400c0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.640+0000 7f82157fa640 1 -- 192.168.123.104:0/1685368523 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 msgr2=0x7f82201a2e10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.640+0000 7f82157fa640 1 --2- 192.168.123.104:0/1685368523 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f82201a2e10 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f8210009c80 tx=0x7f8210009cb0 comp rx=0 tx=0).stop 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.644+0000 7f82157fa640 1 -- 192.168.123.104:0/1685368523 shutdown_connections 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.644+0000 7f82157fa640 1 --2- 192.168.123.104:0/1685368523 >> v2:192.168.123.104:6800/887024688 conn(0x7f81fc03dc00 0x7f81fc0400c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.644+0000 7f82157fa640 1 --2- 192.168.123.104:0/1685368523 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8220074c00 0x7f82201a2e10 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.644+0000 7f82157fa640 1 -- 192.168.123.104:0/1685368523 >> 192.168.123.104:0/1685368523 conn(0x7f822006fe80 msgr2=0x7f8220070b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.644+0000 7f82157fa640 1 -- 192.168.123.104:0/1685368523 shutdown_connections 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.644+0000 7f82157fa640 1 -- 192.168.123.104:0/1685368523 wait complete. 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for the mgr to restart... 2026-03-10T10:08:17.704 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr epoch 5... 
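Enabling the module bumped the mgrmap, so bootstrap now polls "mgr stat" until the active mgr has respawned into a newer epoch (it later observes {"mgrmap_epoch": 7, "initialized": true}). A polling loop in the same spirit; the helper name and timeout are assumptions, not cephadm's actual implementation:

    import json
    import subprocess
    import time

    def wait_for_mgr_epoch(min_epoch: int, timeout: float = 60.0) -> None:
        """Poll 'ceph mgr dump' until the mgrmap epoch reaches min_epoch."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.check_output(
                ["ceph", "mgr", "dump", "--format", "json"]
            )
            if json.loads(out)["epoch"] >= min_epoch:
                return
            time.sleep(1)
        raise TimeoutError(f"mgr map never reached epoch {min_epoch}")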
2026-03-10T10:08:17.910 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:17 vm04 bash[20997]: debug 2026-03-10T10:08:17.600+0000 7fcecf06b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T10:08:18.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:17 vm04 bash[20997]: debug 2026-03-10T10:08:17.904+0000 7fcecf06b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T10:08:18.635 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:18 vm04 bash[20742]: audit 2026-03-10T10:08:17.323368+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.104:0/756455820' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T10:08:18.635 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:18 vm04 bash[20742]: audit 2026-03-10T10:08:17.323368+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.104:0/756455820' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T10:08:18.635 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:18 vm04 bash[20742]: cluster 2026-03-10T10:08:17.328303+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-10T10:08:18.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:18 vm04 bash[20742]: cluster 2026-03-10T10:08:17.328303+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-10T10:08:18.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:18 vm04 bash[20742]: audit 2026-03-10T10:08:17.641899+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.104:0/1685368523' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T10:08:18.636 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:18 vm04 bash[20742]: audit 2026-03-10T10:08:17.641899+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.104:0/1685368523' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T10:08:18.636 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: debug 2026-03-10T10:08:18.304+0000 7fcecf06b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T10:08:18.636 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: debug 2026-03-10T10:08:18.384+0000 7fcecf06b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T10:08:18.636 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T10:08:18.636 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
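The burst of "Module X has missing NOTIFY_TYPES member" lines is the respawned mgr re-importing every Python module; each is a load-time warning (the module does not declare which notification types it consumes), not a failure, and the scipy/NumPy sub-interpreter warning is likewise benign. If in doubt, module health can be checked after the restart; a hypothetical spot check:

    import json
    import subprocess

    # A module that actually failed to load would drop out of the
    # enabled/always-on sets reported by 'ceph mgr module ls'.
    out = subprocess.check_output(
        ["ceph", "mgr", "module", "ls", "--format", "json"]
    )
    mods = json.loads(out)
    print(sorted(mods["always_on_modules"]), sorted(mods["enabled_modules"]))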
2026-03-10T10:08:18.636 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: from numpy import show_config as show_numpy_config 2026-03-10T10:08:18.636 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: debug 2026-03-10T10:08:18.504+0000 7fcecf06b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T10:08:18.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: debug 2026-03-10T10:08:18.628+0000 7fcecf06b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T10:08:18.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: debug 2026-03-10T10:08:18.664+0000 7fcecf06b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T10:08:18.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: debug 2026-03-10T10:08:18.696+0000 7fcecf06b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T10:08:18.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: debug 2026-03-10T10:08:18.732+0000 7fcecf06b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T10:08:18.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:18 vm04 bash[20997]: debug 2026-03-10T10:08:18.780+0000 7fcecf06b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T10:08:19.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 2026-03-10T10:08:19.180+0000 7fcecf06b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T10:08:19.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 2026-03-10T10:08:19.212+0000 7fcecf06b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T10:08:19.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 2026-03-10T10:08:19.248+0000 7fcecf06b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T10:08:19.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 2026-03-10T10:08:19.384+0000 7fcecf06b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T10:08:19.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 2026-03-10T10:08:19.420+0000 7fcecf06b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T10:08:19.863 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 2026-03-10T10:08:19.456+0000 7fcecf06b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T10:08:19.863 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 2026-03-10T10:08:19.560+0000 7fcecf06b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T10:08:19.863 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 2026-03-10T10:08:19.700+0000 7fcecf06b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T10:08:20.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 2026-03-10T10:08:19.856+0000 7fcecf06b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T10:08:20.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 2026-03-10T10:08:19.888+0000 7fcecf06b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T10:08:20.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:19 vm04 bash[20997]: debug 
2026-03-10T10:08:19.928+0000 7fcecf06b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T10:08:20.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:20 vm04 bash[20997]: debug 2026-03-10T10:08:20.068+0000 7fcecf06b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: cluster 2026-03-10T10:08:20.274732+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: cluster 2026-03-10T10:08:20.274918+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: cluster 2026-03-10T10:08:20.279317+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: cluster 2026-03-10T10:08:20.279391+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00455868s)
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.283005+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.283065+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.283801+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.283959+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.284138+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: cluster 2026-03-10T10:08:20.288740+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.297079+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.299817+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.315592+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.316475+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.318310+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:20 vm04 bash[20742]: audit 2026-03-10T10:08:20.324639+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T10:08:20.704 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:20 vm04 bash[20997]: debug 2026-03-10T10:08:20.268+0000 7fcecf06b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {
2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7,
2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout }
2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3d6b7f640 1 Processor -- start
2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3d6b7f640 1 -- start start
2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3d6b7f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d0075000 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3d6b7f640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7ff3d00755d0 con 0x7ff3d0074c00
2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3d48f4640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d0075000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3d48f4640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d0075000 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0
tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55182/0 (socket says 192.168.123.104:55182) 2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3d48f4640 1 -- 192.168.123.104:0/2521515504 learned_addr learned my addr 192.168.123.104:0/2521515504 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3d48f4640 1 -- 192.168.123.104:0/2521515504 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff3d010e4f0 con 0x7ff3d0074c00 2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3d48f4640 1 --2- 192.168.123.104:0/2521515504 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d0075000 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7ff3c000a9c0 tx=0x7ff3c0033650 comp rx=0 tx=0).ready entity=mon.0 client_cookie=28e4bef95fd98dde server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3cf7fe640 1 -- 192.168.123.104:0/2521515504 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff3c0037580 con 0x7ff3d0074c00 2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3cf7fe640 1 -- 192.168.123.104:0/2521515504 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7ff3c0037b70 con 0x7ff3d0074c00 2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.824+0000 7ff3cf7fe640 1 -- 192.168.123.104:0/2521515504 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff3c003cb40 con 0x7ff3d0074c00 2026-03-10T10:08:21.339 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/2521515504 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 msgr2=0x7ff3d0075000 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 --2- 192.168.123.104:0/2521515504 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d0075000 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7ff3c000a9c0 tx=0x7ff3c0033650 comp rx=0 tx=0).stop 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/2521515504 shutdown_connections 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 --2- 192.168.123.104:0/2521515504 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d0075000 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/2521515504 >> 192.168.123.104:0/2521515504 conn(0x7ff3d006fe80 msgr2=0x7ff3d00722e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/2521515504 shutdown_connections 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/2521515504 wait complete. 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 Processor -- start 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 -- start start 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d01ab620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7ff3d010efc0 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d48f4640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d01ab620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d48f4640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d01ab620 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55186/0 (socket says 192.168.123.104:55186) 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d48f4640 1 -- 192.168.123.104:0/1086480550 learned_addr learned my addr 192.168.123.104:0/1086480550 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d48f4640 1 -- 192.168.123.104:0/1086480550 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff3d01abb60 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d48f4640 1 --2- 192.168.123.104:0/1086480550 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d01ab620 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7ff3c0037e40 tx=0x7ff3c00379f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff3c0043070 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/1086480550 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff3d01abdf0 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/1086480550 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff3d01aeae0 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7ff3c003d890 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff3c0046520 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 5) ==== 50212+0+0 (secure 0 0 0) 0x7ff3c003b070 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3cdffb640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 0x7ff3a4040110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3cffff640 1 -- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 msgr2=0x7ff3a4040110 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.104:6800/887024688 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3cffff640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 0x7ff3a4040110 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 --> v2:192.168.123.104:6800/887024688 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7ff3a40407e0 con 0x7ff3a403dc50 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:17.828+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 725+0+0 (secure 0 0 0) 0x7ff3c00770a0 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:18.028+0000 7ff3cffff640 1 -- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 msgr2=0x7ff3a4040110 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.104:6800/887024688 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:18.028+0000 7ff3cffff640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 0x7ff3a4040110 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:18.432+0000 7ff3cffff640 1 -- 
192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 msgr2=0x7ff3a4040110 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.104:6800/887024688 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:18.432+0000 7ff3cffff640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 0x7ff3a4040110 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:19.232+0000 7ff3cffff640 1 -- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 msgr2=0x7ff3a4040110 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.104:6800/887024688 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:19.232+0000 7ff3cffff640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 0x7ff3a4040110 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:20.272+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mgrmap(e 6) ==== 50014+0+0 (secure 0 0 0) 0x7ff3c000bce0 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:20.272+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 msgr2=0x7ff3a4040110 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:20.272+0000 7ff3cdffb640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 0x7ff3a4040110 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.276+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7ff3c0042d50 con 0x7ff3d0074c00 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.276+0000 7ff3cdffb640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/2318507328 conn(0x7ff3a4041580 0x7ff3a4043970 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.276+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 --> v2:192.168.123.104:6800/2318507328 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7ff3c0067b20 con 0x7ff3a4041580 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.280+0000 7ff3cffff640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/2318507328 conn(0x7ff3a4041580 0x7ff3a4043970 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.280+0000 7ff3cffff640 1 --2- 
192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/2318507328 conn(0x7ff3a4041580 0x7ff3a4043970 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7ff3c8003e00 tx=0x7ff3c80073c0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.280+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 <== mgr.14118 v2:192.168.123.104:6800/2318507328 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (secure 0 0 0) 0x7ff3c0067b20 con 0x7ff3a4041580 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.284+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/1086480550 --> v2:192.168.123.104:6800/2318507328 -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7ff3d0075000 con 0x7ff3a4041580 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3cdffb640 1 -- 192.168.123.104:0/1086480550 <== mgr.14118 v2:192.168.123.104:6800/2318507328 2 ==== command_reply(tid 1: 0 ) ==== 8+0+51 (secure 0 0 0) 0x7ff3d0075000 con 0x7ff3a4041580 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/2318507328 conn(0x7ff3a4041580 msgr2=0x7ff3a4043970 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/2318507328 conn(0x7ff3a4041580 0x7ff3a4043970 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7ff3c8003e00 tx=0x7ff3c80073c0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/1086480550 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 msgr2=0x7ff3d01ab620 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 --2- 192.168.123.104:0/1086480550 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff3d0074c00 0x7ff3d01ab620 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7ff3c0037e40 tx=0x7ff3c00379f0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/1086480550 shutdown_connections 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/2318507328 conn(0x7ff3a4041580 0x7ff3a4043970 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 --2- 192.168.123.104:0/1086480550 >> v2:192.168.123.104:6800/887024688 conn(0x7ff3a403dc50 0x7ff3a4040110 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 --2- 192.168.123.104:0/1086480550 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] 
conn(0x7ff3d0074c00 0x7ff3d01ab620 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/1086480550 >> 192.168.123.104:0/1086480550 conn(0x7ff3d006fe80 msgr2=0x7ff3d0070990 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/1086480550 shutdown_connections
2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.288+0000 7ff3d6b7f640 1 -- 192.168.123.104:0/1086480550 wait complete.
2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:mgr epoch 5 is available
2026-03-10T10:08:21.340 INFO:teuthology.orchestra.run.vm04.stdout:Setting orchestrator backend to cephadm...
2026-03-10T10:08:21.585 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:21 vm04 bash[20742]: cephadm 2026-03-10T10:08:20.294863+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-10T10:08:21.586 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:21 vm04 bash[20742]: audit 2026-03-10T10:08:20.600624+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:08:21.586 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:21 vm04 bash[20742]: audit 2026-03-10T10:08:20.603262+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:08:21.586 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:21 vm04 bash[20742]: cluster 2026-03-10T10:08:21.283038+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.0082s)
2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.448+0000 7fa0bbbee640 1 Processor -- start
2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.448+0000 7fa0bbbee640 1 -- start start
2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.448+0000 7fa0bbbee640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b4108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.448+0000 7fa0bbbee640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] --
mon_getmap magic: 0 -- 0x7fa0b4109540 con 0x7fa0b4108b70 2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.448+0000 7fa0b9963640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b4108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.448+0000 7fa0b9963640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b4108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55262/0 (socket says 192.168.123.104:55262) 2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.448+0000 7fa0b9963640 1 -- 192.168.123.104:0/3723878922 learned_addr learned my addr 192.168.123.104:0/3723878922 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.448+0000 7fa0b9963640 1 -- 192.168.123.104:0/3723878922 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa0b4109d70 con 0x7fa0b4108b70 2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.448+0000 7fa0b9963640 1 --2- 192.168.123.104:0/3723878922 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b4108f70 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fa0a8009920 tx=0x7fa0a802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=473842ff608d2a5c server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0b8961640 1 -- 192.168.123.104:0/3723878922 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa0a803c070 con 0x7fa0b4108b70 2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0b8961640 1 -- 192.168.123.104:0/3723878922 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fa0a8037440 con 0x7fa0b4108b70 2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0b8961640 1 -- 192.168.123.104:0/3723878922 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa0a80354d0 con 0x7fa0b4108b70 2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 -- 192.168.123.104:0/3723878922 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 msgr2=0x7fa0b4108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:21.610 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 --2- 192.168.123.104:0/3723878922 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b4108f70 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7fa0a8009920 tx=0x7fa0a802ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 -- 
192.168.123.104:0/3723878922 shutdown_connections 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 --2- 192.168.123.104:0/3723878922 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b4108f70 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 -- 192.168.123.104:0/3723878922 >> 192.168.123.104:0/3723878922 conn(0x7fa0b407c040 msgr2=0x7fa0b407c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 -- 192.168.123.104:0/3723878922 shutdown_connections 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 -- 192.168.123.104:0/3723878922 wait complete. 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 Processor -- start 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 -- start start 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b419e9a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fa0b410cfa0 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0b9963640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b419e9a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0b9963640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b419e9a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55278/0 (socket says 192.168.123.104:55278) 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0b9963640 1 -- 192.168.123.104:0/4243706 learned_addr learned my addr 192.168.123.104:0/4243706 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0b9963640 1 -- 192.168.123.104:0/4243706 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa0b419eee0 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0b9963640 1 --2- 192.168.123.104:0/4243706 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b419e9a0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto 
rx=0x7fa0a8009a50 tx=0x7fa0a8035d70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0a2ffd640 1 -- 192.168.123.104:0/4243706 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa0a803c030 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0a2ffd640 1 -- 192.168.123.104:0/4243706 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fa0a803e070 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0a2ffd640 1 -- 192.168.123.104:0/4243706 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa0a8042c70 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 -- 192.168.123.104:0/4243706 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa0b419f170 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 -- 192.168.123.104:0/4243706 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fa0b41a1e60 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.452+0000 7fa0bbbee640 1 -- 192.168.123.104:0/4243706 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa07c005180 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.460+0000 7fa0a2ffd640 1 -- 192.168.123.104:0/4243706 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7fa0a802fc50 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.460+0000 7fa0a2ffd640 1 --2- 192.168.123.104:0/4243706 >> v2:192.168.123.104:6800/2318507328 conn(0x7fa09003db30 0x7fa09003fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.460+0000 7fa0a2ffd640 1 -- 192.168.123.104:0/4243706 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7fa0a803d070 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.460+0000 7fa0a2ffd640 1 -- 192.168.123.104:0/4243706 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa0a80a6a90 con 0x7fa0b4108b70 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.464+0000 7fa0b9162640 1 --2- 192.168.123.104:0/4243706 >> v2:192.168.123.104:6800/2318507328 conn(0x7fa09003db30 0x7fa09003fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:08:21.464+0000 7fa0b9162640 1 --2- 192.168.123.104:0/4243706 >> v2:192.168.123.104:6800/2318507328 conn(0x7fa09003db30 0x7fa09003fff0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fa0a4009a10 tx=0x7fa0a4006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.560+0000 7fa0bbbee640 1 -- 192.168.123.104:0/4243706 --> v2:192.168.123.104:6800/2318507328 -- mgr_command(tid 0: {"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}) -- 0x7fa07c002bf0 con 0x7fa09003db30 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.564+0000 7fa0a2ffd640 1 -- 192.168.123.104:0/4243706 <== mgr.14118 v2:192.168.123.104:6800/2318507328 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7fa07c002bf0 con 0x7fa09003db30 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.568+0000 7fa0bbbee640 1 -- 192.168.123.104:0/4243706 >> v2:192.168.123.104:6800/2318507328 conn(0x7fa09003db30 msgr2=0x7fa09003fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.572+0000 7fa0bbbee640 1 --2- 192.168.123.104:0/4243706 >> v2:192.168.123.104:6800/2318507328 conn(0x7fa09003db30 0x7fa09003fff0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fa0a4009a10 tx=0x7fa0a4006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.572+0000 7fa0bbbee640 1 -- 192.168.123.104:0/4243706 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 msgr2=0x7fa0b419e9a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.572+0000 7fa0bbbee640 1 --2- 192.168.123.104:0/4243706 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b419e9a0 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fa0a8009a50 tx=0x7fa0a8035d70 comp rx=0 tx=0).stop 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.572+0000 7fa0bbbee640 1 -- 192.168.123.104:0/4243706 shutdown_connections 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.572+0000 7fa0bbbee640 1 --2- 192.168.123.104:0/4243706 >> v2:192.168.123.104:6800/2318507328 conn(0x7fa09003db30 0x7fa09003fff0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.572+0000 7fa0bbbee640 1 --2- 192.168.123.104:0/4243706 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa0b4108b70 0x7fa0b419e9a0 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.572+0000 7fa0bbbee640 1 -- 192.168.123.104:0/4243706 >> 192.168.123.104:0/4243706 conn(0x7fa0b407c040 msgr2=0x7fa0b4105f50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.572+0000 7fa0bbbee640 1 -- 192.168.123.104:0/4243706 
shutdown_connections 2026-03-10T10:08:21.611 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.572+0000 7fa0bbbee640 1 -- 192.168.123.104:0/4243706 wait complete. 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.704+0000 7f0ffbc23640 1 Processor -- start 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.704+0000 7f0ffbc23640 1 -- start start 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.704+0000 7f0ffbc23640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff4108fa0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.704+0000 7f0ffbc23640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f0ff4109570 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff9998640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff4108fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff9998640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff4108fa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55282/0 (socket says 192.168.123.104:55282) 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff9998640 1 -- 192.168.123.104:0/3495024927 learned_addr learned my addr 192.168.123.104:0/3495024927 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff9998640 1 -- 192.168.123.104:0/3495024927 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0ff4109da0 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff9998640 1 --2- 192.168.123.104:0/3495024927 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff4108fa0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f0fe8009920 tx=0x7f0fe802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=633e35e68592362c server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff8996640 1 -- 192.168.123.104:0/3495024927 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0fe803c070 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff8996640 1 -- 192.168.123.104:0/3495024927 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f0fe8037440 con 0x7f0ff4108ba0 
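
The /usr/bin/ceph stderr records above show what the mgr failover looks like from a client: while mgr.y restarted, reconnect attempts to the old mgr endpoint at v2:192.168.123.104:6800 failed and backed off with doubling waits (_fault waiting 0.200000, 0.400000, 0.800000, 1.600000) until mgrmap e7 announced the new instance, at which point the mgr_status poll returned "initialized": true. A minimal sketch of that doubling-backoff pattern, not the messenger's actual implementation (connect here is a hypothetical callable):

    import time

    def retry_with_backoff(connect, first_wait=0.2, max_wait=15.0):
        # Double the wait after each failed attempt, mirroring the
        # "_fault waiting 0.2 / 0.4 / 0.8 / 1.6" records above.
        wait = first_wait
        while True:
            try:
                return connect()
            except ConnectionError:
                time.sleep(wait)
                wait = min(wait * 2, max_wait)
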
2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 -- 192.168.123.104:0/3495024927 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 msgr2=0x7f0ff4108fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 --2- 192.168.123.104:0/3495024927 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff4108fa0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f0fe8009920 tx=0x7f0fe802ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 -- 192.168.123.104:0/3495024927 shutdown_connections 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 --2- 192.168.123.104:0/3495024927 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff4108fa0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 -- 192.168.123.104:0/3495024927 >> 192.168.123.104:0/3495024927 conn(0x7f0ff407c090 msgr2=0x7f0ff407c4a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 -- 192.168.123.104:0/3495024927 shutdown_connections 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 -- 192.168.123.104:0/3495024927 wait complete. 
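
The "Setting orchestrator backend to cephadm..." step above completed as a mgr_command round-trip carrying {"prefix": "orch set backend", "module_name": "cephadm"} to mgr.14118. A sketch of the equivalent CLI calls, assuming a reachable cluster and the ceph binary on PATH (the exact invocation is not shown in this log):

    import subprocess

    # Point the orchestrator interface at the cephadm mgr module,
    # then confirm which backend is now active.
    subprocess.run(["ceph", "orch", "set", "backend", "cephadm"], check=True)
    subprocess.run(["ceph", "orch", "status"], check=True)
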
2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 Processor -- start 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 -- start start 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff40804f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f0ff410cfd0 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff9998640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff40804f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff9998640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff40804f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55288/0 (socket says 192.168.123.104:55288) 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff9998640 1 -- 192.168.123.104:0/2197060885 learned_addr learned my addr 192.168.123.104:0/2197060885 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff9998640 1 -- 192.168.123.104:0/2197060885 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0ff4080a30 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ff9998640 1 --2- 192.168.123.104:0/2197060885 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff40804f0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f0fe8035f40 tx=0x7f0fe8035f70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0fe2ffd640 1 -- 192.168.123.104:0/2197060885 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0fe8045070 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0fe2ffd640 1 -- 192.168.123.104:0/2197060885 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f0fe8037c50 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0fe2ffd640 1 -- 192.168.123.104:0/2197060885 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0fe803c070 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 -- 192.168.123.104:0/2197060885 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0ff4080cc0 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0ffbc23640 1 -- 192.168.123.104:0/2197060885 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0ff407d030 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0fe2ffd640 1 -- 192.168.123.104:0/2197060885 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7f0fe804a430 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.708+0000 7f0fe2ffd640 1 --2- 192.168.123.104:0/2197060885 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0fcc03db30 0x7f0fcc03fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.712+0000 7f0fe2ffd640 1 -- 192.168.123.104:0/2197060885 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f0fe8077340 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.712+0000 7f0ff9197640 1 --2- 192.168.123.104:0/2197060885 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0fcc03db30 0x7f0fcc03fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.712+0000 7f0ffbc23640 1 -- 192.168.123.104:0/2197060885 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0fb8005180 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.712+0000 7f0ff9197640 1 --2- 192.168.123.104:0/2197060885 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0fcc03db30 0x7f0fcc03fff0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f0fe4009a10 tx=0x7f0fe4006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.712+0000 7f0fe2ffd640 1 -- 192.168.123.104:0/2197060885 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0fe8033200 con 0x7f0ff4108ba0 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.800+0000 7f0ffbc23640 1 -- 192.168.123.104:0/2197060885 --> v2:192.168.123.104:6800/2318507328 -- mgr_command(tid 0: {"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}) -- 0x7f0fb8002bf0 con 0x7f0fcc03db30 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.800+0000 7f0fe2ffd640 1 -- 192.168.123.104:0/2197060885 <== mgr.14118 v2:192.168.123.104:6800/2318507328 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+16 (secure 0 0 0) 0x7f0fb8002bf0 con 0x7f0fcc03db30 
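
The mgr_command just above, {"prefix": "cephadm set-user", "user": "root"}, sets the user the cephadm module SSHes to managed hosts as. A sketch of the same call through the CLI, under the same assumptions as the previous snippet:

    import subprocess

    # Tell cephadm which user to SSH as; the mgr_command_reply above
    # shows the equivalent request returning 0 (success).
    subprocess.run(["ceph", "cephadm", "set-user", "root"], check=True)
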
2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.804+0000 7f0ffbc23640 1 -- 192.168.123.104:0/2197060885 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0fcc03db30 msgr2=0x7f0fcc03fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.804+0000 7f0ffbc23640 1 --2- 192.168.123.104:0/2197060885 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0fcc03db30 0x7f0fcc03fff0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7f0fe4009a10 tx=0x7f0fe4006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.804+0000 7f0ffbc23640 1 -- 192.168.123.104:0/2197060885 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 msgr2=0x7f0ff40804f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.804+0000 7f0ffbc23640 1 --2- 192.168.123.104:0/2197060885 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff40804f0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f0fe8035f40 tx=0x7f0fe8035f70 comp rx=0 tx=0).stop 2026-03-10T10:08:21.840 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.804+0000 7f0ffbc23640 1 -- 192.168.123.104:0/2197060885 shutdown_connections 2026-03-10T10:08:21.841 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.804+0000 7f0ffbc23640 1 --2- 192.168.123.104:0/2197060885 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0fcc03db30 0x7f0fcc03fff0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.841 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.804+0000 7f0ffbc23640 1 --2- 192.168.123.104:0/2197060885 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ff4108ba0 0x7f0ff40804f0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:21.841 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.804+0000 7f0ffbc23640 1 -- 192.168.123.104:0/2197060885 >> 192.168.123.104:0/2197060885 conn(0x7f0ff407c090 msgr2=0x7f0ff4106100 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:21.841 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.804+0000 7f0ffbc23640 1 -- 192.168.123.104:0/2197060885 shutdown_connections 2026-03-10T10:08:21.841 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.804+0000 7f0ffbc23640 1 -- 192.168.123.104:0/2197060885 wait complete. 2026-03-10T10:08:21.841 INFO:teuthology.orchestra.run.vm04.stdout:Generating ssh key... 
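
"Generating ssh key..." refers to the SSH identity the cephadm module uses to reach cluster hosts. A sketch of the usual key setup, again assuming the ceph CLI; the exact commands this task runs are not visible in the log:

    import subprocess

    # Have cephadm create its SSH identity, then read back the public
    # half so it can be installed for the configured user on the hosts.
    subprocess.run(["ceph", "cephadm", "generate-key"], check=True)
    pub = subprocess.run(["ceph", "cephadm", "get-pub-key"],
                         check=True, capture_output=True, text=True).stdout
    print(pub)
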
2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bfd896640 1 Processor -- start 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bfd896640 1 -- start start 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bfd896640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf8108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bfd896640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f0bf8109540 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bf6ffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf8108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bf6ffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf8108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55290/0 (socket says 192.168.123.104:55290) 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bf6ffd640 1 -- 192.168.123.104:0/92569716 learned_addr learned my addr 192.168.123.104:0/92569716 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bf6ffd640 1 -- 192.168.123.104:0/92569716 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0bf8109d70 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bf6ffd640 1 --2- 192.168.123.104:0/92569716 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf8108f70 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f0be0009b80 tx=0x7f0be002f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=3ce38fb98c5439e0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bf5ffb640 1 -- 192.168.123.104:0/92569716 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0be003c070 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bf5ffb640 1 -- 192.168.123.104:0/92569716 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f0be0037440 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bf5ffb640 1 -- 192.168.123.104:0/92569716 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0be00353a0 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bfd896640 1 -- 192.168.123.104:0/92569716 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 msgr2=0x7f0bf8108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.948+0000 7f0bfd896640 1 --2- 192.168.123.104:0/92569716 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf8108f70 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f0be0009b80 tx=0x7f0be002f190 comp rx=0 tx=0).stop 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 -- 192.168.123.104:0/92569716 shutdown_connections 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 --2- 192.168.123.104:0/92569716 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf8108f70 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 -- 192.168.123.104:0/92569716 >> 192.168.123.104:0/92569716 conn(0x7f0bf807c040 msgr2=0x7f0bf807c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 -- 192.168.123.104:0/92569716 shutdown_connections 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 -- 192.168.123.104:0/92569716 wait complete. 
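Every /usr/bin/ceph invocation in this section emits the same msgr2 client lifecycle on its stderr; a minimal annotated skeleton of that sequence, with addresses, nonces, and pointers elided (the conn states in parentheses are the ones visible in the records above and below):

    Processor -- start / -- start start                   # messenger threads come up
    --2- >> [v2:mon:3300,v1:mon:6789] .connect             # dial the monitor (s=NONE)
    ._handle_peer_banner_payload supported=3 required=0    # banner exchange (s=BANNER_CONNECTING)
    .handle_hello / learned_addr                           # mon echoes the client's own addr:port (s=HELLO_CONNECTING)
    mon_subscribe({config=0+,monmap=0+})                   # pull config and monmap once s=READY
    mon_command({"prefix": "get_command_descriptions"})    # the CLI discovers the command table
    mgr_command(...) / mgr_command_reply(...)              # the actual command, proxied to the active mgr
    .mark_down / shutdown_connections / wait complete.     # teardown before the CLI process exits (s=CLOSED)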
2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 Processor -- start 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 -- start start 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf819e960 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f0bf810cfa0 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bf6ffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf819e960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bf6ffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf819e960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55306/0 (socket says 192.168.123.104:55306) 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bf6ffd640 1 -- 192.168.123.104:0/3376858079 learned_addr learned my addr 192.168.123.104:0/3376858079 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bf6ffd640 1 -- 192.168.123.104:0/3376858079 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0bf819eea0 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bf6ffd640 1 --2- 192.168.123.104:0/3376858079 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf819e960 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f0be003a040 tx=0x7f0be0004270 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfc894640 1 -- 192.168.123.104:0/3376858079 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0be003c070 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfc894640 1 -- 192.168.123.104:0/3376858079 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f0be0044070 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfc894640 1 -- 192.168.123.104:0/3376858079 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0be0035990 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 -- 192.168.123.104:0/3376858079 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0bf819f130 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 -- 192.168.123.104:0/3376858079 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0bf81a1e20 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfc894640 1 -- 192.168.123.104:0/3376858079 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7f0be0035b30 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfc894640 1 --2- 192.168.123.104:0/3376858079 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0bcc03dae0 0x7f0bcc03ffa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfc894640 1 -- 192.168.123.104:0/3376858079 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f0be0077070 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bf67fc640 1 --2- 192.168.123.104:0/3376858079 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0bcc03dae0 0x7f0bcc03ffa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.952+0000 7f0bfd896640 1 -- 192.168.123.104:0/3376858079 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0bbc005180 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.956+0000 7f0bf67fc640 1 --2- 192.168.123.104:0/3376858079 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0bcc03dae0 0x7f0bcc03ffa0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f0bec0099c0 tx=0x7f0bec006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:21.956+0000 7f0bfc894640 1 -- 192.168.123.104:0/3376858079 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0be0035e50 con 0x7f0bf8108b70 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.044+0000 7f0bfd896640 1 -- 192.168.123.104:0/3376858079 --> v2:192.168.123.104:6800/2318507328 -- mgr_command(tid 0: {"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}) -- 0x7f0bbc002bf0 con 0x7f0bcc03dae0 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.060+0000 7f0bfc894640 1 -- 192.168.123.104:0/3376858079 <== mgr.14118 v2:192.168.123.104:6800/2318507328 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7f0bbc002bf0 con 0x7f0bcc03dae0 2026-03-10T10:08:22.101 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.064+0000 7f0bfd896640 1 -- 192.168.123.104:0/3376858079 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0bcc03dae0 msgr2=0x7f0bcc03ffa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.064+0000 7f0bfd896640 1 --2- 192.168.123.104:0/3376858079 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0bcc03dae0 0x7f0bcc03ffa0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7f0bec0099c0 tx=0x7f0bec006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.064+0000 7f0bfd896640 1 -- 192.168.123.104:0/3376858079 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 msgr2=0x7f0bf819e960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.064+0000 7f0bfd896640 1 --2- 192.168.123.104:0/3376858079 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf819e960 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7f0be003a040 tx=0x7f0be0004270 comp rx=0 tx=0).stop 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.064+0000 7f0bfd896640 1 -- 192.168.123.104:0/3376858079 shutdown_connections 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.064+0000 7f0bfd896640 1 --2- 192.168.123.104:0/3376858079 >> v2:192.168.123.104:6800/2318507328 conn(0x7f0bcc03dae0 0x7f0bcc03ffa0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:22.101 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.064+0000 7f0bfd896640 1 --2- 192.168.123.104:0/3376858079 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0bf8108b70 0x7f0bf819e960 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:22.102 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.064+0000 7f0bfd896640 1 -- 192.168.123.104:0/3376858079 >> 192.168.123.104:0/3376858079 conn(0x7f0bf807c040 msgr2=0x7f0bf8105f10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:22.102 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.064+0000 7f0bfd896640 1 -- 192.168.123.104:0/3376858079 shutdown_connections 2026-03-10T10:08:22.102 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.064+0000 7f0bfd896640 1 -- 192.168.123.104:0/3376858079 wait complete. 2026-03-10T10:08:22.140 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: Generating public/private ed25519 key pair. 
2026-03-10T10:08:22.140 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: Your identification has been saved in /tmp/tmpygrowd7t/key
2026-03-10T10:08:22.140 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: Your public key has been saved in /tmp/tmpygrowd7t/key.pub
2026-03-10T10:08:22.140 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: The key fingerprint is:
2026-03-10T10:08:22.140 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: SHA256:qbHJZDy947p77y7kG8TxdGQLRt27YWq9wZDoozBihhw ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:08:22.140 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: The key's randomart image is:
2026-03-10T10:08:22.140 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: +--[ED25519 256]--+
2026-03-10T10:08:22.140 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: | .+.o. |
2026-03-10T10:08:22.140 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: | . +... |
2026-03-10T10:08:22.140 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: | . ..o. . |
2026-03-10T10:08:22.141 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: | E . o =..o + |
2026-03-10T10:08:22.141 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: | . o * S.. * o |
2026-03-10T10:08:22.141 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: | o ++oB..o o = |
2026-03-10T10:08:22.141 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: | o .==+. o o |
2026-03-10T10:08:22.141 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: | .=o . |
2026-03-10T10:08:22.141 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: | +=o*+ |
2026-03-10T10:08:22.141 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:22 vm04 bash[20997]: +----[SHA256]-----+
2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8HDqv9Pg4TqDxcaMqs5f9pVfocR4ydPGrFWw+qTBBB ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 Processor -- start
2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 -- start start
2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a60106ce0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f6a601072b0 con 0x7f6a601068e0
2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a66428640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a60106ce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a66428640 1
--2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a60106ce0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55322/0 (socket says 192.168.123.104:55322) 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a66428640 1 -- 192.168.123.104:0/799863265 learned_addr learned my addr 192.168.123.104:0/799863265 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a66428640 1 -- 192.168.123.104:0/799863265 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6a60107ae0 con 0x7f6a601068e0 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a66428640 1 --2- 192.168.123.104:0/799863265 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a60106ce0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f6a54009b80 tx=0x7f6a5402f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7ab664ef4e6181d9 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a65426640 1 -- 192.168.123.104:0/799863265 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6a5403c070 con 0x7f6a601068e0 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a65426640 1 -- 192.168.123.104:0/799863265 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f6a54037440 con 0x7f6a601068e0 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 -- 192.168.123.104:0/799863265 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 msgr2=0x7f6a60106ce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 --2- 192.168.123.104:0/799863265 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a60106ce0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f6a54009b80 tx=0x7f6a5402f190 comp rx=0 tx=0).stop 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 -- 192.168.123.104:0/799863265 shutdown_connections 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 --2- 192.168.123.104:0/799863265 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a60106ce0 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 -- 192.168.123.104:0/799863265 >> 192.168.123.104:0/799863265 conn(0x7f6a60102090 msgr2=0x7f6a601044b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 -- 192.168.123.104:0/799863265 shutdown_connections 
2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 -- 192.168.123.104:0/799863265 wait complete. 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.196+0000 7f6a686b3640 1 Processor -- start 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a686b3640 1 -- start start 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a686b3640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a6007c460 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a686b3640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f6a6010ad10 con 0x7f6a601068e0 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a66428640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a6007c460 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a66428640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a6007c460 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55326/0 (socket says 192.168.123.104:55326) 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a66428640 1 -- 192.168.123.104:0/2010030424 learned_addr learned my addr 192.168.123.104:0/2010030424 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:22.334 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a66428640 1 -- 192.168.123.104:0/2010030424 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6a6007c9a0 con 0x7f6a601068e0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a66428640 1 --2- 192.168.123.104:0/2010030424 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a6007c460 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f6a5402f740 tx=0x7f6a540039b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a4f7fe640 1 -- 192.168.123.104:0/2010030424 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6a5403c040 con 0x7f6a601068e0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a4f7fe640 1 -- 192.168.123.104:0/2010030424 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f6a54003c40 con 0x7f6a601068e0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a4f7fe640 1 -- 
192.168.123.104:0/2010030424 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6a54035db0 con 0x7f6a601068e0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a686b3640 1 -- 192.168.123.104:0/2010030424 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6a6007ab90 con 0x7f6a601068e0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a686b3640 1 -- 192.168.123.104:0/2010030424 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6a6007afb0 con 0x7f6a601068e0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a4f7fe640 1 -- 192.168.123.104:0/2010030424 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7f6a54030050 con 0x7f6a601068e0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.200+0000 7f6a686b3640 1 -- 192.168.123.104:0/2010030424 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6a28005180 con 0x7f6a601068e0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.204+0000 7f6a4f7fe640 1 --2- 192.168.123.104:0/2010030424 >> v2:192.168.123.104:6800/2318507328 conn(0x7f6a3c03da90 0x7f6a3c03ff50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.204+0000 7f6a4f7fe640 1 -- 192.168.123.104:0/2010030424 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f6a540779a0 con 0x7f6a601068e0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.204+0000 7f6a65c27640 1 --2- 192.168.123.104:0/2010030424 >> v2:192.168.123.104:6800/2318507328 conn(0x7f6a3c03da90 0x7f6a3c03ff50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.204+0000 7f6a4f7fe640 1 -- 192.168.123.104:0/2010030424 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6a54051240 con 0x7f6a601068e0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.204+0000 7f6a65c27640 1 --2- 192.168.123.104:0/2010030424 >> v2:192.168.123.104:6800/2318507328 conn(0x7f6a3c03da90 0x7f6a3c03ff50 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f6a500099c0 tx=0x7f6a50006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.292+0000 7f6a686b3640 1 -- 192.168.123.104:0/2010030424 --> v2:192.168.123.104:6800/2318507328 -- mgr_command(tid 0: {"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}) -- 0x7f6a28002bf0 con 0x7f6a3c03da90 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.292+0000 7f6a4f7fe640 1 -- 
192.168.123.104:0/2010030424 <== mgr.14118 v2:192.168.123.104:6800/2318507328 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+123 (secure 0 0 0) 0x7f6a28002bf0 con 0x7f6a3c03da90 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.296+0000 7f6a686b3640 1 -- 192.168.123.104:0/2010030424 >> v2:192.168.123.104:6800/2318507328 conn(0x7f6a3c03da90 msgr2=0x7f6a3c03ff50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.296+0000 7f6a686b3640 1 --2- 192.168.123.104:0/2010030424 >> v2:192.168.123.104:6800/2318507328 conn(0x7f6a3c03da90 0x7f6a3c03ff50 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f6a500099c0 tx=0x7f6a50006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.296+0000 7f6a686b3640 1 -- 192.168.123.104:0/2010030424 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 msgr2=0x7f6a6007c460 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.296+0000 7f6a686b3640 1 --2- 192.168.123.104:0/2010030424 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a6007c460 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f6a5402f740 tx=0x7f6a540039b0 comp rx=0 tx=0).stop 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.296+0000 7f6a686b3640 1 -- 192.168.123.104:0/2010030424 shutdown_connections 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.296+0000 7f6a686b3640 1 --2- 192.168.123.104:0/2010030424 >> v2:192.168.123.104:6800/2318507328 conn(0x7f6a3c03da90 0x7f6a3c03ff50 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.296+0000 7f6a686b3640 1 --2- 192.168.123.104:0/2010030424 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6a601068e0 0x7f6a6007c460 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.296+0000 7f6a686b3640 1 -- 192.168.123.104:0/2010030424 >> 192.168.123.104:0/2010030424 conn(0x7f6a60102090 msgr2=0x7f6a60102d00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.296+0000 7f6a686b3640 1 -- 192.168.123.104:0/2010030424 shutdown_connections 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.296+0000 7f6a686b3640 1 -- 192.168.123.104:0/2010030424 wait complete. 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T10:08:22.335 INFO:teuthology.orchestra.run.vm04.stdout:Adding host vm04... 
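The stdout milestones above ("Generating ssh key...", "Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub", "Adding key to root@localhost authorized_keys...", "Adding host vm04...") correspond to cephadm's SSH bootstrap. A minimal sketch of the equivalent CLI, assuming a working client.admin keyring on the host; the key-install step shown is one illustrative option, not necessarily how the harness installs it:

    ceph cephadm generate-key                    # mgr runs ssh-keygen into a temp dir (cf. /tmp/tmpygrowd7t above)
    ceph cephadm get-pub-key > ceph.pub          # retrieve the public half (the ssh-ed25519 line above)
    ssh-copy-id -f -i ceph.pub root@vm04         # illustrative: install the key for the orchestrator's SSH user
    ceph orch host add vm04 192.168.123.104      # yields "Added host 'vm04' with addr '192.168.123.104'" below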
2026-03-10T10:08:22.558 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:22 vm04 bash[20742]: audit 2026-03-10T10:08:21.285553+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T10:08:22.558 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:22 vm04 bash[20742]: audit 2026-03-10T10:08:21.289274+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T10:08:22.558 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:22 vm04 bash[20742]: cephadm 2026-03-10T10:08:21.359736+0000 mgr.y (mgr.14118) 4 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Bus STARTING
2026-03-10T10:08:22.558 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:22 vm04 bash[20742]: audit 2026-03-10T10:08:21.570101+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:08:22.558 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:22 vm04 bash[20742]: audit 2026-03-10T10:08:21.575693+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:08:22.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:22 vm04 bash[20742]: audit 2026-03-10T10:08:21.580337+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:08:22.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:22 vm04 bash[20742]: audit 2026-03-10T10:08:22.064614+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:08:22.559 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:22 vm04 bash[20742]: audit 2026-03-10T10:08:22.066629+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:08:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:23 vm04 bash[20742]: cephadm 2026-03-10T10:08:21.462056+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T10:08:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:23 vm04 bash[20742]: audit 2026-03-10T10:08:21.567030+0000 mgr.y (mgr.14118) 6 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:23 vm04 bash[20742]: cephadm 2026-03-10T10:08:21.576426+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T10:08:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:23 vm04 bash[20742]: cephadm 2026-03-10T10:08:21.576458+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Bus STARTED
2026-03-10T10:08:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:23 vm04 bash[20742]: cephadm 2026-03-10T10:08:21.577574+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Client ('192.168.123.104', 39838) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T10:08:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:23 vm04 bash[20742]: audit 2026-03-10T10:08:21.806820+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:23 vm04 bash[20742]: audit 2026-03-10T10:08:22.049695+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:23 vm04 bash[20742]: cephadm 2026-03-10T10:08:22.049877+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-10T10:08:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:23 vm04 bash[20742]: audit 2026-03-10T10:08:22.298852+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:23 vm04 bash[20742]: cluster 2026-03-10T10:08:23.070528+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s)
2026-03-10T10:08:24.396 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Added host 'vm04' with addr '192.168.123.104'
2026-03-10T10:08:24.396 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.444+0000 7fc4cec8e640 1 Processor -- start
2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.444+0000 7fc4cec8e640 1 -- start start
2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.444+0000 7fc4cec8e640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.444+0000 7fc4cec8e640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fc4c8109540 con 0x7fc4c8108b70
2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.444+0000 7fc4cca03640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.444+0000 7fc4cca03640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55328/0 (socket says 192.168.123.104:55328) 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.444+0000 7fc4cca03640 1 -- 192.168.123.104:0/1535641826 learned_addr learned my addr 192.168.123.104:0/1535641826 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cca03640 1 -- 192.168.123.104:0/1535641826 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc4c8109d70 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cca03640 1 --2- 192.168.123.104:0/1535641826 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8108f70 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fc4b001adc0 tx=0x7fc4b00402a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=6fa861697086dfc6 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4bf7fe640 1 -- 192.168.123.104:0/1535641826 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc4b0040aa0 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4bf7fe640 1 -- 192.168.123.104:0/1535641826 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fc4b0040c40 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 -- 192.168.123.104:0/1535641826 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 msgr2=0x7fc4c8108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 --2- 192.168.123.104:0/1535641826 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8108f70 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fc4b001adc0 tx=0x7fc4b00402a0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 -- 192.168.123.104:0/1535641826 shutdown_connections 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 --2- 192.168.123.104:0/1535641826 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8108f70 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 -- 192.168.123.104:0/1535641826 >> 192.168.123.104:0/1535641826 conn(0x7fc4c807c040 msgr2=0x7fc4c807c450 unknown :-1 s=STATE_NONE l=0).mark_down 
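The audit trail in the journalctl block above also records the two setup commands issued before key generation, selecting the cephadm orchestrator backend and its SSH user; as dispatched they correspond to:

    ceph orch set backend cephadm    # cmd=[{"prefix": "orch set backend", "module_name": "cephadm"}]
    ceph cephadm set-user root       # cmd=[{"prefix": "cephadm set-user", "user": "root"}]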
2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 -- 192.168.123.104:0/1535641826 shutdown_connections 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 -- 192.168.123.104:0/1535641826 wait complete. 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 Processor -- start 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 -- start start 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8196020 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fc4c810ab90 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cca03640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8196020 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cca03640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8196020 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55344/0 (socket says 192.168.123.104:55344) 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cca03640 1 -- 192.168.123.104:0/953661384 learned_addr learned my addr 192.168.123.104:0/953661384 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cca03640 1 -- 192.168.123.104:0/953661384 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc4c8196560 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cca03640 1 --2- 192.168.123.104:0/953661384 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8196020 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fc4b001a040 tx=0x7fc4b0047e40 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4bdffb640 1 -- 192.168.123.104:0/953661384 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc4b0056030 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 -- 192.168.123.104:0/953661384 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 
0x7fc4c81967f0 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4cec8e640 1 -- 192.168.123.104:0/953661384 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fc4c81994e0 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4bdffb640 1 -- 192.168.123.104:0/953661384 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fc4b0018be0 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.448+0000 7fc4bdffb640 1 -- 192.168.123.104:0/953661384 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc4b0052b40 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.452+0000 7fc4bdffb640 1 -- 192.168.123.104:0/953661384 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 7) ==== 50106+0+0 (secure 0 0 0) 0x7fc4b0052ce0 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.452+0000 7fc4bdffb640 1 --2- 192.168.123.104:0/953661384 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc49003db30 0x7fc49003fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.452+0000 7fc4bdffb640 1 -- 192.168.123.104:0/953661384 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7fc4b0057070 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.452+0000 7fc4cec8e640 1 -- 192.168.123.104:0/953661384 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc4c8108f70 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.452+0000 7fc4bffff640 1 --2- 192.168.123.104:0/953661384 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc49003db30 0x7fc49003fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.452+0000 7fc4bffff640 1 --2- 192.168.123.104:0/953661384 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc49003db30 0x7fc49003fff0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fc4b80099c0 tx=0x7fc4b8006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.456+0000 7fc4bdffb640 1 -- 192.168.123.104:0/953661384 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fc4b00582c0 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:22.544+0000 7fc4cec8e640 1 -- 192.168.123.104:0/953661384 --> v2:192.168.123.104:6800/2318507328 -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm04", "addr": "192.168.123.104", "target": ["mon-mgr", ""]}) 
-- 0x7fc4c8072e50 con 0x7fc49003db30 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:23.064+0000 7fc4bdffb640 1 -- 192.168.123.104:0/953661384 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7fc4b005b410 con 0x7fc4c8108b70 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.328+0000 7fc4bdffb640 1 -- 192.168.123.104:0/953661384 <== mgr.14118 v2:192.168.123.104:6800/2318507328 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (secure 0 0 0) 0x7fc4c8072e50 con 0x7fc49003db30 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.332+0000 7fc4cec8e640 1 -- 192.168.123.104:0/953661384 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc49003db30 msgr2=0x7fc49003fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.332+0000 7fc4cec8e640 1 --2- 192.168.123.104:0/953661384 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc49003db30 0x7fc49003fff0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7fc4b80099c0 tx=0x7fc4b8006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.332+0000 7fc4cec8e640 1 -- 192.168.123.104:0/953661384 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 msgr2=0x7fc4c8196020 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.332+0000 7fc4cec8e640 1 --2- 192.168.123.104:0/953661384 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8196020 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fc4b001a040 tx=0x7fc4b0047e40 comp rx=0 tx=0).stop 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.332+0000 7fc4cec8e640 1 -- 192.168.123.104:0/953661384 shutdown_connections 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.332+0000 7fc4cec8e640 1 --2- 192.168.123.104:0/953661384 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc49003db30 0x7fc49003fff0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.332+0000 7fc4cec8e640 1 --2- 192.168.123.104:0/953661384 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc4c8108b70 0x7fc4c8196020 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.332+0000 7fc4cec8e640 1 -- 192.168.123.104:0/953661384 >> 192.168.123.104:0/953661384 conn(0x7fc4c807c040 msgr2=0x7fc4c8070390 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.332+0000 7fc4cec8e640 1 -- 192.168.123.104:0/953661384 shutdown_connections 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.332+0000 7fc4cec8e640 1 -- 192.168.123.104:0/953661384 wait complete. 2026-03-10T10:08:24.397 INFO:teuthology.orchestra.run.vm04.stdout:Deploying unmanaged mon service... 
2026-03-10T10:08:24.669 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:24 vm04 bash[20742]: audit 2026-03-10T10:08:22.551477+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "addr": "192.168.123.104", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:24.670 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:24 vm04 bash[20742]: cephadm 2026-03-10T10:08:23.080721+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm04
2026-03-10T10:08:24.670 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:24 vm04 bash[20742]: audit 2026-03-10T10:08:24.334465+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:08:24.670 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:24 vm04 bash[20742]: cephadm 2026-03-10T10:08:24.334982+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm04
2026-03-10T10:08:24.670 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:24 vm04 bash[20742]: audit 2026-03-10T10:08:24.336239+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Scheduled mon update...
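"Deploying unmanaged mon service..." followed by the mgr's "Scheduled mon update..." reply is consistent with declaring the mon service to the orchestrator with placement management disabled; the likely invocation, not itself captured in this excerpt:

    ceph orch apply mon --unmanaged    # mgr acknowledges with "Scheduled mon update..."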
2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.536+0000 7f9ee3577640 1 Processor -- start 2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.536+0000 7f9ee3577640 1 -- start start 2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.536+0000 7f9ee3577640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc0055b0 0x7f9edc0059b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.536+0000 7f9ee3577640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f9edc005f80 con 0x7f9edc0055b0 2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.536+0000 7f9ee2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc0055b0 0x7f9edc0059b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.536+0000 7f9ee2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc0055b0 0x7f9edc0059b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55358/0 (socket says 192.168.123.104:55358) 2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.536+0000 7f9ee2575640 1 -- 192.168.123.104:0/2836313173 learned_addr learned my addr 192.168.123.104:0/2836313173 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.536+0000 7f9ee2575640 1 -- 192.168.123.104:0/2836313173 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9edc006800 con 0x7f9edc0055b0 2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.536+0000 7f9ee2575640 1 --2- 192.168.123.104:0/2836313173 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc0055b0 0x7f9edc0059b0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f9ed4009b80 tx=0x7f9ed402f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=a282c8e7669efba8 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:24.693 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee1573640 1 -- 192.168.123.104:0/2836313173 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f9ed403c070 con 0x7f9edc0055b0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee1573640 1 -- 192.168.123.104:0/2836313173 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f9ed4037440 con 0x7f9edc0055b0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 -- 192.168.123.104:0/2836313173 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc0055b0 msgr2=0x7f9edc0059b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 --2- 192.168.123.104:0/2836313173 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc0055b0 0x7f9edc0059b0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f9ed4009b80 tx=0x7f9ed402f190 comp rx=0 tx=0).stop 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 -- 192.168.123.104:0/2836313173 shutdown_connections 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 --2- 192.168.123.104:0/2836313173 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc0055b0 0x7f9edc0059b0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 -- 192.168.123.104:0/2836313173 >> 192.168.123.104:0/2836313173 conn(0x7f9edc09fc40 msgr2=0x7f9edc0a20a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 -- 192.168.123.104:0/2836313173 shutdown_connections 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 -- 192.168.123.104:0/2836313173 wait complete. 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 Processor -- start 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 -- start start 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc015cb0 0x7f9edc014420 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f9edc009a50 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc015cb0 0x7f9edc014420 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc015cb0 0x7f9edc014420 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55606/0 (socket says 192.168.123.104:55606) 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee2575640 1 -- 192.168.123.104:0/1425641499 learned_addr learned my addr 192.168.123.104:0/1425641499 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:08:24.544+0000 7f9ee2575640 1 -- 192.168.123.104:0/1425641499 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9edc0160d0 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee2575640 1 --2- 192.168.123.104:0/1425641499 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc015cb0 0x7f9edc014420 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f9ed402f6c0 tx=0x7f9ed4003940 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ed37fe640 1 -- 192.168.123.104:0/1425641499 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f9ed403c070 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 -- 192.168.123.104:0/1425641499 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f9edc014960 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee3577640 1 -- 192.168.123.104:0/1425641499 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f9edc014e40 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ed37fe640 1 -- 192.168.123.104:0/1425641499 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f9ed4044070 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ed37fe640 1 -- 192.168.123.104:0/1425641499 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f9ed4037440 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ed37fe640 1 -- 192.168.123.104:0/1425641499 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7f9ed40376a0 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ed37fe640 1 --2- 192.168.123.104:0/1425641499 >> v2:192.168.123.104:6800/2318507328 conn(0x7f9ec803dca0 0x7f9ec8040160 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.544+0000 7f9ee1d74640 1 --2- 192.168.123.104:0/1425641499 >> v2:192.168.123.104:6800/2318507328 conn(0x7f9ec803dca0 0x7f9ec8040160 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.548+0000 7f9ee1d74640 1 --2- 192.168.123.104:0/1425641499 >> v2:192.168.123.104:6800/2318507328 conn(0x7f9ec803dca0 0x7f9ec8040160 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f9ee4052310 tx=0x7f9ee406bbc0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:24.694 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.548+0000 7f9ed37fe640 1 -- 192.168.123.104:0/1425641499 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f9ed4076eb0 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.548+0000 7f9ee3577640 1 -- 192.168.123.104:0/1425641499 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9eb0005180 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.548+0000 7f9ed37fe640 1 -- 192.168.123.104:0/1425641499 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9ed4044220 con 0x7f9edc015cb0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.648+0000 7f9ee3577640 1 -- 192.168.123.104:0/1425641499 --> v2:192.168.123.104:6800/2318507328 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7f9eb0002bf0 con 0x7f9ec803dca0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 7f9ed37fe640 1 -- 192.168.123.104:0/1425641499 <== mgr.14118 v2:192.168.123.104:6800/2318507328 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f9eb0002bf0 con 0x7f9ec803dca0 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 7f9ee3577640 1 -- 192.168.123.104:0/1425641499 >> v2:192.168.123.104:6800/2318507328 conn(0x7f9ec803dca0 msgr2=0x7f9ec8040160 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 7f9ee3577640 1 --2- 192.168.123.104:0/1425641499 >> v2:192.168.123.104:6800/2318507328 conn(0x7f9ec803dca0 0x7f9ec8040160 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f9ee4052310 tx=0x7f9ee406bbc0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 7f9ee3577640 1 -- 192.168.123.104:0/1425641499 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc015cb0 msgr2=0x7f9edc014420 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 7f9ee3577640 1 --2- 192.168.123.104:0/1425641499 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc015cb0 0x7f9edc014420 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f9ed402f6c0 tx=0x7f9ed4003940 comp rx=0 tx=0).stop 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 7f9ee3577640 1 -- 192.168.123.104:0/1425641499 shutdown_connections 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 7f9ee3577640 1 --2- 192.168.123.104:0/1425641499 >> v2:192.168.123.104:6800/2318507328 conn(0x7f9ec803dca0 0x7f9ec8040160 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 
7f9ee3577640 1 --2- 192.168.123.104:0/1425641499 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9edc015cb0 0x7f9edc014420 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 7f9ee3577640 1 -- 192.168.123.104:0/1425641499 >> 192.168.123.104:0/1425641499 conn(0x7f9edc09fc40 msgr2=0x7f9edc0a0310 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 7f9ee3577640 1 -- 192.168.123.104:0/1425641499 shutdown_connections 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.656+0000 7f9ee3577640 1 -- 192.168.123.104:0/1425641499 wait complete. 2026-03-10T10:08:24.694 INFO:teuthology.orchestra.run.vm04.stdout:Deploying unmanaged mgr service... 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b9a6d2640 1 Processor -- start 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b9a6d2640 1 -- start start 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b9a6d2640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b941049e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b9a6d2640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f3b94104f20 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b93fff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b941049e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b93fff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b941049e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55618/0 (socket says 192.168.123.104:55618) 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b93fff640 1 -- 192.168.123.104:0/1473797427 learned_addr learned my addr 192.168.123.104:0/1473797427 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b93fff640 1 -- 192.168.123.104:0/1473797427 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3b941050a0 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b93fff640 1 --2- 192.168.123.104:0/1473797427 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 
0x7f3b941049e0 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f3b80009920 tx=0x7f3b8002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=68f7cd8aa5d0fd73 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b92ffd640 1 -- 192.168.123.104:0/1473797427 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3b8003c070 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b92ffd640 1 -- 192.168.123.104:0/1473797427 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f3b80037440 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b92ffd640 1 -- 192.168.123.104:0/1473797427 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3b800354d0 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/1473797427 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 msgr2=0x7f3b941049e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.796+0000 7f3b9a6d2640 1 --2- 192.168.123.104:0/1473797427 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b941049e0 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f3b80009920 tx=0x7f3b8002ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/1473797427 shutdown_connections 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 --2- 192.168.123.104:0/1473797427 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b941049e0 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/1473797427 >> 192.168.123.104:0/1473797427 conn(0x7f3b94100250 msgr2=0x7f3b941026b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/1473797427 shutdown_connections 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/1473797427 wait complete. 
2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 Processor -- start 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 -- start start 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b9419c710 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f3b9410ab50 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b93fff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b9419c710 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b93fff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b9419c710 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55624/0 (socket says 192.168.123.104:55624) 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b93fff640 1 -- 192.168.123.104:0/2362385467 learned_addr learned my addr 192.168.123.104:0/2362385467 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b93fff640 1 -- 192.168.123.104:0/2362385467 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3b9419cc50 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b93fff640 1 --2- 192.168.123.104:0/2362385467 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b9419c710 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f3b80006fd0 tx=0x7f3b80036af0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b917fa640 1 -- 192.168.123.104:0/2362385467 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3b8002f9b0 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b917fa640 1 -- 192.168.123.104:0/2362385467 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f3b8003f680 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b917fa640 1 -- 192.168.123.104:0/2362385467 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3b80048550 con 0x7f3b941045e0 2026-03-10T10:08:24.959 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/2362385467 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3b9419cee0 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/2362385467 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3b94105d80 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b917fa640 1 -- 192.168.123.104:0/2362385467 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7f3b8002fb50 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/2362385467 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3b94104a60 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.800+0000 7f3b917fa640 1 --2- 192.168.123.104:0/2362385467 >> v2:192.168.123.104:6800/2318507328 conn(0x7f3b6803d890 0x7f3b6803fd50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.804+0000 7f3b937fe640 1 --2- 192.168.123.104:0/2362385467 >> v2:192.168.123.104:6800/2318507328 conn(0x7f3b6803d890 0x7f3b6803fd50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.804+0000 7f3b917fa640 1 -- 192.168.123.104:0/2362385467 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f3b8007a630 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.804+0000 7f3b917fa640 1 -- 192.168.123.104:0/2362385467 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3b80044030 con 0x7f3b941045e0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.804+0000 7f3b937fe640 1 --2- 192.168.123.104:0/2362385467 >> v2:192.168.123.104:6800/2318507328 conn(0x7f3b6803d890 0x7f3b6803fd50 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f3b840099c0 tx=0x7f3b84006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:24.959 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.900+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/2362385467 --> v2:192.168.123.104:6800/2318507328 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}) -- 0x7f3b94101500 con 0x7f3b6803d890 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.904+0000 7f3b917fa640 1 -- 192.168.123.104:0/2362385467 <== mgr.14118 v2:192.168.123.104:6800/2318507328 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f3b94101500 con 
0x7f3b6803d890 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.908+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/2362385467 >> v2:192.168.123.104:6800/2318507328 conn(0x7f3b6803d890 msgr2=0x7f3b6803fd50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.908+0000 7f3b9a6d2640 1 --2- 192.168.123.104:0/2362385467 >> v2:192.168.123.104:6800/2318507328 conn(0x7f3b6803d890 0x7f3b6803fd50 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f3b840099c0 tx=0x7f3b84006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.908+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/2362385467 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 msgr2=0x7f3b9419c710 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.908+0000 7f3b9a6d2640 1 --2- 192.168.123.104:0/2362385467 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b9419c710 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f3b80006fd0 tx=0x7f3b80036af0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.908+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/2362385467 shutdown_connections 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.908+0000 7f3b9a6d2640 1 --2- 192.168.123.104:0/2362385467 >> v2:192.168.123.104:6800/2318507328 conn(0x7f3b6803d890 0x7f3b6803fd50 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.908+0000 7f3b9a6d2640 1 --2- 192.168.123.104:0/2362385467 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3b941045e0 0x7f3b9419c710 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.908+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/2362385467 >> 192.168.123.104:0/2362385467 conn(0x7f3b94100250 msgr2=0x7f3b94100df0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.908+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/2362385467 shutdown_connections 2026-03-10T10:08:24.960 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:24.908+0000 7f3b9a6d2640 1 -- 192.168.123.104:0/2362385467 wait complete. 
2026-03-10T10:08:25.206 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc648713640 1 Processor -- start 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc648713640 1 -- start start 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc648713640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc6401049e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc648713640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fc640104f20 con 0x7fc6401045e0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc646488640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc6401049e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc646488640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc6401049e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55626/0 (socket says 192.168.123.104:55626) 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc646488640 1 -- 192.168.123.104:0/3995740854 learned_addr learned my addr 192.168.123.104:0/3995740854 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc646488640 1 -- 192.168.123.104:0/3995740854 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc6401050a0 con 0x7fc6401045e0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc646488640 1 --2- 192.168.123.104:0/3995740854 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc6401049e0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7fc628009920 tx=0x7fc62802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d91a259a22d05d58 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc645486640 1 -- 192.168.123.104:0/3995740854 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc62803c070 con 0x7fc6401045e0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc645486640 1 -- 192.168.123.104:0/3995740854 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fc628037440 con 0x7fc6401045e0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc645486640 1 -- 192.168.123.104:0/3995740854 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc6280354d0 con 0x7fc6401045e0 2026-03-10T10:08:25.207 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc648713640 1 -- 192.168.123.104:0/3995740854 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 msgr2=0x7fc6401049e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.056+0000 7fc648713640 1 --2- 192.168.123.104:0/3995740854 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc6401049e0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7fc628009920 tx=0x7fc62802ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 -- 192.168.123.104:0/3995740854 shutdown_connections 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 --2- 192.168.123.104:0/3995740854 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc6401049e0 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 -- 192.168.123.104:0/3995740854 >> 192.168.123.104:0/3995740854 conn(0x7fc640100250 msgr2=0x7fc6401026b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 -- 192.168.123.104:0/3995740854 shutdown_connections 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 -- 192.168.123.104:0/3995740854 wait complete. 
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 Processor -- start 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 -- start start 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc64019a4a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fc640106880 con 0x7fc6401045e0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc646488640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc64019a4a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc646488640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc64019a4a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55642/0 (socket says 192.168.123.104:55642) 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc646488640 1 -- 192.168.123.104:0/654643366 learned_addr learned my addr 192.168.123.104:0/654643366 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc646488640 1 -- 192.168.123.104:0/654643366 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc64019a9e0 con 0x7fc6401045e0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc646488640 1 --2- 192.168.123.104:0/654643366 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc64019a4a0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fc628009a50 tx=0x7fc628035d70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc6377fe640 1 -- 192.168.123.104:0/654643366 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc62803c030 con 0x7fc6401045e0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc6377fe640 1 -- 192.168.123.104:0/654643366 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7fc62803e070 con 0x7fc6401045e0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc6377fe640 1 -- 192.168.123.104:0/654643366 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc628042c50 con 0x7fc6401045e0 2026-03-10T10:08:25.207 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 -- 192.168.123.104:0/654643366 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fc64019ac70 con 0x7fc6401045e0
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 -- 192.168.123.104:0/654643366 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fc64019b090 con 0x7fc6401045e0
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc6377fe640 1 -- 192.168.123.104:0/654643366 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7fc62804c430 con 0x7fc6401045e0
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc648713640 1 -- 192.168.123.104:0/654643366 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc608005180 con 0x7fc6401045e0
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.060+0000 7fc6377fe640 1 --2- 192.168.123.104:0/654643366 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc61803dc50 0x7fc618040110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.064+0000 7fc645c87640 1 --2- 192.168.123.104:0/654643366 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc61803dc50 0x7fc618040110 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.064+0000 7fc6377fe640 1 -- 192.168.123.104:0/654643366 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7fc62803d070 con 0x7fc6401045e0
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.064+0000 7fc6377fe640 1 -- 192.168.123.104:0/654643366 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fc62802fbc0 con 0x7fc6401045e0
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.064+0000 7fc645c87640 1 --2- 192.168.123.104:0/654643366 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc61803dc50 0x7fc618040110 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fc630009a10 tx=0x7fc630006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.160+0000 7fc648713640 1 -- 192.168.123.104:0/654643366 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) -- 0x7fc608005470 con 0x7fc6401045e0
2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc6377fe640 1 -- 192.168.123.104:0/654643366 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/container_init}]=0 v6) ==== 142+0+0 (secure 0 0 0) 0x7fc628042df0
con 0x7fc6401045e0 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc648713640 1 -- 192.168.123.104:0/654643366 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc61803dc50 msgr2=0x7fc618040110 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc648713640 1 --2- 192.168.123.104:0/654643366 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc61803dc50 0x7fc618040110 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fc630009a10 tx=0x7fc630006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc648713640 1 -- 192.168.123.104:0/654643366 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 msgr2=0x7fc64019a4a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc648713640 1 --2- 192.168.123.104:0/654643366 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc64019a4a0 secure :-1 s=READY pgs=50 cs=0 l=1 rev1=1 crypto rx=0x7fc628009a50 tx=0x7fc628035d70 comp rx=0 tx=0).stop 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc648713640 1 -- 192.168.123.104:0/654643366 shutdown_connections 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc648713640 1 --2- 192.168.123.104:0/654643366 >> v2:192.168.123.104:6800/2318507328 conn(0x7fc61803dc50 0x7fc618040110 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc648713640 1 --2- 192.168.123.104:0/654643366 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc6401045e0 0x7fc64019a4a0 unknown :-1 s=CLOSED pgs=50 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc648713640 1 -- 192.168.123.104:0/654643366 >> 192.168.123.104:0/654643366 conn(0x7fc640100250 msgr2=0x7fc640191050 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc648713640 1 -- 192.168.123.104:0/654643366 shutdown_connections 2026-03-10T10:08:25.207 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.168+0000 7fc648713640 1 -- 192.168.123.104:0/654643366 wait complete. 
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.308+0000 7f6179015640 1 Processor -- start 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.308+0000 7f6179015640 1 -- start start 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.308+0000 7f6179015640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f617407ac30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.308+0000 7f6179015640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f617407b170 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6172d76640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f617407ac30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6172d76640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f617407ac30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55646/0 (socket says 192.168.123.104:55646) 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6172d76640 1 -- 192.168.123.104:0/3299009782 learned_addr learned my addr 192.168.123.104:0/3299009782 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6172d76640 1 -- 192.168.123.104:0/3299009782 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f617407b2f0 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6172d76640 1 --2- 192.168.123.104:0/3299009782 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f617407ac30 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f615c009920 tx=0x7f615c02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=5e235af24dc3c8c2 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6171d74640 1 -- 192.168.123.104:0/3299009782 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f615c03c070 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6171d74640 1 -- 192.168.123.104:0/3299009782 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f615c037440 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 -- 192.168.123.104:0/3299009782 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 msgr2=0x7f617407ac30 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 --2- 192.168.123.104:0/3299009782 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f617407ac30 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f615c009920 tx=0x7f615c02ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 -- 192.168.123.104:0/3299009782 shutdown_connections 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 --2- 192.168.123.104:0/3299009782 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f617407ac30 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 -- 192.168.123.104:0/3299009782 >> 192.168.123.104:0/3299009782 conn(0x7f61741020d0 msgr2=0x7f61741044f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 -- 192.168.123.104:0/3299009782 shutdown_connections 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 -- 192.168.123.104:0/3299009782 wait complete. 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 Processor -- start 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 -- start start 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f61741a2f20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6179015640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f61741088c0 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6172d76640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f61741a2f20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6172d76640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f61741a2f20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55662/0 (socket says 192.168.123.104:55662) 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.312+0000 7f6172d76640 1 -- 192.168.123.104:0/4216453309 learned_addr learned my addr 192.168.123.104:0/4216453309 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:08:25.312+0000 7f6172d76640 1 -- 192.168.123.104:0/4216453309 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f61741a3460 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6172d76640 1 --2- 192.168.123.104:0/4216453309 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f61741a2f20 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f615c035ed0 tx=0x7f615c035f00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6153fff640 1 -- 192.168.123.104:0/4216453309 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f615c045070 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6153fff640 1 -- 192.168.123.104:0/4216453309 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f615c037ca0 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6179015640 1 -- 192.168.123.104:0/4216453309 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f61741a36f0 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6179015640 1 -- 192.168.123.104:0/4216453309 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f61741a63e0 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6153fff640 1 -- 192.168.123.104:0/4216453309 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f615c03c070 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6153fff640 1 -- 192.168.123.104:0/4216453309 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7f615c037850 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6153fff640 1 --2- 192.168.123.104:0/4216453309 >> v2:192.168.123.104:6800/2318507328 conn(0x7f614803dc00 0x7f61480400c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6153fff640 1 -- 192.168.123.104:0/4216453309 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f615c076350 con 0x7f617407c7d0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6172575640 1 --2- 192.168.123.104:0/4216453309 >> v2:192.168.123.104:6800/2318507328 conn(0x7f614803dc00 0x7f61480400c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6172575640 1 --2- 192.168.123.104:0/4216453309 >> 
v2:192.168.123.104:6800/2318507328 conn(0x7f614803dc00 0x7f61480400c0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f61680099c0 tx=0x7f6168006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.316+0000 7f6179015640 1 -- 192.168.123.104:0/4216453309 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6138005180 con 0x7f617407c7d0
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.320+0000 7f6153fff640 1 -- 192.168.123.104:0/4216453309 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f615c075cc0 con 0x7f617407c7d0
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.408+0000 7f6179015640 1 -- 192.168.123.104:0/4216453309 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0) -- 0x7f6138005470 con 0x7f617407c7d0
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6153fff640 1 -- 192.168.123.104:0/4216453309 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/dashboard/ssl_server_port}]=0 v7) ==== 130+0+0 (secure 0 0 0) 0x7f615c04a430 con 0x7f617407c7d0
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6179015640 1 -- 192.168.123.104:0/4216453309 >> v2:192.168.123.104:6800/2318507328 conn(0x7f614803dc00 msgr2=0x7f61480400c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6179015640 1 --2- 192.168.123.104:0/4216453309 >> v2:192.168.123.104:6800/2318507328 conn(0x7f614803dc00 0x7f61480400c0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f61680099c0 tx=0x7f6168006eb0 comp rx=0 tx=0).stop
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6179015640 1 -- 192.168.123.104:0/4216453309 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 msgr2=0x7f61741a2f20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6179015640 1 --2- 192.168.123.104:0/4216453309 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f61741a2f20 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f615c035ed0 tx=0x7f615c035f00 comp rx=0 tx=0).stop
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6179015640 1 -- 192.168.123.104:0/4216453309 shutdown_connections
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6179015640 1 --2- 192.168.123.104:0/4216453309 >> v2:192.168.123.104:6800/2318507328 conn(0x7f614803dc00 0x7f61480400c0 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:25.457 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6179015640 1 --2- 192.168.123.104:0/4216453309 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f617407c7d0 0x7f61741a2f20 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:25.458 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6179015640 1 -- 192.168.123.104:0/4216453309 >> 192.168.123.104:0/4216453309 conn(0x7f61741020d0 msgr2=0x7f61741044c0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:25.458 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6179015640 1 -- 192.168.123.104:0/4216453309 shutdown_connections
2026-03-10T10:08:25.458 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.416+0000 7f6179015640 1 -- 192.168.123.104:0/4216453309 wait complete.
2026-03-10T10:08:25.458 INFO:teuthology.orchestra.run.vm04.stdout:Enabling the dashboard module...
2026-03-10T10:08:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:25 vm04 bash[20742]: audit 2026-03-10T10:08:24.656660+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:25 vm04 bash[20742]: cephadm 2026-03-10T10:08:24.657506+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-10T10:08:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:25 vm04 bash[20742]: audit 2026-03-10T10:08:24.659934+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:08:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:25 vm04 bash[20742]: audit 2026-03-10T10:08:24.907544+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:25 vm04 bash[20742]: cephadm 2026-03-10T10:08:24.908186+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-10T10:08:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:25 vm04 bash[20742]: audit 2026-03-10T10:08:24.910913+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:08:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:25 vm04 bash[20742]: audit 2026-03-10T10:08:25.167348+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.104:0/654643366' entity='client.admin'
2026-03-10T10:08:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:25 vm04 bash[20742]: audit 2026-03-10T10:08:25.417615+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.104:0/4216453309' entity='client.admin'
2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f462ca640 1 Processor -- start
2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f462ca640 1 -- start start
2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f462ca640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f401049e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f462ca640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f5f40104f20 con 0x7f5f401045e0
2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f3ffff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f401049e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f3ffff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f401049e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55678/0 (socket says 192.168.123.104:55678)
2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f3ffff640 1 -- 192.168.123.104:0/2253004293 learned_addr learned my addr 192.168.123.104:0/2253004293 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f3ffff640 1 -- 192.168.123.104:0/2253004293 -->
[v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5f401050a0 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f3ffff640 1 --2- 192.168.123.104:0/2253004293 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f401049e0 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f5f30009b80 tx=0x7f5f3002f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c352ebbbe49c2df server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f3effd640 1 -- 192.168.123.104:0/2253004293 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5f3003c070 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f3effd640 1 -- 192.168.123.104:0/2253004293 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f5f30037440 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.560+0000 7f5f3effd640 1 -- 192.168.123.104:0/2253004293 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5f300353a0 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 -- 192.168.123.104:0/2253004293 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 msgr2=0x7f5f401049e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 --2- 192.168.123.104:0/2253004293 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f401049e0 secure :-1 s=READY pgs=53 cs=0 l=1 rev1=1 crypto rx=0x7f5f30009b80 tx=0x7f5f3002f190 comp rx=0 tx=0).stop 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 -- 192.168.123.104:0/2253004293 shutdown_connections 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 --2- 192.168.123.104:0/2253004293 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f401049e0 unknown :-1 s=CLOSED pgs=53 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 -- 192.168.123.104:0/2253004293 >> 192.168.123.104:0/2253004293 conn(0x7f5f40100250 msgr2=0x7f5f401026b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 -- 192.168.123.104:0/2253004293 shutdown_connections 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 -- 192.168.123.104:0/2253004293 wait complete. 
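The teardown above ("mark_down ... shutdown_connections ... wait complete.") is the normal end of a single /usr/bin/ceph invocation: each CLI call opens its own short-lived client session, issues one mon_command, and shuts the messenger down. A minimal sketch of driving the same config change from a test helper, assuming a plain subprocess wrapper; the helper name and the port value are illustrative, since the log does not show the value being set:

    import subprocess

    def ceph(*args: str) -> str:
        # invoke the same /usr/bin/ceph binary the task shells out to;
        # each call is one short-lived client session like those logged above
        return subprocess.run(["/usr/bin/ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # corresponds to the mon_command([{prefix=config set,
    # name=mgr/dashboard/ssl_server_port}]) acknowledged above (value assumed)
    ceph("config", "set", "mgr", "mgr/dashboard/ssl_server_port", "8443")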
2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 Processor -- start 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 -- start start 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f4019a4b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f5f40106880 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3ffff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f4019a4b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3ffff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f4019a4b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55682/0 (socket says 192.168.123.104:55682) 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3ffff640 1 -- 192.168.123.104:0/923033120 learned_addr learned my addr 192.168.123.104:0/923033120 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3ffff640 1 -- 192.168.123.104:0/923033120 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5f4019a9f0 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3ffff640 1 --2- 192.168.123.104:0/923033120 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f4019a4b0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f5f300091b0 tx=0x7f5f30036890 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3d7fa640 1 -- 192.168.123.104:0/923033120 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5f30047070 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3d7fa640 1 -- 192.168.123.104:0/923033120 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 998+0+0 (secure 0 0 0) 0x7f5f30042440 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3d7fa640 1 -- 192.168.123.104:0/923033120 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5f3003c070 con 0x7f5f401045e0 2026-03-10T10:08:26.729 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 -- 192.168.123.104:0/923033120 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5f4019ac80 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 -- 192.168.123.104:0/923033120 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5f4019b0a0 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3d7fa640 1 -- 192.168.123.104:0/923033120 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 8) ==== 50212+0+0 (secure 0 0 0) 0x7f5f30036c00 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3d7fa640 1 --2- 192.168.123.104:0/923033120 >> v2:192.168.123.104:6800/2318507328 conn(0x7f5f1403dc00 0x7f5f140400c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f3d7fa640 1 -- 192.168.123.104:0/923033120 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7f5f30076ad0 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.564+0000 7f5f462ca640 1 -- 192.168.123.104:0/923033120 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5f04005180 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.568+0000 7f5f3f7fe640 1 --2- 192.168.123.104:0/923033120 >> v2:192.168.123.104:6800/2318507328 conn(0x7f5f1403dc00 0x7f5f140400c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.568+0000 7f5f3d7fa640 1 -- 192.168.123.104:0/923033120 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f5f30047210 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.568+0000 7f5f3f7fe640 1 --2- 192.168.123.104:0/923033120 >> v2:192.168.123.104:6800/2318507328 conn(0x7f5f1403dc00 0x7f5f140400c0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f5f2c0099c0 tx=0x7f5f2c006eb0 comp rx=0 tx=0).ready entity=mgr.14118 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.696+0000 7f5f462ca640 1 -- 192.168.123.104:0/923033120 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0) -- 0x7f5f04005470 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:25.768+0000 7f5f3d7fa640 1 -- 192.168.123.104:0/923033120 <== mon.0 v2:192.168.123.104:3300/0 7 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f5f30076440 con 0x7f5f401045e0 2026-03-10T10:08:26.729 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.660+0000 7f5f3d7fa640 1 -- 192.168.123.104:0/923033120 <== mon.0 v2:192.168.123.104:3300/0 8 ==== mon_command_ack([{"prefix": "mgr module enable", "module": "dashboard"}]=0 v9) ==== 88+0+0 (secure 0 0 0) 0x7f5f30040340 con 0x7f5f401045e0 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.664+0000 7f5f462ca640 1 -- 192.168.123.104:0/923033120 >> v2:192.168.123.104:6800/2318507328 conn(0x7f5f1403dc00 msgr2=0x7f5f140400c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.664+0000 7f5f462ca640 1 --2- 192.168.123.104:0/923033120 >> v2:192.168.123.104:6800/2318507328 conn(0x7f5f1403dc00 0x7f5f140400c0 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f5f2c0099c0 tx=0x7f5f2c006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.664+0000 7f5f462ca640 1 -- 192.168.123.104:0/923033120 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 msgr2=0x7f5f4019a4b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.664+0000 7f5f462ca640 1 --2- 192.168.123.104:0/923033120 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f4019a4b0 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f5f300091b0 tx=0x7f5f30036890 comp rx=0 tx=0).stop 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.664+0000 7f5f462ca640 1 -- 192.168.123.104:0/923033120 shutdown_connections 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.664+0000 7f5f462ca640 1 --2- 192.168.123.104:0/923033120 >> v2:192.168.123.104:6800/2318507328 conn(0x7f5f1403dc00 0x7f5f140400c0 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.664+0000 7f5f462ca640 1 --2- 192.168.123.104:0/923033120 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5f401045e0 0x7f5f4019a4b0 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:26.729 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.664+0000 7f5f462ca640 1 -- 192.168.123.104:0/923033120 >> 192.168.123.104:0/923033120 conn(0x7f5f40100250 msgr2=0x7f5f40191110 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:26.730 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.664+0000 7f5f462ca640 1 -- 192.168.123.104:0/923033120 shutdown_connections 2026-03-10T10:08:26.730 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.664+0000 7f5f462ca640 1 -- 192.168.123.104:0/923033120 wait complete. 2026-03-10T10:08:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:26 vm04 bash[20742]: audit 2026-03-10T10:08:25.703248+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 
192.168.123.104:0/923033120' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T10:08:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:26 vm04 bash[20742]: audit 2026-03-10T10:08:25.772469+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' 2026-03-10T10:08:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:26 vm04 bash[20742]: audit 2026-03-10T10:08:26.030738+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' 2026-03-10T10:08:26.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:26 vm04 bash[20997]: ignoring --setuser ceph since I am not root 2026-03-10T10:08:26.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:26 vm04 bash[20997]: ignoring --setgroup ceph since I am not root 2026-03-10T10:08:26.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:26 vm04 bash[20997]: debug 2026-03-10T10:08:26.816+0000 7f8930b5b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T10:08:26.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:26 vm04 bash[20997]: debug 2026-03-10T10:08:26.856+0000 7f8930b5b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4cb8b1640 1 Processor -- start 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4cb8b1640 1 -- start start 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4cb8b1640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c4074c00 0x7fe4c4075000 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4cb8b1640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fe4c40755d0 con 0x7fe4c4074c00 2026-03-10T10:08:27.047
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4c9626640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c4074c00 0x7fe4c4075000 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4c9626640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c4074c00 0x7fe4c4075000 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55722/0 (socket says 192.168.123.104:55722) 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4c9626640 1 -- 192.168.123.104:0/2525536146 learned_addr learned my addr 192.168.123.104:0/2525536146 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4c9626640 1 -- 192.168.123.104:0/2525536146 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe4c410e4f0 con 0x7fe4c4074c00 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4c9626640 1 --2- 192.168.123.104:0/2525536146 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c4074c00 0x7fe4c4075000 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7fe4c0009eb0 tx=0x7fe4c0031460 comp rx=0 tx=0).ready entity=mon.0 client_cookie=39af4ec761fd9aa3 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4b3fff640 1 -- 192.168.123.104:0/2525536146 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe4c003e070 con 0x7fe4c4074c00 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4b3fff640 1 -- 192.168.123.104:0/2525536146 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fe4c0031dd0 con 0x7fe4c4074c00 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4cb8b1640 1 -- 192.168.123.104:0/2525536146 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c4074c00 msgr2=0x7fe4c4075000 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.876+0000 7fe4cb8b1640 1 --2- 192.168.123.104:0/2525536146 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c4074c00 0x7fe4c4075000 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7fe4c0009eb0 tx=0x7fe4c0031460 comp rx=0 tx=0).stop 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 -- 192.168.123.104:0/2525536146 shutdown_connections 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 --2- 192.168.123.104:0/2525536146 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c4074c00 0x7fe4c4075000 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 -- 192.168.123.104:0/2525536146 >> 192.168.123.104:0/2525536146 conn(0x7fe4c406fe80 msgr2=0x7fe4c40722e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 -- 192.168.123.104:0/2525536146 shutdown_connections 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 -- 192.168.123.104:0/2525536146 wait complete. 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 Processor -- start 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 -- start start 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c41a2c40 0x7fe4c41a3060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4c9626640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c41a2c40 0x7fe4c41a3060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4c9626640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c41a2c40 0x7fe4c41a3060 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55730/0 (socket says 192.168.123.104:55730) 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4c9626640 1 -- 192.168.123.104:0/1631284863 learned_addr learned my addr 192.168.123.104:0/1631284863 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 -- 192.168.123.104:0/1631284863 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fe4c410f020 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4c9626640 1 -- 192.168.123.104:0/1631284863 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fe4c41a35a0 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4c9626640 1 --2- 192.168.123.104:0/1631284863 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c41a2c40 0x7fe4c41a3060 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fe4c003c040 tx=0x7fe4c0004570 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4b27fc640 1 -- 192.168.123.104:0/1631284863 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map 
magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe4c003e070 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 -- 192.168.123.104:0/1631284863 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fe4c41a3830 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 -- 192.168.123.104:0/1631284863 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fe4c41a4300 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4cb8b1640 1 -- 192.168.123.104:0/1631284863 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fe4c4074c00 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4b27fc640 1 -- 192.168.123.104:0/1631284863 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fe4c0003b80 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4b27fc640 1 -- 192.168.123.104:0/1631284863 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fe4c00397f0 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4b27fc640 1 -- 192.168.123.104:0/1631284863 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 9) ==== 50225+0+0 (secure 0 0 0) 0x7fe4c0039990 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4b27fc640 1 --2- 192.168.123.104:0/1631284863 >> v2:192.168.123.104:6800/2318507328 conn(0x7fe4a003dc50 0x7fe4a0040110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4c8e25640 1 -- 192.168.123.104:0/1631284863 >> v2:192.168.123.104:6800/2318507328 conn(0x7fe4a003dc50 msgr2=0x7fe4a0040110 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.104:6800/2318507328 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4c8e25640 1 --2- 192.168.123.104:0/1631284863 >> v2:192.168.123.104:6800/2318507328 conn(0x7fe4a003dc50 0x7fe4a0040110 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.880+0000 7fe4b27fc640 1 -- 192.168.123.104:0/1631284863 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7fe4c0077f70 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.884+0000 7fe4b27fc640 1 -- 192.168.123.104:0/1631284863 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fe4c0048050 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 
2026-03-10T10:08:26.996+0000 7fe4cb8b1640 1 -- 192.168.123.104:0/1631284863 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "mgr stat"} v 0) -- 0x7fe4c41a4a10 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.996+0000 7fe4b27fc640 1 -- 192.168.123.104:0/1631284863 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "mgr stat"}]=0 v9) ==== 56+0+88 (secure 0 0 0) 0x7fe4c0038dd0 con 0x7fe4c41a2c40 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.996+0000 7fe497fff640 1 -- 192.168.123.104:0/1631284863 >> v2:192.168.123.104:6800/2318507328 conn(0x7fe4a003dc50 msgr2=0x7fe4a0040110 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:08:27.047 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.996+0000 7fe497fff640 1 --2- 192.168.123.104:0/1631284863 >> v2:192.168.123.104:6800/2318507328 conn(0x7fe4a003dc50 0x7fe4a0040110 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:27.048 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.996+0000 7fe497fff640 1 -- 192.168.123.104:0/1631284863 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c41a2c40 msgr2=0x7fe4c41a3060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:27.048 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:26.996+0000 7fe497fff640 1 --2- 192.168.123.104:0/1631284863 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c41a2c40 0x7fe4c41a3060 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fe4c003c040 tx=0x7fe4c0004570 comp rx=0 tx=0).stop 2026-03-10T10:08:27.048 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.004+0000 7fe497fff640 1 -- 192.168.123.104:0/1631284863 shutdown_connections 2026-03-10T10:08:27.048 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.004+0000 7fe497fff640 1 --2- 192.168.123.104:0/1631284863 >> v2:192.168.123.104:6800/2318507328 conn(0x7fe4a003dc50 0x7fe4a0040110 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:27.048 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.004+0000 7fe497fff640 1 --2- 192.168.123.104:0/1631284863 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fe4c41a2c40 0x7fe4c41a3060 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:27.048 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.004+0000 7fe497fff640 1 -- 192.168.123.104:0/1631284863 >> 192.168.123.104:0/1631284863 conn(0x7fe4c406fe80 msgr2=0x7fe4c40707e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:27.048 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.004+0000 7fe497fff640 1 -- 192.168.123.104:0/1631284863 shutdown_connections 2026-03-10T10:08:27.048 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.004+0000 7fe497fff640 1 -- 192.168.123.104:0/1631284863 wait complete. 2026-03-10T10:08:27.048 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for the mgr to restart... 2026-03-10T10:08:27.048 INFO:teuthology.orchestra.run.vm04.stdout:Waiting for mgr epoch 9... 
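"Waiting for mgr epoch 9..." polls the mgr map; the {"epoch": 9, "available": true, ...} JSON earlier in the log is `ceph mgr stat` output, and the audit stream shows the matching "mgr stat" dispatches. A polling sketch under those assumptions (the helper name and timeout are illustrative):

    import json
    import subprocess
    import time

    def wait_for_mgr_epoch(min_epoch: int, timeout: float = 180.0) -> dict:
        # poll `ceph mgr stat` until the mgrmap epoch reaches min_epoch and
        # the active mgr reports available; field names match the JSON above
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            stat = json.loads(subprocess.check_output(
                ["/usr/bin/ceph", "mgr", "stat"], text=True))
            if stat["epoch"] >= min_epoch and stat["available"]:
                return stat
            time.sleep(2)
        raise TimeoutError(f"mgr epoch {min_epoch} not reached in {timeout}s")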
2026-03-10T10:08:27.267 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:26 vm04 bash[20997]: debug 2026-03-10T10:08:26.968+0000 7f8930b5b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T10:08:27.657 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:27 vm04 bash[20997]: debug 2026-03-10T10:08:27.260+0000 7f8930b5b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T10:08:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:27 vm04 bash[20742]: audit 2026-03-10T10:08:26.667102+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.104:0/923033120' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T10:08:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:27 vm04 bash[20742]: cluster 2026-03-10T10:08:26.669484+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 6s) 2026-03-10T10:08:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:27 vm04 bash[20742]: audit 2026-03-10T10:08:27.000668+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.104:0/1631284863' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T10:08:27.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:27 vm04 bash[20997]: debug 2026-03-10T10:08:27.652+0000 7f8930b5b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T10:08:27.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:27 vm04 bash[20997]: debug 2026-03-10T10:08:27.728+0000 7f8930b5b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T10:08:27.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:27 vm04 bash[20997]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T10:08:27.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:27 vm04 bash[20997]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
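The "-1 mgr[py] Module <name> has missing NOTIFY_TYPES member" lines are the mgr flagging python modules that never declare which cluster notifications they consume. A sketch of the declaration that avoids the warning, assuming the in-tree mgr_module API; ExampleModule is hypothetical, not one of the modules listed above:

    from mgr_module import MgrModule, NotifyType

    class ExampleModule(MgrModule):
        # declaring the member tells the mgr which notifications to deliver;
        # leaving it out produces the load-time warning seen above
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            # only the subscribed notification types arrive here
            pass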
2026-03-10T10:08:27.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:27 vm04 bash[20997]: from numpy import show_config as show_numpy_config 2026-03-10T10:08:27.954 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:27 vm04 bash[20997]: debug 2026-03-10T10:08:27.840+0000 7f8930b5b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T10:08:28.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:27 vm04 bash[20997]: debug 2026-03-10T10:08:27.964+0000 7f8930b5b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T10:08:28.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.000+0000 7f8930b5b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T10:08:28.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.032+0000 7f8930b5b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T10:08:28.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.072+0000 7f8930b5b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T10:08:28.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.116+0000 7f8930b5b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T10:08:28.779 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.504+0000 7f8930b5b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T10:08:28.779 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.536+0000 7f8930b5b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T10:08:28.779 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.572+0000 7f8930b5b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T10:08:28.779 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.700+0000 7f8930b5b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T10:08:28.779 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.736+0000 7f8930b5b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T10:08:29.030 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.772+0000 7f8930b5b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T10:08:29.031 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:28 vm04 bash[20997]: debug 2026-03-10T10:08:28.876+0000 7f8930b5b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T10:08:29.394 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:29 vm04 bash[20997]: debug 2026-03-10T10:08:29.024+0000 7f8930b5b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T10:08:29.394 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:29 vm04 bash[20997]: debug 2026-03-10T10:08:29.184+0000 7f8930b5b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T10:08:29.394 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:29 vm04 bash[20997]: debug 2026-03-10T10:08:29.216+0000 7f8930b5b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T10:08:29.394 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:29 vm04 bash[20997]: debug 
2026-03-10T10:08:29.256+0000 7f8930b5b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T10:08:29.649 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:29 vm04 bash[20997]: debug 2026-03-10T10:08:29.388+0000 7f8930b5b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T10:08:29.649 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:08:29 vm04 bash[20997]: debug 2026-03-10T10:08:29.588+0000 7f8930b5b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: cluster 2026-03-10T10:08:29.595102+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: cluster 2026-03-10T10:08:29.595534+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: cluster 2026-03-10T10:08:29.601364+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: cluster 2026-03-10T10:08:29.601482+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00606578s) 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: audit 2026-03-10T10:08:29.603257+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: audit 2026-03-10T10:08:29.603351+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: audit 2026-03-10T10:08:29.603851+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: audit 2026-03-10T10:08:29.604009+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: audit 2026-03-10T10:08:29.604190+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: cluster 2026-03-10T10:08:29.607998+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: audit 2026-03-10T10:08:29.623051+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: audit 2026-03-10T10:08:29.624299+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T10:08:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:29 vm04 bash[20742]: audit 2026-03-10T10:08:29.625008+0000 mon.a (mon.0) 85 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:08:30.655 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout { 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout } 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 Processor -- start 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 -- start start 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc074fd0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fb1bc0755a0 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1baffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc074fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1baffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc074fd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55742/0 (socket says 192.168.123.104:55742) 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1baffd640 1 -- 192.168.123.104:0/351795261 learned_addr learned my addr 192.168.123.104:0/351795261 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1baffd640 1 -- 192.168.123.104:0/351795261 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb1bc10e4c0 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1baffd640 1 --2- 192.168.123.104:0/351795261 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc074fd0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7fb1ac009b80 tx=0x7fb1ac02f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=6ec20ff5d24538f7 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1b9ffb640 1 -- 192.168.123.104:0/351795261 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb1ac03c070 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1b9ffb640 1 -- 192.168.123.104:0/351795261 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fb1ac037440 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1b9ffb640 1 -- 192.168.123.104:0/351795261 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb1ac0354d0 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 -- 192.168.123.104:0/351795261 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 msgr2=0x7fb1bc074fd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 --2- 192.168.123.104:0/351795261 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc074fd0 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7fb1ac009b80 tx=0x7fb1ac02f190 comp rx=0 tx=0).stop 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 -- 192.168.123.104:0/351795261 shutdown_connections 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 --2- 192.168.123.104:0/351795261 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc074fd0 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 -- 192.168.123.104:0/351795261 >> 192.168.123.104:0/351795261 conn(0x7fb1bc06fe30 msgr2=0x7fb1bc0722b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 -- 192.168.123.104:0/351795261 shutdown_connections 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 -- 192.168.123.104:0/351795261 wait complete. 
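Each CLI call above walks one msgr2 connection through the same states: s=NONE -> BANNER_CONNECTING -> HELLO_CONNECTING -> READY, then mark_down and s=CLOSED at teardown. A small sketch for extracting those transitions from a saved log; the regex is inferred from the line format shown here, not from any teuthology tooling:

    import re
    from typing import Iterable, Iterator, Tuple

    # matches e.g. ">> [v2:...:3300/0,v1:...:6789/0] conn(0x... 0x... secure :-1 s=READY"
    STATE_RE = re.compile(r">> (?P<peer>\S+) conn\(.*? s=(?P<state>[A-Z_]+)")

    def msgr_transitions(lines: Iterable[str]) -> Iterator[Tuple[str, str]]:
        for line in lines:
            m = STATE_RE.search(line)
            if m:
                yield m.group("peer"), m.group("state")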
2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 Processor -- start 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 -- start start 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc1a42b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1bbfff640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fb1bc10eff0 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1baffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc1a42b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1baffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc1a42b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55756/0 (socket says 192.168.123.104:55756) 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1baffd640 1 -- 192.168.123.104:0/2578077696 learned_addr learned my addr 192.168.123.104:0/2578077696 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1baffd640 1 -- 192.168.123.104:0/2578077696 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb1bc1a47f0 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.180+0000 7fb1baffd640 1 --2- 192.168.123.104:0/2578077696 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc1a42b0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7fb1ac0356f0 tx=0x7fb1ac037f80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb1ac047070 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fb1ac035e30 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb1ac03c070 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb1bbfff640 1 -- 192.168.123.104:0/2578077696 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb1bc1a29e0 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb1bbfff640 1 -- 192.168.123.104:0/2578077696 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fb1bc1a2ec0 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 9) ==== 50225+0+0 (secure 0 0 0) 0x7fb1ac042640 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb19bfff640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 0x7fb190040160 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 --> v2:192.168.123.104:6800/2318507328 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7fb190040830 con 0x7fb19003dca0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 897+0+0 (secure 0 0 0) 0x7fb1ac076e00 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb1ba7fc640 1 -- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 msgr2=0x7fb190040160 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.104:6800/2318507328 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.184+0000 7fb1ba7fc640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 0x7fb190040160 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.384+0000 7fb1ba7fc640 1 -- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 msgr2=0x7fb190040160 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.104:6800/2318507328 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.384+0000 7fb1ba7fc640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 0x7fb190040160 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.400000 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.784+0000 7fb1ba7fc640 1 -- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 msgr2=0x7fb190040160 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.104:6800/2318507328 2026-03-10T10:08:30.656 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:27.784+0000 7fb1ba7fc640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 0x7fb190040160 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.800000 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:28.584+0000 7fb1ba7fc640 1 -- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 msgr2=0x7fb190040160 unknown :-1 s=STATE_CONNECTING_RE l=1).process reconnect failed to v2:192.168.123.104:6800/2318507328 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:28.584+0000 7fb1ba7fc640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 0x7fb190040160 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 1.600000 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:29.592+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mgrmap(e 10) ==== 50027+0+0 (secure 0 0 0) 0x7fb1ac075810 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:29.592+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 msgr2=0x7fb190040160 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:29.592+0000 7fb19bfff640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 0x7fb190040160 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.596+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mgrmap(e 11) ==== 50119+0+0 (secure 0 0 0) 0x7fb1ac075f10 con 0x7fb1bc074bd0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.596+0000 7fb19bfff640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/632047608 conn(0x7fb190041600 0x7fb1900439f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.596+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 --> v2:192.168.123.104:6800/632047608 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7fb1ac006910 con 0x7fb190041600 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.600+0000 7fb1ba7fc640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/632047608 conn(0x7fb190041600 0x7fb1900439f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:30.656 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.600+0000 7fb1ba7fc640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/632047608 conn(0x7fb190041600 0x7fb1900439f0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7fb1b4003e00 tx=0x7fb1b4007410 comp rx=0 tx=0).ready entity=mgr.14150 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.600+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== command_reply(tid 0: 0 ) ==== 8+0+8901 (secure 0 0 0) 0x7fb1ac006910 con 0x7fb190041600 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 -- 192.168.123.104:0/2578077696 --> v2:192.168.123.104:6800/632047608 -- command(tid 1: {"prefix": "mgr_status"}) -- 0x7fb1bc074fd0 con 0x7fb190041600 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb19bfff640 1 -- 192.168.123.104:0/2578077696 <== mgr.14150 v2:192.168.123.104:6800/632047608 2 ==== command_reply(tid 1: 0 ) ==== 8+0+52 (secure 0 0 0) 0x7fb1bc074fd0 con 0x7fb190041600 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 -- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/632047608 conn(0x7fb190041600 msgr2=0x7fb1900439f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/632047608 conn(0x7fb190041600 0x7fb1900439f0 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7fb1b4003e00 tx=0x7fb1b4007410 comp rx=0 tx=0).stop 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 -- 192.168.123.104:0/2578077696 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 msgr2=0x7fb1bc1a42b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 --2- 192.168.123.104:0/2578077696 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc1a42b0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7fb1ac0356f0 tx=0x7fb1ac037f80 comp rx=0 tx=0).stop 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 -- 192.168.123.104:0/2578077696 shutdown_connections 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/632047608 conn(0x7fb190041600 0x7fb1900439f0 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 --2- 192.168.123.104:0/2578077696 >> v2:192.168.123.104:6800/2318507328 conn(0x7fb19003dca0 0x7fb190040160 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 --2- 192.168.123.104:0/2578077696 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb1bc074bd0 0x7fb1bc1a42b0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 
7fb1bbfff640 1 -- 192.168.123.104:0/2578077696 >> 192.168.123.104:0/2578077696 conn(0x7fb1bc06fe30 msgr2=0x7fb1bc070960 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 -- 192.168.123.104:0/2578077696 shutdown_connections 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.604+0000 7fb1bbfff640 1 -- 192.168.123.104:0/2578077696 wait complete. 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:mgr epoch 9 is available 2026-03-10T10:08:30.657 INFO:teuthology.orchestra.run.vm04.stdout:Generating a dashboard self-signed certificate... 2026-03-10T10:08:30.964 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T10:08:30.964 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.760+0000 7fd9e413c640 1 Processor -- start 2026-03-10T10:08:30.964 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.760+0000 7fd9e413c640 1 -- start start 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.764+0000 7fd9e413c640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc109820 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.764+0000 7fd9e413c640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fd9dc07a6e0 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.764+0000 7fd9e1eb1640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc109820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.764+0000 7fd9e1eb1640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc109820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55818/0 (socket says 192.168.123.104:55818) 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.764+0000 7fd9e1eb1640 1 -- 192.168.123.104:0/3511989834 learned_addr learned my addr 192.168.123.104:0/3511989834 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.764+0000 7fd9e1eb1640 1 -- 192.168.123.104:0/3511989834 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd9dc109d60 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.764+0000 7fd9e1eb1640 1 --2- 192.168.123.104:0/3511989834 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc109820 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fd9cc009920 tx=0x7fd9cc02ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=3b33b81a9429c8d5 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:30.965 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.768+0000 7fd9e0eaf640 1 -- 192.168.123.104:0/3511989834 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd9cc03c070 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.768+0000 7fd9e0eaf640 1 -- 192.168.123.104:0/3511989834 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fd9cc037440 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.768+0000 7fd9e0eaf640 1 -- 192.168.123.104:0/3511989834 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd9cc0354d0 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.768+0000 7fd9e413c640 1 -- 192.168.123.104:0/3511989834 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 msgr2=0x7fd9dc109820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.768+0000 7fd9e413c640 1 --2- 192.168.123.104:0/3511989834 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc109820 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fd9cc009920 tx=0x7fd9cc02ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.772+0000 7fd9e413c640 1 -- 192.168.123.104:0/3511989834 shutdown_connections 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.772+0000 7fd9e413c640 1 --2- 192.168.123.104:0/3511989834 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc109820 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.772+0000 7fd9e413c640 1 -- 192.168.123.104:0/3511989834 >> 192.168.123.104:0/3511989834 conn(0x7fd9dc100f90 msgr2=0x7fd9dc1033b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.772+0000 7fd9e413c640 1 -- 192.168.123.104:0/3511989834 shutdown_connections 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.772+0000 7fd9e413c640 1 -- 192.168.123.104:0/3511989834 wait complete. 
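The repeated "reconnect failed to v2:192.168.123.104:6800/2318507328" and "_fault waiting 0.200000 / 0.400000 / 0.800000 / 1.600000" records above are the messenger backing off exponentially while the previous mgr instance restarts; the loop ends not with a successful reconnect but when mgrmap e11 arrives and the stale connection is marked down in favour of the new instance (632047608). A doubling-delay retry in that style, as a minimal Python sketch (the real policy lives in the C++ AsyncMessenger; the cap value here is assumed):

    import time

    def retry_with_backoff(attempt, first_delay=0.2, factor=2.0, cap=15.0):
        """Retry `attempt` until it succeeds, doubling the wait per failure."""
        delay = first_delay
        while True:
            try:
                return attempt()
            except ConnectionError:
                time.sleep(delay)                 # "_fault waiting <delay>"
                delay = min(delay * factor, cap)  # 0.2, 0.4, 0.8, 1.6, ...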
2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e413c640 1 Processor -- start 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e413c640 1 -- start start 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e413c640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc19a440 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e413c640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fd9dc105e10 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e1eb1640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc19a440 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e1eb1640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc19a440 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55832/0 (socket says 192.168.123.104:55832) 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e1eb1640 1 -- 192.168.123.104:0/2239460323 learned_addr learned my addr 192.168.123.104:0/2239460323 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e1eb1640 1 -- 192.168.123.104:0/2239460323 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd9dc19a980 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e1eb1640 1 --2- 192.168.123.104:0/2239460323 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc19a440 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7fd9cc009a50 tx=0x7fd9cc02ff50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9caffd640 1 -- 192.168.123.104:0/2239460323 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd9cc03c070 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9caffd640 1 -- 192.168.123.104:0/2239460323 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fd9cc035e50 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e413c640 1 -- 192.168.123.104:0/2239460323 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd9dc19ac10 con 0x7fd9dc107430 2026-03-10T10:08:30.965 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.776+0000 7fd9e413c640 1 -- 192.168.123.104:0/2239460323 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd9dc19d900 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.780+0000 7fd9caffd640 1 -- 192.168.123.104:0/2239460323 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd9cc042df0 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.780+0000 7fd9caffd640 1 -- 192.168.123.104:0/2239460323 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 11) ==== 50119+0+0 (secure 0 0 0) 0x7fd9cc04c430 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.780+0000 7fd9caffd640 1 --2- 192.168.123.104:0/2239460323 >> v2:192.168.123.104:6800/632047608 conn(0x7fd9c003dbd0 0x7fd9c0040090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.780+0000 7fd9caffd640 1 -- 192.168.123.104:0/2239460323 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7fd9cc077c80 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.780+0000 7fd9e16b0640 1 --2- 192.168.123.104:0/2239460323 >> v2:192.168.123.104:6800/632047608 conn(0x7fd9c003dbd0 0x7fd9c0040090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.780+0000 7fd9e16b0640 1 --2- 192.168.123.104:0/2239460323 >> v2:192.168.123.104:6800/632047608 conn(0x7fd9c003dbd0 0x7fd9c0040090 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fd9d00099c0 tx=0x7fd9d0006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.780+0000 7fd9e413c640 1 -- 192.168.123.104:0/2239460323 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd9dc19b0f0 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.784+0000 7fd9caffd640 1 -- 192.168.123.104:0/2239460323 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd9dc19b0f0 con 0x7fd9dc107430 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.888+0000 7fd9e413c640 1 -- 192.168.123.104:0/2239460323 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}) -- 0x7fd9dc1032b0 con 0x7fd9c003dbd0 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.924+0000 7fd9caffd640 1 -- 192.168.123.104:0/2239460323 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7fd9dc1032b0 con 0x7fd9c003dbd0 
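The {"prefix": "dashboard create-self-signed-cert"} mgr_command above is what produced the earlier "Self-signed certificate created" line. What such a certificate amounts to, sketched with the Python `cryptography` package (illustrative only; the dashboard module has its own helper, and the common name and validity period below are assumptions):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    # Self-signed: subject and issuer are the same (hypothetical) name.
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u'ceph-dashboard')])
    cert = (x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow()
                             + datetime.timedelta(days=365))
            .sign(key, hashes.SHA256()))
    pem = cert.public_bytes(serialization.Encoding.PEM)  # what gets stored and served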
2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.928+0000 7fd9c8ff9640 1 -- 192.168.123.104:0/2239460323 >> v2:192.168.123.104:6800/632047608 conn(0x7fd9c003dbd0 msgr2=0x7fd9c0040090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.928+0000 7fd9c8ff9640 1 --2- 192.168.123.104:0/2239460323 >> v2:192.168.123.104:6800/632047608 conn(0x7fd9c003dbd0 0x7fd9c0040090 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7fd9d00099c0 tx=0x7fd9d0006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.928+0000 7fd9c8ff9640 1 -- 192.168.123.104:0/2239460323 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 msgr2=0x7fd9dc19a440 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.928+0000 7fd9c8ff9640 1 --2- 192.168.123.104:0/2239460323 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc19a440 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7fd9cc009a50 tx=0x7fd9cc02ff50 comp rx=0 tx=0).stop 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.928+0000 7fd9c8ff9640 1 -- 192.168.123.104:0/2239460323 shutdown_connections 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.928+0000 7fd9c8ff9640 1 --2- 192.168.123.104:0/2239460323 >> v2:192.168.123.104:6800/632047608 conn(0x7fd9c003dbd0 0x7fd9c0040090 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.928+0000 7fd9c8ff9640 1 --2- 192.168.123.104:0/2239460323 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd9dc107430 0x7fd9dc19a440 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.928+0000 7fd9c8ff9640 1 -- 192.168.123.104:0/2239460323 >> 192.168.123.104:0/2239460323 conn(0x7fd9dc100f90 msgr2=0x7fd9dc101b90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.928+0000 7fd9c8ff9640 1 -- 192.168.123.104:0/2239460323 shutdown_connections 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:30.928+0000 7fd9c8ff9640 1 -- 192.168.123.104:0/2239460323 wait complete. 2026-03-10T10:08:30.965 INFO:teuthology.orchestra.run.vm04.stdout:Creating initial admin user... 
2026-03-10T10:08:31.354 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$/TTaR0.nrBnSZpajvOBTuujTPqd1R7K7dn5ZMB/rGU8AdC1L3oodq", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773137311, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T10:08:31.354 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 Processor -- start 2026-03-10T10:08:31.354 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 -- start start 2026-03-10T10:08:31.354 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac8108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:31.354 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7faac8109540 con 0x7faac8108b70 2026-03-10T10:08:31.354 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faac6ffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac8108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faac6ffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac8108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55840/0 (socket says 192.168.123.104:55840) 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faac6ffd640 1 -- 192.168.123.104:0/4147624158 learned_addr learned my addr 192.168.123.104:0/4147624158 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faac6ffd640 1 -- 192.168.123.104:0/4147624158 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7faac8109d70 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faac6ffd640 1 --2- 192.168.123.104:0/4147624158 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac8108f70 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7faab0009920 tx=0x7faab002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=aa588d610df43cf6 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faac5ffb640 1 -- 192.168.123.104:0/4147624158 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7faab003c070 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faac5ffb640 1 -- 192.168.123.104:0/4147624158 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 
0x7faab0037440 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 -- 192.168.123.104:0/4147624158 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 msgr2=0x7faac8108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 --2- 192.168.123.104:0/4147624158 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac8108f70 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7faab0009920 tx=0x7faab002ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 -- 192.168.123.104:0/4147624158 shutdown_connections 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 --2- 192.168.123.104:0/4147624158 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac8108f70 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 -- 192.168.123.104:0/4147624158 >> 192.168.123.104:0/4147624158 conn(0x7faac807c040 msgr2=0x7faac807c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 -- 192.168.123.104:0/4147624158 shutdown_connections 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 -- 192.168.123.104:0/4147624158 wait complete. 
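The ac-user-create reply above returns the stored credential as a bcrypt hash ("$2b$12$..."), i.e. cost factor 12 with a random salt; only the hash is persisted, and logins are verified by re-hashing. The scheme, sketched with the Python `bcrypt` package (illustrative; the password literal is the generated one reported further below in this log):

    import bcrypt

    password = b'6yfhfw7jto'  # generated password, reported further below
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))  # "$2b$12$..."
    assert bcrypt.checkpw(password, hashed)  # login check: compare, never decrypt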
2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 Processor -- start 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 -- start start 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac80804f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.064+0000 7faacd5a5640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7faac810cfa0 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faac6ffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac80804f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faac6ffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac80804f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55842/0 (socket says 192.168.123.104:55842) 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faac6ffd640 1 -- 192.168.123.104:0/2586967957 learned_addr learned my addr 192.168.123.104:0/2586967957 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faac6ffd640 1 -- 192.168.123.104:0/2586967957 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7faac8080a30 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faac6ffd640 1 --2- 192.168.123.104:0/2586967957 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac80804f0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7faab002f450 tx=0x7faab0037c20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faaa7fff640 1 -- 192.168.123.104:0/2586967957 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7faab0045070 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faaa7fff640 1 -- 192.168.123.104:0/2586967957 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7faab0035c20 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faaa7fff640 1 -- 192.168.123.104:0/2586967957 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7faab003c070 con 0x7faac8108b70 2026-03-10T10:08:31.355 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faacd5a5640 1 -- 192.168.123.104:0/2586967957 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7faac8080cc0 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faacd5a5640 1 -- 192.168.123.104:0/2586967957 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7faac807d030 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faaa7fff640 1 -- 192.168.123.104:0/2586967957 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 11) ==== 50119+0+0 (secure 0 0 0) 0x7faab00493b0 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faacd5a5640 1 -- 192.168.123.104:0/2586967957 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7faa8c005180 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.068+0000 7faaa7fff640 1 --2- 192.168.123.104:0/2586967957 >> v2:192.168.123.104:6800/632047608 conn(0x7faa9c03db30 0x7faa9c03fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.072+0000 7faac67fc640 1 --2- 192.168.123.104:0/2586967957 >> v2:192.168.123.104:6800/632047608 conn(0x7faa9c03db30 0x7faa9c03fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.072+0000 7faaa7fff640 1 -- 192.168.123.104:0/2586967957 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7faab00772e0 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.072+0000 7faaa7fff640 1 -- 192.168.123.104:0/2586967957 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7faab0043120 con 0x7faac8108b70 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.072+0000 7faac67fc640 1 --2- 192.168.123.104:0/2586967957 >> v2:192.168.123.104:6800/632047608 conn(0x7faa9c03db30 0x7faa9c03fff0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7faabc009a10 tx=0x7faabc006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.164+0000 7faacd5a5640 1 -- 192.168.123.104:0/2586967957 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}) -- 0x7faa8c003c00 con 0x7faa9c03db30 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.316+0000 7faaa7fff640 1 -- 192.168.123.104:0/2586967957 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== 
mgr_command_reply(tid 0: 0 ) ==== 8+0+252 (secure 0 0 0) 0x7faa8c003c00 con 0x7faa9c03db30 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.316+0000 7faacd5a5640 1 -- 192.168.123.104:0/2586967957 >> v2:192.168.123.104:6800/632047608 conn(0x7faa9c03db30 msgr2=0x7faa9c03fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.316+0000 7faacd5a5640 1 --2- 192.168.123.104:0/2586967957 >> v2:192.168.123.104:6800/632047608 conn(0x7faa9c03db30 0x7faa9c03fff0 secure :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0x7faabc009a10 tx=0x7faabc006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.316+0000 7faacd5a5640 1 -- 192.168.123.104:0/2586967957 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 msgr2=0x7faac80804f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.316+0000 7faacd5a5640 1 --2- 192.168.123.104:0/2586967957 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac80804f0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7faab002f450 tx=0x7faab0037c20 comp rx=0 tx=0).stop 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.316+0000 7faacd5a5640 1 -- 192.168.123.104:0/2586967957 shutdown_connections 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.316+0000 7faacd5a5640 1 --2- 192.168.123.104:0/2586967957 >> v2:192.168.123.104:6800/632047608 conn(0x7faa9c03db30 0x7faa9c03fff0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.316+0000 7faacd5a5640 1 --2- 192.168.123.104:0/2586967957 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faac8108b70 0x7faac80804f0 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.316+0000 7faacd5a5640 1 -- 192.168.123.104:0/2586967957 >> 192.168.123.104:0/2586967957 conn(0x7faac807c040 msgr2=0x7faac8105f50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.320+0000 7faacd5a5640 1 -- 192.168.123.104:0/2586967957 shutdown_connections 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.320+0000 7faacd5a5640 1 -- 192.168.123.104:0/2586967957 wait complete. 2026-03-10T10:08:31.355 INFO:teuthology.orchestra.run.vm04.stdout:Fetching dashboard port number... 
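The port lookup announced here resolves, a few records below, to 8443, fetched via mon_command({"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}). An equivalent standalone query, as a sketch wrapping the same ceph CLI used throughout this run:

    import subprocess

    # CLI form of the mon_command the bootstrap issues below.
    port = subprocess.check_output(
        ['ceph', 'config', 'get', 'mgr', 'mgr/dashboard/ssl_server_port'],
        text=True).strip()
    print(port)  # e.g. "8443"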
2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 Processor -- start 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 -- start start 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef8108f70 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7faef8109540 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef77fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef8108f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef77fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef8108f70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55850/0 (socket says 192.168.123.104:55850) 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef77fe640 1 -- 192.168.123.104:0/4626582 learned_addr learned my addr 192.168.123.104:0/4626582 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef77fe640 1 -- 192.168.123.104:0/4626582 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7faef8109d70 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef77fe640 1 --2- 192.168.123.104:0/4626582 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef8108f70 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7faee8009920 tx=0x7faee802ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=703d61e4dad83692 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef67fc640 1 -- 192.168.123.104:0/4626582 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7faee803c070 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef67fc640 1 -- 192.168.123.104:0/4626582 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7faee8037440 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 -- 192.168.123.104:0/4626582 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 
msgr2=0x7faef8108f70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 --2- 192.168.123.104:0/4626582 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef8108f70 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7faee8009920 tx=0x7faee802ef20 comp rx=0 tx=0).stop 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 -- 192.168.123.104:0/4626582 shutdown_connections 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 --2- 192.168.123.104:0/4626582 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef8108f70 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 -- 192.168.123.104:0/4626582 >> 192.168.123.104:0/4626582 conn(0x7faef807c040 msgr2=0x7faef807c450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 -- 192.168.123.104:0/4626582 shutdown_connections 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 -- 192.168.123.104:0/4626582 wait complete. 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 Processor -- start 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 -- start start 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef819ece0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faefdd4f640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7faef810cfa0 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef77fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef819ece0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef77fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef819ece0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55862/0 (socket says 192.168.123.104:55862) 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef77fe640 1 -- 192.168.123.104:0/487836184 learned_addr learned my addr 192.168.123.104:0/487836184 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:31.587 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef77fe640 1 -- 192.168.123.104:0/487836184 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7faef819f220 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.452+0000 7faef77fe640 1 --2- 192.168.123.104:0/487836184 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef819ece0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7faee8037b20 tx=0x7faee8037b50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.456+0000 7faef4ff9640 1 -- 192.168.123.104:0/487836184 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7faee8045070 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.456+0000 7faefdd4f640 1 -- 192.168.123.104:0/487836184 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7faef819f4b0 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.456+0000 7faefdd4f640 1 -- 192.168.123.104:0/487836184 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7faef81a2190 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.456+0000 7faef4ff9640 1 -- 192.168.123.104:0/487836184 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7faee8036c10 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.456+0000 7faef4ff9640 1 -- 192.168.123.104:0/487836184 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7faee804f050 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.456+0000 7faef4ff9640 1 -- 192.168.123.104:0/487836184 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 11) ==== 50119+0+0 (secure 0 0 0) 0x7faee80497e0 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.456+0000 7faef4ff9640 1 --2- 192.168.123.104:0/487836184 >> v2:192.168.123.104:6800/632047608 conn(0x7faecc0464f0 0x7faecc0489b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.456+0000 7faef4ff9640 1 -- 192.168.123.104:0/487836184 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7faee8077180 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.456+0000 7faefdd4f640 1 -- 192.168.123.104:0/487836184 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7faebc005180 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.456+0000 7faef6ffd640 1 --2- 192.168.123.104:0/487836184 >> v2:192.168.123.104:6800/632047608 
conn(0x7faecc0464f0 0x7faecc0489b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.460+0000 7faef4ff9640 1 -- 192.168.123.104:0/487836184 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7faee80476e0 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.460+0000 7faef6ffd640 1 --2- 192.168.123.104:0/487836184 >> v2:192.168.123.104:6800/632047608 conn(0x7faecc0464f0 0x7faecc0489b0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7faee4009a10 tx=0x7faee4006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.544+0000 7faefdd4f640 1 -- 192.168.123.104:0/487836184 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"} v 0) -- 0x7faebc005470 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.548+0000 7faef4ff9640 1 -- 192.168.123.104:0/487836184 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]=0 v8) ==== 112+0+5 (secure 0 0 0) 0x7faee803ee30 con 0x7faef8108b70 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.548+0000 7faefdd4f640 1 -- 192.168.123.104:0/487836184 >> v2:192.168.123.104:6800/632047608 conn(0x7faecc0464f0 msgr2=0x7faecc0489b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.548+0000 7faefdd4f640 1 --2- 192.168.123.104:0/487836184 >> v2:192.168.123.104:6800/632047608 conn(0x7faecc0464f0 0x7faecc0489b0 secure :-1 s=READY pgs=9 cs=0 l=1 rev1=1 crypto rx=0x7faee4009a10 tx=0x7faee4006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:31.587 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.548+0000 7faefdd4f640 1 -- 192.168.123.104:0/487836184 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 msgr2=0x7faef819ece0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.548+0000 7faefdd4f640 1 --2- 192.168.123.104:0/487836184 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef819ece0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7faee8037b20 tx=0x7faee8037b50 comp rx=0 tx=0).stop 2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.548+0000 7faefdd4f640 1 -- 192.168.123.104:0/487836184 shutdown_connections 2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.548+0000 7faefdd4f640 1 --2- 192.168.123.104:0/487836184 >> v2:192.168.123.104:6800/632047608 conn(0x7faecc0464f0 0x7faecc0489b0 unknown :-1 s=CLOSED pgs=9 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: 
stderr 2026-03-10T10:08:31.548+0000 7faefdd4f640 1 --2- 192.168.123.104:0/487836184 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faef8108b70 0x7faef819ece0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.548+0000 7faefdd4f640 1 -- 192.168.123.104:0/487836184 >> 192.168.123.104:0/487836184 conn(0x7faef807c040 msgr2=0x7faef8106370 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.548+0000 7faefdd4f640 1 -- 192.168.123.104:0/487836184 shutdown_connections
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.548+0000 7faefdd4f640 1 -- 192.168.123.104:0/487836184 wait complete.
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:firewalld does not appear to be present
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:Ceph Dashboard is now available at:
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout: URL: https://vm04.local:8443/
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout: User: admin
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout: Password: 6yfhfw7jto
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:08:31.588 INFO:teuthology.orchestra.run.vm04.stdout:Saving cluster configuration to /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config directory
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: cephadm 2026-03-10T10:08:30.399315+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Bus STARTING
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: cephadm 2026-03-10T10:08:30.507067+0000 mgr.y (mgr.14150) 2 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: cephadm 2026-03-10T10:08:30.507665+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Client ('192.168.123.104', 39822) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: cluster 2026-03-10T10:08:30.604799+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.00939s)
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: audit 2026-03-10T10:08:30.606818+0000 mgr.y (mgr.14150) 4 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: cephadm 2026-03-10T10:08:30.608189+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: cephadm 2026-03-10T10:08:30.608221+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Bus STARTED
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: audit 2026-03-10T10:08:30.610577+0000 mgr.y (mgr.14150) 7 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: audit 2026-03-10T10:08:30.929579+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: audit 2026-03-10T10:08:30.931533+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: audit 2026-03-10T10:08:31.320638+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:31.868 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:31 vm04 bash[20742]: audit 2026-03-10T10:08:31.552124+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.104:0/487836184' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T10:08:31.892 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 Processor -- start
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 -- start start
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa520106ce0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fa5201072b0 con 0x7fa5201068e0
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51effd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa520106ce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51effd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa520106ce0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55864/0 (socket says 192.168.123.104:55864)
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51effd640 1 -- 192.168.123.104:0/254893470 learned_addr learned my addr 192.168.123.104:0/254893470 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51effd640 1 -- 192.168.123.104:0/254893470 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa520107ae0 con 0x7fa5201068e0
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51effd640 1 --2- 192.168.123.104:0/254893470 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa520106ce0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fa508009b80 tx=0x7fa50802f190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c9ff57b10db24bda server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51dffb640 1 --
192.168.123.104:0/254893470 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa50803c070 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51dffb640 1 -- 192.168.123.104:0/254893470 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fa508037440 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 -- 192.168.123.104:0/254893470 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 msgr2=0x7fa520106ce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 --2- 192.168.123.104:0/254893470 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa520106ce0 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7fa508009b80 tx=0x7fa50802f190 comp rx=0 tx=0).stop 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 -- 192.168.123.104:0/254893470 shutdown_connections 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 --2- 192.168.123.104:0/254893470 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa520106ce0 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 -- 192.168.123.104:0/254893470 >> 192.168.123.104:0/254893470 conn(0x7fa520102090 msgr2=0x7fa5201044b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 -- 192.168.123.104:0/254893470 shutdown_connections 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 -- 192.168.123.104:0/254893470 wait complete. 
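
The /usr/bin/ceph: stderr blocks above (and continuing below) are ordinary msgr2 session setup and teardown from the client messenger at debug level 1: banner exchange (BANNER_CONNECTING), HELLO plus learned_addr, mon_subscribe for config and monmap, then mark_down/shutdown_connections once the one-shot command returns. A minimal sketch for reproducing the same trace by hand, assuming a reachable cluster and the conf/keyring paths the bootstrap wrote above:

    # Sketch: any one-shot ceph command prints this messenger trace when the
    # ms debug level is raised on the command line (paths from this job).
    sudo ceph --debug-ms=1 -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring status \
        2>&1 | grep -E 'BANNER|HELLO|learned_addr|mark_down'
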
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 Processor -- start 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 -- start start 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa52007c460 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fa52010ad10 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51effd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa52007c460 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51effd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa52007c460 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:55868/0 (socket says 192.168.123.104:55868) 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51effd640 1 -- 192.168.123.104:0/1555448482 learned_addr learned my addr 192.168.123.104:0/1555448482 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51effd640 1 -- 192.168.123.104:0/1555448482 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa52007c9a0 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa51effd640 1 --2- 192.168.123.104:0/1555448482 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa52007c460 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7fa508035a70 tx=0x7fa508035aa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa4fffff640 1 -- 192.168.123.104:0/1555448482 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa508047070 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa4fffff640 1 -- 192.168.123.104:0/1555448482 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fa508035d30 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa4fffff640 1 -- 192.168.123.104:0/1555448482 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa50803c040 con 0x7fa5201068e0 2026-03-10T10:08:31.893 
INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.708+0000 7fa525605640 1 -- 192.168.123.104:0/1555448482 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa52007ab90 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.712+0000 7fa525605640 1 -- 192.168.123.104:0/1555448482 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fa52007afb0 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.712+0000 7fa4fffff640 1 -- 192.168.123.104:0/1555448482 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 11) ==== 50119+0+0 (secure 0 0 0) 0x7fa508030050 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.712+0000 7fa4fffff640 1 --2- 192.168.123.104:0/1555448482 >> v2:192.168.123.104:6800/632047608 conn(0x7fa4f803db30 0x7fa4f803fff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.712+0000 7fa4fffff640 1 -- 192.168.123.104:0/1555448482 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7fa508077660 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.712+0000 7fa51e7fc640 1 --2- 192.168.123.104:0/1555448482 >> v2:192.168.123.104:6800/632047608 conn(0x7fa4f803db30 0x7fa4f803fff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.712+0000 7fa525605640 1 -- 192.168.123.104:0/1555448482 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa4ec005180 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.716+0000 7fa4fffff640 1 -- 192.168.123.104:0/1555448482 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa5080342a0 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.716+0000 7fa51e7fc640 1 --2- 192.168.123.104:0/1555448482 >> v2:192.168.123.104:6800/632047608 conn(0x7fa4f803db30 0x7fa4f803fff0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7fa514009a10 tx=0x7fa514006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.844+0000 7fa525605640 1 -- 192.168.123.104:0/1555448482 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) -- 0x7fa4ec005470 con 0x7fa5201068e0 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.844+0000 7fa4fffff640 1 -- 192.168.123.104:0/1555448482 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{prefix=config-key set, key=mgr/dashboard/cluster/status}]=0 set mgr/dashboard/cluster/status 
v24) ==== 153+0+0 (secure 0 0 0) 0x7fa508033c90 con 0x7fa5201068e0
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.852+0000 7fa525605640 1 -- 192.168.123.104:0/1555448482 >> v2:192.168.123.104:6800/632047608 conn(0x7fa4f803db30 msgr2=0x7fa4f803fff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.852+0000 7fa525605640 1 --2- 192.168.123.104:0/1555448482 >> v2:192.168.123.104:6800/632047608 conn(0x7fa4f803db30 0x7fa4f803fff0 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7fa514009a10 tx=0x7fa514006eb0 comp rx=0 tx=0).stop
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.852+0000 7fa525605640 1 -- 192.168.123.104:0/1555448482 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 msgr2=0x7fa52007c460 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.852+0000 7fa525605640 1 --2- 192.168.123.104:0/1555448482 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa52007c460 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7fa508035a70 tx=0x7fa508035aa0 comp rx=0 tx=0).stop
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.852+0000 7fa525605640 1 -- 192.168.123.104:0/1555448482 shutdown_connections
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.852+0000 7fa525605640 1 --2- 192.168.123.104:0/1555448482 >> v2:192.168.123.104:6800/632047608 conn(0x7fa4f803db30 0x7fa4f803fff0 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.852+0000 7fa525605640 1 --2- 192.168.123.104:0/1555448482 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa5201068e0 0x7fa52007c460 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.852+0000 7fa525605640 1 -- 192.168.123.104:0/1555448482 >> 192.168.123.104:0/1555448482 conn(0x7fa520102090 msgr2=0x7fa520102d50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.852+0000 7fa525605640 1 -- 192.168.123.104:0/1555448482 shutdown_connections
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:/usr/bin/ceph: stderr 2026-03-10T10:08:31.852+0000 7fa525605640 1 -- 192.168.123.104:0/1555448482 wait complete.
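
The teardown above closes out the dashboard bootstrap step: the one-time admin credentials were printed earlier (URL https://vm04.local:8443/, user admin), the CLI asked the mon for mgr/dashboard/ssl_server_port to compose that URL, and a status blob was stored under the mgr/dashboard/cluster/status config-key. A hedged sketch for locating the live dashboard endpoint later without scraping this log:

    # Sketch: standard ceph CLI queries, run e.g. inside "cephadm shell".
    ceph mgr services                                  # JSON map, e.g. {"dashboard": "https://vm04.local:8443/"}
    ceph config get mgr mgr/dashboard/ssl_server_port  # same key queried in the log above
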
2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout: ceph telemetry on 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:08:31.893 INFO:teuthology.orchestra.run.vm04.stdout:For more information see: 2026-03-10T10:08:31.894 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:08:31.894 INFO:teuthology.orchestra.run.vm04.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T10:08:31.894 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:08:31.894 INFO:teuthology.orchestra.run.vm04.stdout:Bootstrap complete. 2026-03-10T10:08:31.911 INFO:tasks.cephadm:Fetching config... 2026-03-10T10:08:31.911 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:08:31.911 DEBUG:teuthology.orchestra.run.vm04:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T10:08:31.914 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T10:08:31.914 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:08:31.914 DEBUG:teuthology.orchestra.run.vm04:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T10:08:31.957 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-10T10:08:31.957 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:08:31.957 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.a/keyring of=/dev/stdout 2026-03-10T10:08:32.005 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T10:08:32.005 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:08:32.005 DEBUG:teuthology.orchestra.run.vm04:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T10:08:32.049 INFO:tasks.cephadm:Installing pub ssh key for root users... 
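
The fetches above pull each bootstrap artifact (ceph.conf, client.admin keyring, mon keyring, cluster SSH public key) back to the test runner by streaming the file through dd to stdout over the SSH run channel, with sudo where the file is root-only. The equivalent by hand, a sketch using this job's hostname and fsid:

    # Sketch: fetch a root-only bootstrap artifact the same way the task does.
    ssh ubuntu@vm04.local \
        'sudo dd if=/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.a/keyring of=/dev/stdout' \
        > mon.a.keyring
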
2026-03-10T10:08:32.049 DEBUG:teuthology.orchestra.run.vm04:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8HDqv9Pg4TqDxcaMqs5f9pVfocR4ydPGrFWw+qTBBB ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T10:08:32.100 INFO:teuthology.orchestra.run.vm04.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8HDqv9Pg4TqDxcaMqs5f9pVfocR4ydPGrFWw+qTBBB ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:08:32.104 DEBUG:teuthology.orchestra.run.vm07:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8HDqv9Pg4TqDxcaMqs5f9pVfocR4ydPGrFWw+qTBBB ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys
2026-03-10T10:08:32.116 INFO:teuthology.orchestra.run.vm07.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8HDqv9Pg4TqDxcaMqs5f9pVfocR4ydPGrFWw+qTBBB ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:08:32.121 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-10T10:08:32.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:32 vm04 bash[20742]: audit 2026-03-10T10:08:30.893168+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:32.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:32 vm04 bash[20742]: audit 2026-03-10T10:08:31.170677+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:32.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:32 vm04 bash[20742]: audit 2026-03-10T10:08:31.850862+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.104:0/1555448482' entity='client.admin'
2026-03-10T10:08:32.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:32 vm04 bash[20742]: cluster 2026-03-10T10:08:32.325607+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e12: y(active, since 2s)
2026-03-10T10:08:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:34 vm04 bash[20742]: audit 2026-03-10T10:08:33.889238+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:34 vm04 bash[20742]: audit 2026-03-10T10:08:34.407558+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:35.782 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.a/config
2026-03-10T10:08:35.922 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 -- 192.168.123.104:0/631733489 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49f01045f0 msgr2=0x7f49f01049f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:35.922 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 --2- 192.168.123.104:0/631733489 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49f01045f0 0x7f49f01049f0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f49e4009a00 tx=0x7f49e402f310 comp rx=0 tx=0).stop
2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 -- 192.168.123.104:0/631733489 shutdown_connections
2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 --2- 192.168.123.104:0/631733489 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49f01045f0 0x7f49f01049f0 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 -- 192.168.123.104:0/631733489 >> 192.168.123.104:0/631733489 conn(0x7f49f00ffda0 msgr2=0x7f49f01021c0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 -- 192.168.123.104:0/631733489 shutdown_connections
2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 -- 192.168.123.104:0/631733489 wait complete.
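
Because cephadm manages daemons over SSH as root, the task appends the cluster public key (the ceph.pub fetched above) to root's authorized_keys on both targets before any host is added to the orchestrator. The idiom, as run above on vm04 and vm07 (key abbreviated here):

    # Sketch: authorize the cephadm SSH key for root on a managed host.
    sudo install -d -m 0700 /root/.ssh
    echo 'ssh-ed25519 AAAA...TBBB ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839' \
        | sudo tee -a /root/.ssh/authorized_keys
    sudo chmod 0600 /root/.ssh/authorized_keys
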
2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 Processor -- start 2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 -- start start 2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49f01045f0 0x7f49f019b250 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.916+0000 7f49f7822640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f49f0109c30 con 0x7f49f01045f0 2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.920+0000 7f49f5597640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49f01045f0 0x7f49f019b250 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.920+0000 7f49f5597640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49f01045f0 0x7f49f019b250 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:59510/0 (socket says 192.168.123.104:59510) 2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.920+0000 7f49f5597640 1 -- 192.168.123.104:0/2021177804 learned_addr learned my addr 192.168.123.104:0/2021177804 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.920+0000 7f49f5597640 1 -- 192.168.123.104:0/2021177804 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f49f019b790 con 0x7f49f01045f0 2026-03-10T10:08:35.923 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.920+0000 7f49f5597640 1 --2- 192.168.123.104:0/2021177804 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49f01045f0 0x7f49f019b250 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f49e402fa40 tx=0x7f49e4004430 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:35.924 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.920+0000 7f49de7fc640 1 -- 192.168.123.104:0/2021177804 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f49e4046070 con 0x7f49f01045f0 2026-03-10T10:08:35.924 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.920+0000 7f49f7822640 1 -- 192.168.123.104:0/2021177804 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f49f019ba20 con 0x7f49f01045f0 2026-03-10T10:08:35.924 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.920+0000 7f49f7822640 1 -- 192.168.123.104:0/2021177804 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f49f019e710 con 0x7f49f01045f0 2026-03-10T10:08:35.924 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.920+0000 7f49f7822640 1 -- 192.168.123.104:0/2021177804 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f49f010a3d0 con 0x7f49f01045f0 
2026-03-10T10:08:35.927 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.924+0000 7f49de7fc640 1 -- 192.168.123.104:0/2021177804 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f49e40045c0 con 0x7f49f01045f0
2026-03-10T10:08:35.927 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.924+0000 7f49de7fc640 1 -- 192.168.123.104:0/2021177804 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f49e403d070 con 0x7f49f01045f0
2026-03-10T10:08:35.927 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.924+0000 7f49de7fc640 1 -- 192.168.123.104:0/2021177804 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f49e404b440 con 0x7f49f01045f0
2026-03-10T10:08:35.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.924+0000 7f49de7fc640 1 --2- 192.168.123.104:0/2021177804 >> v2:192.168.123.104:6800/632047608 conn(0x7f49c003da20 0x7f49c003fee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:08:35.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.924+0000 7f49f4d96640 1 --2- 192.168.123.104:0/2021177804 >> v2:192.168.123.104:6800/632047608 conn(0x7f49c003da20 0x7f49c003fee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:08:35.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.924+0000 7f49de7fc640 1 -- 192.168.123.104:0/2021177804 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7f49e4077980 con 0x7f49f01045f0
2026-03-10T10:08:35.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.924+0000 7f49f4d96640 1 --2- 192.168.123.104:0/2021177804 >> v2:192.168.123.104:6800/632047608 conn(0x7f49c003da20 0x7f49c003fee0 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f49e00099c0 tx=0x7f49e0006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:08:35.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:35.924+0000 7f49de7fc640 1 -- 192.168.123.104:0/2021177804 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f49e407b2a0 con 0x7f49f01045f0
2026-03-10T10:08:36.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.008+0000 7f49f7822640 1 -- 192.168.123.104:0/2021177804 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command([{prefix=config set, name=mgr/cephadm/allow_ptrace}] v 0) -- 0x7f49f0194b40 con 0x7f49f01045f0
2026-03-10T10:08:36.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.016+0000 7f49de7fc640 1 -- 192.168.123.104:0/2021177804 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{prefix=config set, name=mgr/cephadm/allow_ptrace}]=0 v9) ==== 125+0+0 (secure 0 0 0) 0x7f49e40071c0 con 0x7f49f01045f0
2026-03-10T10:08:36.025 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.020+0000 7f49f7822640 1 -- 192.168.123.104:0/2021177804 >> v2:192.168.123.104:6800/632047608 conn(0x7f49c003da20 msgr2=0x7f49c003fee0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:36.025 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.020+0000 7f49f7822640 1 --2- 192.168.123.104:0/2021177804 >> v2:192.168.123.104:6800/632047608 conn(0x7f49c003da20 0x7f49c003fee0
secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f49e00099c0 tx=0x7f49e0006eb0 comp rx=0 tx=0).stop
2026-03-10T10:08:36.025 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.020+0000 7f49f7822640 1 -- 192.168.123.104:0/2021177804 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49f01045f0 msgr2=0x7f49f019b250 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:36.025 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.020+0000 7f49f7822640 1 --2- 192.168.123.104:0/2021177804 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49f01045f0 0x7f49f019b250 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f49e402fa40 tx=0x7f49e4004430 comp rx=0 tx=0).stop
2026-03-10T10:08:36.025 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.020+0000 7f49f7822640 1 -- 192.168.123.104:0/2021177804 shutdown_connections
2026-03-10T10:08:36.026 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.020+0000 7f49f7822640 1 --2- 192.168.123.104:0/2021177804 >> v2:192.168.123.104:6800/632047608 conn(0x7f49c003da20 0x7f49c003fee0 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:36.026 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.020+0000 7f49f7822640 1 --2- 192.168.123.104:0/2021177804 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49f01045f0 0x7f49f019b250 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:36.026 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.020+0000 7f49f7822640 1 -- 192.168.123.104:0/2021177804 >> 192.168.123.104:0/2021177804 conn(0x7f49f00ffda0 msgr2=0x7f49f01006c0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:36.026 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.020+0000 7f49f7822640 1 -- 192.168.123.104:0/2021177804 shutdown_connections
2026-03-10T10:08:36.026 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:36.020+0000 7f49f7822640 1 -- 192.168.123.104:0/2021177804 wait complete.
2026-03-10T10:08:36.073 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-10T10:08:36.073 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-10T10:08:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:36 vm04 bash[20742]: cluster 2026-03-10T10:08:35.893854+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s)
2026-03-10T10:08:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:36 vm04 bash[20742]: audit 2026-03-10T10:08:36.019055+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.104:0/2021177804' entity='client.admin'
2026-03-10T10:08:40.703 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.a/config
2026-03-10T10:08:40.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 -- 192.168.123.104:0/1242286984 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faad4102ed0 msgr2=0x7faad41052c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:40.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 --2- 192.168.123.104:0/1242286984 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faad4102ed0 0x7faad41052c0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7faac80099b0 tx=0x7faac802f2b0 comp rx=0 tx=0).stop
2026-03-10T10:08:40.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 -- 192.168.123.104:0/1242286984 shutdown_connections
2026-03-10T10:08:40.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 --2- 192.168.123.104:0/1242286984 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faad4102ed0 0x7faad41052c0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:40.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 -- 192.168.123.104:0/1242286984 >> 192.168.123.104:0/1242286984 conn(0x7faad40fc9d0 msgr2=0x7faad40fedf0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:40.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 -- 192.168.123.104:0/1242286984 shutdown_connections
2026-03-10T10:08:40.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 -- 192.168.123.104:0/1242286984 wait complete.
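
The "ceph orch client-keyring set client.admin '*' --mode 0755" call above asks the orchestrator to keep /etc/ceph/ceph.client.admin.keyring present with that mode on every host matching the '*' placement; the mgr's "Updating vm04:/etc/ceph/..." journal lines further down are that reconciliation running. A hedged way to confirm what is being managed:

    # Sketch: list the client keyrings the orchestrator maintains.
    ceph orch client-keyring ls
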
2026-03-10T10:08:40.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 Processor -- start 2026-03-10T10:08:40.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 -- start start 2026-03-10T10:08:40.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faad4102ed0 0x7faad4196d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:40.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7faad4102bf0 con 0x7faad4102ed0 2026-03-10T10:08:40.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faad90bc640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faad4102ed0 0x7faad4196d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:40.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faad90bc640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faad4102ed0 0x7faad4196d20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:59530/0 (socket says 192.168.123.104:59530) 2026-03-10T10:08:40.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faad90bc640 1 -- 192.168.123.104:0/532496200 learned_addr learned my addr 192.168.123.104:0/532496200 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:40.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faad90bc640 1 -- 192.168.123.104:0/532496200 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7faad4197260 con 0x7faad4102ed0 2026-03-10T10:08:40.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faad90bc640 1 --2- 192.168.123.104:0/532496200 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faad4102ed0 0x7faad4196d20 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7faac80042c0 tx=0x7faac80042f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:40.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faac27fc640 1 -- 192.168.123.104:0/532496200 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7faac8047070 con 0x7faad4102ed0 2026-03-10T10:08:40.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faac27fc640 1 -- 192.168.123.104:0/532496200 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7faac803e070 con 0x7faad4102ed0 2026-03-10T10:08:40.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 -- 192.168.123.104:0/532496200 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7faad41974f0 con 0x7faad4102ed0 2026-03-10T10:08:40.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faac27fc640 1 -- 192.168.123.104:0/532496200 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7faac80428c0 con 0x7faad4102ed0 2026-03-10T10:08:40.862 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.856+0000 7faadb347640 1 -- 192.168.123.104:0/532496200 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7faad419a1d0 con 0x7faad4102ed0 2026-03-10T10:08:40.866 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.860+0000 7faac27fc640 1 -- 192.168.123.104:0/532496200 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7faac80413c0 con 0x7faad4102ed0 2026-03-10T10:08:40.866 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.860+0000 7faac27fc640 1 --2- 192.168.123.104:0/532496200 >> v2:192.168.123.104:6800/632047608 conn(0x7faaac03d980 0x7faaac03fe40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:40.866 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.860+0000 7faac27fc640 1 -- 192.168.123.104:0/532496200 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7faac807c600 con 0x7faad4102ed0 2026-03-10T10:08:40.866 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.860+0000 7faadb347640 1 -- 192.168.123.104:0/532496200 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7faad4105e90 con 0x7faad4102ed0 2026-03-10T10:08:40.866 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.860+0000 7faad88bb640 1 --2- 192.168.123.104:0/532496200 >> v2:192.168.123.104:6800/632047608 conn(0x7faaac03d980 0x7faaac03fe40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:40.866 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.860+0000 7faad88bb640 1 --2- 192.168.123.104:0/532496200 >> v2:192.168.123.104:6800/632047608 conn(0x7faaac03d980 0x7faaac03fe40 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7faac4009a10 tx=0x7faac4006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:40.866 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.860+0000 7faac27fc640 1 -- 192.168.123.104:0/532496200 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7faac8038c60 con 0x7faad4102ed0 2026-03-10T10:08:40.957 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.952+0000 7faadb347640 1 -- 192.168.123.104:0/532496200 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}) -- 0x7faad41904f0 con 0x7faaac03d980 2026-03-10T10:08:40.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.956+0000 7faac27fc640 1 -- 192.168.123.104:0/532496200 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+0 (secure 0 0 0) 0x7faad41904f0 con 0x7faaac03d980 2026-03-10T10:08:40.967 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.964+0000 7faadb347640 1 -- 192.168.123.104:0/532496200 >> v2:192.168.123.104:6800/632047608 conn(0x7faaac03d980 msgr2=0x7faaac03fe40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:40.967 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.964+0000 7faadb347640 1 --2- 192.168.123.104:0/532496200 >> v2:192.168.123.104:6800/632047608 conn(0x7faaac03d980 
0x7faaac03fe40 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7faac4009a10 tx=0x7faac4006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:40.967 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.964+0000 7faadb347640 1 -- 192.168.123.104:0/532496200 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faad4102ed0 msgr2=0x7faad4196d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:40.967 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.964+0000 7faadb347640 1 --2- 192.168.123.104:0/532496200 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faad4102ed0 0x7faad4196d20 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7faac80042c0 tx=0x7faac80042f0 comp rx=0 tx=0).stop 2026-03-10T10:08:40.970 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.964+0000 7faadb347640 1 -- 192.168.123.104:0/532496200 shutdown_connections 2026-03-10T10:08:40.970 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.964+0000 7faadb347640 1 --2- 192.168.123.104:0/532496200 >> v2:192.168.123.104:6800/632047608 conn(0x7faaac03d980 0x7faaac03fe40 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:40.970 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.964+0000 7faadb347640 1 --2- 192.168.123.104:0/532496200 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faad4102ed0 0x7faad4196d20 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:40.970 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.964+0000 7faadb347640 1 -- 192.168.123.104:0/532496200 >> 192.168.123.104:0/532496200 conn(0x7faad40fc9d0 msgr2=0x7faad40fd290 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:40.970 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.964+0000 7faadb347640 1 -- 192.168.123.104:0/532496200 shutdown_connections 2026-03-10T10:08:40.973 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:40.968+0000 7faadb347640 1 -- 192.168.123.104:0/532496200 wait complete. 2026-03-10T10:08:41.057 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm07 2026-03-10T10:08:41.058 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T10:08:41.058 DEBUG:teuthology.orchestra.run.vm07:> dd of=/etc/ceph/ceph.conf 2026-03-10T10:08:41.061 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T10:08:41.061 DEBUG:teuthology.orchestra.run.vm07:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:08:41.106 INFO:tasks.cephadm:Adding host vm07 to orchestrator... 
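
With the conf and admin keyring pre-staged on vm07 by the dd writes above, the task can register the second node. "ceph orch host add" also accepts an optional address and labels when name resolution or scheduling needs them, though the plain form below is what this job runs. A sketch with the optional arguments spelled out (the address and label values here are illustrative, not from this run):

    # Sketch: register a host, optionally pinning its address and labels.
    ceph orch host add vm07 192.168.123.107 --labels _admin
    ceph orch host ls   # confirm vm04 and vm07 are both present
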
2026-03-10T10:08:41.106 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch host add vm07
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:40.225084+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:40.227027+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:40.227528+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:40.229630+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:40.234256+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:40.236401+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:40.960990+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:40.961494+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:40.962304+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:40.962810+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:41.103892+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:41.106797+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:41.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:41 vm04 bash[20742]: audit 2026-03-10T10:08:41.110263+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:42 vm04 bash[20742]: audit 2026-03-10T10:08:40.958093+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:42 vm04 bash[20742]: cephadm 2026-03-10T10:08:40.963376+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-10T10:08:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:42 vm04 bash[20742]: cephadm 2026-03-10T10:08:41.000767+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:08:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:42 vm04 bash[20742]: cephadm 2026-03-10T10:08:41.034303+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring
2026-03-10T10:08:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:42 vm04 bash[20742]: cephadm 2026-03-10T10:08:41.070244+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring
2026-03-10T10:08:45.717 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.a/config
2026-03-10T10:08:45.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 -- 192.168.123.104:0/815580869 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcfe0100620 msgr2=0x7fcfe0100a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:45.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 --2- 192.168.123.104:0/815580869 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcfe0100620 0x7fcfe0100a00 secure :-1 s=READY pgs=80 cs=0 l=1 rev1=1 crypto rx=0x7fcfd40099b0 tx=0x7fcfd402f2b0 comp rx=0 tx=0).stop
2026-03-10T10:08:45.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 -- 192.168.123.104:0/815580869 shutdown_connections
2026-03-10T10:08:45.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 --2- 192.168.123.104:0/815580869 >>
[v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcfe0100620 0x7fcfe0100a00 unknown :-1 s=CLOSED pgs=80 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:45.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 -- 192.168.123.104:0/815580869 >> 192.168.123.104:0/815580869 conn(0x7fcfe00fc1d0 msgr2=0x7fcfe00fe5f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:45.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 -- 192.168.123.104:0/815580869 shutdown_connections 2026-03-10T10:08:45.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 -- 192.168.123.104:0/815580869 wait complete. 2026-03-10T10:08:45.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 Processor -- start 2026-03-10T10:08:45.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 -- start start 2026-03-10T10:08:45.855 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcfe0100620 0x7fcfe019b600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:45.855 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfe4a44640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fcfe010bec0 con 0x7fcfe0100620 2026-03-10T10:08:45.855 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfde575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcfe0100620 0x7fcfe019b600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:45.855 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfde575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcfe0100620 0x7fcfe019b600 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:48704/0 (socket says 192.168.123.104:48704) 2026-03-10T10:08:45.855 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.848+0000 7fcfde575640 1 -- 192.168.123.104:0/2211981654 learned_addr learned my addr 192.168.123.104:0/2211981654 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:45.855 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfde575640 1 -- 192.168.123.104:0/2211981654 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fcfe019bb40 con 0x7fcfe0100620 2026-03-10T10:08:45.855 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfde575640 1 --2- 192.168.123.104:0/2211981654 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcfe0100620 0x7fcfe019b600 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fcfd40042c0 tx=0x7fcfd40042f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:45.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfc77fe640 1 -- 192.168.123.104:0/2211981654 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fcfd4047070 con 0x7fcfe0100620 2026-03-10T10:08:45.857 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfc77fe640 1 -- 192.168.123.104:0/2211981654 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fcfd403e070 con 0x7fcfe0100620 2026-03-10T10:08:45.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfc77fe640 1 -- 192.168.123.104:0/2211981654 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fcfd4042710 con 0x7fcfe0100620 2026-03-10T10:08:45.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfe4a44640 1 -- 192.168.123.104:0/2211981654 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fcfe019bdd0 con 0x7fcfe0100620 2026-03-10T10:08:45.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfe4a44640 1 -- 192.168.123.104:0/2211981654 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fcfe019fed0 con 0x7fcfe0100620 2026-03-10T10:08:45.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfc77fe640 1 -- 192.168.123.104:0/2211981654 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7fcfd404c440 con 0x7fcfe0100620 2026-03-10T10:08:45.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfc77fe640 1 --2- 192.168.123.104:0/2211981654 >> v2:192.168.123.104:6800/632047608 conn(0x7fcfbc03dcf0 0x7fcfbc0401b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:45.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfc77fe640 1 -- 192.168.123.104:0/2211981654 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7fcfd4077c00 con 0x7fcfe0100620 2026-03-10T10:08:45.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfddd74640 1 --2- 192.168.123.104:0/2211981654 >> v2:192.168.123.104:6800/632047608 conn(0x7fcfbc03dcf0 0x7fcfbc0401b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:45.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.852+0000 7fcfe4a44640 1 -- 192.168.123.104:0/2211981654 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fcfac005180 con 0x7fcfe0100620 2026-03-10T10:08:45.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.856+0000 7fcfddd74640 1 --2- 192.168.123.104:0/2211981654 >> v2:192.168.123.104:6800/632047608 conn(0x7fcfbc03dcf0 0x7fcfbc0401b0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fcfc8009a10 tx=0x7fcfc8006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:45.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.856+0000 7fcfc77fe640 1 -- 192.168.123.104:0/2211981654 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fcfd402fe20 con 0x7fcfe0100620 2026-03-10T10:08:45.960 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:45.956+0000 7fcfe4a44640 1 -- 192.168.123.104:0/2211981654 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch host add", "hostname": "vm07", "target": ["mon-mgr", ""]}) -- 0x7fcfac002bf0 con 0x7fcfbc03dcf0 
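The block above is the cephadm task driving "ceph orch host add" from inside a one-shot cephadm shell container on vm04; the mgr audit entries and messenger chatter around it are the normal overhead of each CLI call. A minimal sketch of the same step done by hand on an admin node, assuming a single cluster so cephadm can infer the fsid; the host name and address here are hypothetical placeholders, not from this run:

  # Hedged sketch: enroll an additional host in a cephadm-managed cluster.
  # "vm08" / 192.168.123.108 are illustrative, not part of this job.
  sudo cephadm shell -- ceph orch host add vm08 192.168.123.108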
2026-03-10T10:08:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:46 vm04 bash[20742]: audit 2026-03-10T10:08:45.961179+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:08:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:46 vm04 bash[20742]: cephadm 2026-03-10T10:08:46.480872+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm07
2026-03-10T10:08:47.690 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.684+0000 7fcfc77fe640 1 -- 192.168.123.104:0/2211981654 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+46 (secure 0 0 0) 0x7fcfac002bf0 con 0x7fcfbc03dcf0
2026-03-10T10:08:47.691 INFO:teuthology.orchestra.run.vm04.stdout:Added host 'vm07' with addr '192.168.123.107'
2026-03-10T10:08:47.692 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.688+0000 7fcfe4a44640 1 -- 192.168.123.104:0/2211981654 >> v2:192.168.123.104:6800/632047608 conn(0x7fcfbc03dcf0 msgr2=0x7fcfbc0401b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:47.692 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.688+0000 7fcfe4a44640 1 --2- 192.168.123.104:0/2211981654 >> v2:192.168.123.104:6800/632047608 conn(0x7fcfbc03dcf0 0x7fcfbc0401b0 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7fcfc8009a10 tx=0x7fcfc8006eb0 comp rx=0 tx=0).stop
2026-03-10T10:08:47.692 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.688+0000 7fcfe4a44640 1 -- 192.168.123.104:0/2211981654 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcfe0100620 msgr2=0x7fcfe019b600 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:47.692 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.688+0000 7fcfe4a44640 1 --2- 192.168.123.104:0/2211981654 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcfe0100620 0x7fcfe019b600 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7fcfd40042c0 tx=0x7fcfd40042f0 comp rx=0 tx=0).stop
2026-03-10T10:08:47.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.688+0000 7fcfe4a44640 1 -- 192.168.123.104:0/2211981654 shutdown_connections
2026-03-10T10:08:47.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.688+0000 7fcfe4a44640 1 --2- 192.168.123.104:0/2211981654 >> v2:192.168.123.104:6800/632047608 conn(0x7fcfbc03dcf0 0x7fcfbc0401b0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:47.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.688+0000 7fcfe4a44640 1 --2- 192.168.123.104:0/2211981654 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcfe0100620 0x7fcfe019b600 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:47.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.688+0000 7fcfe4a44640 1 -- 192.168.123.104:0/2211981654 >> 192.168.123.104:0/2211981654 conn(0x7fcfe00fc1d0 msgr2=0x7fcfe01020b0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:47.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.688+0000 7fcfe4a44640 1 -- 192.168.123.104:0/2211981654 shutdown_connections
2026-03-10T10:08:47.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:47.688+0000 7fcfe4a44640 1 -- 192.168.123.104:0/2211981654 wait complete.
2026-03-10T10:08:47.744 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch host ls --format=json
2026-03-10T10:08:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:48 vm04 bash[20742]: audit 2026-03-10T10:08:47.689623+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:48 vm04 bash[20742]: cephadm 2026-03-10T10:08:47.689956+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm07
2026-03-10T10:08:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:48 vm04 bash[20742]: audit 2026-03-10T10:08:47.690134+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:08:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:48 vm04 bash[20742]: audit 2026-03-10T10:08:47.970252+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:50 vm04 bash[20742]: audit 2026-03-10T10:08:49.238117+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:50.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:50 vm04 bash[20742]: cluster 2026-03-10T10:08:49.605026+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:08:50.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:50 vm04 bash[20742]: audit 2026-03-10T10:08:49.768909+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:51 vm04 bash[20742]: cluster 2026-03-10T10:08:51.605202+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:08:52.356 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.a/config
2026-03-10T10:08:52.515 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.508+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/3314866821 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3fec10a660 msgr2=0x7f3fec10aa40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:52.515 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.508+0000 7f3ff0ebe640 1 --2- 192.168.123.104:0/3314866821 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3fec10a660 0x7f3fec10aa40 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f3fe0009a00 tx=0x7f3fe002f310 comp rx=0 tx=0).stop
2026-03-10T10:08:52.515 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.508+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/3314866821 shutdown_connections
2026-03-10T10:08:52.515 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.508+0000 7f3ff0ebe640 1 --2- 192.168.123.104:0/3314866821 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3fec10a660 0x7f3fec10aa40 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:52.515 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.508+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/3314866821 >> 192.168.123.104:0/3314866821 conn(0x7f3fec100250 msgr2=0x7f3fec102670 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:52.515 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.512+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/3314866821 shutdown_connections
2026-03-10T10:08:52.515 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.512+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/3314866821 wait complete.
2026-03-10T10:08:52.516 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.512+0000 7f3ff0ebe640 1 Processor -- start 2026-03-10T10:08:52.516 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.512+0000 7f3ff0ebe640 1 -- start start 2026-03-10T10:08:52.518 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.512+0000 7f3ff0ebe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3fec10a660 0x7f3fec19b630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:52.518 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.512+0000 7f3ff0ebe640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f3fec10e1c0 con 0x7f3fec10a660 2026-03-10T10:08:52.518 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.512+0000 7f3fea575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3fec10a660 0x7f3fec19b630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:52.518 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.512+0000 7f3fea575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3fec10a660 0x7f3fec19b630 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:48730/0 (socket says 192.168.123.104:48730) 2026-03-10T10:08:52.518 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.512+0000 7f3fea575640 1 -- 192.168.123.104:0/2203518551 learned_addr learned my addr 192.168.123.104:0/2203518551 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:52.519 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.512+0000 7f3fea575640 1 -- 192.168.123.104:0/2203518551 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f3fec19bb70 con 0x7f3fec10a660 2026-03-10T10:08:52.520 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.516+0000 7f3fea575640 1 --2- 192.168.123.104:0/2203518551 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3fec10a660 0x7f3fec19b630 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f3fe0009b30 tx=0x7f3fe0002a50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:52.520 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.516+0000 7f3fd37fe640 1 -- 192.168.123.104:0/2203518551 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3fe0002cc0 con 0x7f3fec10a660 2026-03-10T10:08:52.520 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.516+0000 7f3fd37fe640 1 -- 192.168.123.104:0/2203518551 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f3fe0046070 con 0x7f3fec10a660 2026-03-10T10:08:52.521 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.516+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/2203518551 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f3fec19be00 con 0x7f3fec10a660 2026-03-10T10:08:52.521 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.516+0000 7f3fd37fe640 1 -- 192.168.123.104:0/2203518551 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f3fe0041670 con 0x7f3fec10a660 2026-03-10T10:08:52.522 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.516+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/2203518551 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f3fec19ff00 con 0x7f3fec10a660 2026-03-10T10:08:52.523 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.516+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/2203518551 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f3fec105cd0 con 0x7f3fec10a660 2026-03-10T10:08:52.523 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.516+0000 7f3fd37fe640 1 -- 192.168.123.104:0/2203518551 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f3fe0038590 con 0x7f3fec10a660 2026-03-10T10:08:52.526 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.520+0000 7f3fd37fe640 1 --2- 192.168.123.104:0/2203518551 >> v2:192.168.123.104:6800/632047608 conn(0x7f3fc803d9d0 0x7f3fc803fe90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:52.526 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.520+0000 7f3fd37fe640 1 -- 192.168.123.104:0/2203518551 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7f3fe007c050 con 0x7f3fec10a660 2026-03-10T10:08:52.526 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.520+0000 7f3fe9d74640 1 --2- 192.168.123.104:0/2203518551 >> v2:192.168.123.104:6800/632047608 conn(0x7f3fc803d9d0 0x7f3fc803fe90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:52.526 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.520+0000 7f3fe9d74640 1 --2- 192.168.123.104:0/2203518551 >> v2:192.168.123.104:6800/632047608 conn(0x7f3fc803d9d0 0x7f3fc803fe90 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f3fd40099c0 tx=0x7f3fd4006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:52.527 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.520+0000 7f3fd37fe640 1 -- 192.168.123.104:0/2203518551 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3fe0040360 con 0x7f3fec10a660 2026-03-10T10:08:52.647 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.640+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/2203518551 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f3fec10aac0 con 0x7f3fc803d9d0 2026-03-10T10:08:52.647 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3fd37fe640 1 -- 192.168.123.104:0/2203518551 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+155 (secure 0 0 0) 0x7f3fec10aac0 con 0x7f3fc803d9d0 2026-03-10T10:08:52.648 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:08:52.648 INFO:teuthology.orchestra.run.vm04.stdout:[{"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}, {"addr": "192.168.123.107", "hostname": "vm07", "labels": [], "status": ""}] 2026-03-10T10:08:52.650 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/2203518551 >> v2:192.168.123.104:6800/632047608 conn(0x7f3fc803d9d0 msgr2=0x7f3fc803fe90 secure :-1 
s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:52.650 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3ff0ebe640 1 --2- 192.168.123.104:0/2203518551 >> v2:192.168.123.104:6800/632047608 conn(0x7f3fc803d9d0 0x7f3fc803fe90 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f3fd40099c0 tx=0x7f3fd4006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:52.650 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/2203518551 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3fec10a660 msgr2=0x7f3fec19b630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:52.650 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3ff0ebe640 1 --2- 192.168.123.104:0/2203518551 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3fec10a660 0x7f3fec19b630 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7f3fe0009b30 tx=0x7f3fe0002a50 comp rx=0 tx=0).stop 2026-03-10T10:08:52.650 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/2203518551 shutdown_connections 2026-03-10T10:08:52.650 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3ff0ebe640 1 --2- 192.168.123.104:0/2203518551 >> v2:192.168.123.104:6800/632047608 conn(0x7f3fc803d9d0 0x7f3fc803fe90 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:52.651 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3ff0ebe640 1 --2- 192.168.123.104:0/2203518551 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f3fec10a660 0x7f3fec19b630 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:52.651 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/2203518551 >> 192.168.123.104:0/2203518551 conn(0x7f3fec100250 msgr2=0x7f3fec100af0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:52.651 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/2203518551 shutdown_connections 2026-03-10T10:08:52.651 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:52.644+0000 7f3ff0ebe640 1 -- 192.168.123.104:0/2203518551 wait complete. 
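Since "ceph orch host ls --format=json" emits plain JSON (see the two-host list above), it can be post-processed directly. A minimal sketch, assuming jq is installed on the node (this job does not install it):

  # Hedged sketch: extract just the hostnames from the host list above.
  sudo cephadm shell -- ceph orch host ls --format=json | jq -r '.[].hostname'
  # For the cluster state logged above this would print:
  #   vm04
  #   vm07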
2026-03-10T10:08:52.709 INFO:tasks.cephadm:Setting crush tunables to default
2026-03-10T10:08:52.709 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd crush tunables default
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.518303+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.520464+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.523500+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.528791+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.529349+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.529990+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.530427+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: cephadm 2026-03-10T10:08:52.531018+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: cephadm 2026-03-10T10:08:52.564357+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: cephadm 2026-03-10T10:08:52.594545+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: cephadm 2026-03-10T10:08:52.626942+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.648005+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.665767+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.667879+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:53.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:53 vm04 bash[20742]: audit 2026-03-10T10:08:52.672453+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:08:54.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:54 vm04 bash[20742]: cluster 2026-03-10T10:08:53.605374+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:08:56.365 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.a/config
2026-03-10T10:08:56.510 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.504+0000 7fd7847a5640 1 -- 192.168.123.104:0/1621647778 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd77c10a660 msgr2=0x7fd77c10aa40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:56.510 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.504+0000 7fd7847a5640 1 --2- 192.168.123.104:0/1621647778 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd77c10a660 0x7fd77c10aa40 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7fd76c009a00 tx=0x7fd76c02f310 comp rx=0 tx=0).stop
2026-03-10T10:08:56.510 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.504+0000 7fd7847a5640 1 -- 192.168.123.104:0/1621647778 shutdown_connections
2026-03-10T10:08:56.510 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.504+0000 7fd7847a5640 1 --2- 192.168.123.104:0/1621647778 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd77c10a660 0x7fd77c10aa40 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:56.510 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.504+0000 7fd7847a5640 1 -- 192.168.123.104:0/1621647778 >> 192.168.123.104:0/1621647778 conn(0x7fd77c100250 msgr2=0x7fd77c102670 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:56.510 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.504+0000 7fd7847a5640 1 -- 192.168.123.104:0/1621647778 shutdown_connections
2026-03-10T10:08:56.510 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.504+0000 7fd7847a5640 1 -- 192.168.123.104:0/1621647778 wait complete.
2026-03-10T10:08:56.511 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.504+0000 7fd7847a5640 1 Processor -- start 2026-03-10T10:08:56.511 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.504+0000 7fd7847a5640 1 -- start start 2026-03-10T10:08:56.511 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd7847a5640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd77c10a660 0x7fd77c115290 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:56.511 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd7847a5640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fd77c0782e0 con 0x7fd77c10a660 2026-03-10T10:08:56.511 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd78251a640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd77c10a660 0x7fd77c115290 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:56.511 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd78251a640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd77c10a660 0x7fd77c115290 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:45598/0 (socket says 192.168.123.104:45598) 2026-03-10T10:08:56.511 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd78251a640 1 -- 192.168.123.104:0/438248174 learned_addr learned my addr 192.168.123.104:0/438248174 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:08:56.512 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd78251a640 1 -- 192.168.123.104:0/438248174 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd77c1157d0 con 0x7fd77c10a660 2026-03-10T10:08:56.512 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd78251a640 1 --2- 192.168.123.104:0/438248174 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd77c10a660 0x7fd77c115290 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7fd76c009b30 tx=0x7fd76c002c80 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:56.512 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd76b7fe640 1 -- 192.168.123.104:0/438248174 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd76c0045b0 con 0x7fd77c10a660 2026-03-10T10:08:56.512 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd76b7fe640 1 -- 192.168.123.104:0/438248174 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fd76c046070 con 0x7fd77c10a660 2026-03-10T10:08:56.513 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd76b7fe640 1 -- 192.168.123.104:0/438248174 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd76c0416b0 con 0x7fd77c10a660 2026-03-10T10:08:56.513 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd7847a5640 1 -- 192.168.123.104:0/438248174 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd77c115a60 con 0x7fd77c10a660 2026-03-10T10:08:56.513 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd7847a5640 1 -- 192.168.123.104:0/438248174 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd77c110570 con 0x7fd77c10a660 2026-03-10T10:08:56.514 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd76b7fe640 1 -- 192.168.123.104:0/438248174 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7fd76c038470 con 0x7fd77c10a660 2026-03-10T10:08:56.514 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd7847a5640 1 -- 192.168.123.104:0/438248174 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd77c115c30 con 0x7fd77c10a660 2026-03-10T10:08:56.514 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd76b7fe640 1 --2- 192.168.123.104:0/438248174 >> v2:192.168.123.104:6800/632047608 conn(0x7fd75003d980 0x7fd75003fe40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:56.514 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.508+0000 7fd76b7fe640 1 -- 192.168.123.104:0/438248174 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 1069+0+0 (secure 0 0 0) 0x7fd76c076860 con 0x7fd77c10a660 2026-03-10T10:08:56.517 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.512+0000 7fd781d19640 1 --2- 192.168.123.104:0/438248174 >> v2:192.168.123.104:6800/632047608 conn(0x7fd75003d980 0x7fd75003fe40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:56.518 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.512+0000 7fd76b7fe640 1 -- 192.168.123.104:0/438248174 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd76c0373d0 con 0x7fd77c10a660 2026-03-10T10:08:56.518 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.512+0000 7fd781d19640 1 --2- 192.168.123.104:0/438248174 >> v2:192.168.123.104:6800/632047608 conn(0x7fd75003d980 0x7fd75003fe40 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7fd770009a10 tx=0x7fd770006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:56.611 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.604+0000 7fd7847a5640 1 -- 192.168.123.104:0/438248174 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd crush tunables", "profile": "default"} v 0) -- 0x7fd77c10aac0 con 0x7fd77c10a660 2026-03-10T10:08:56.690 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.684+0000 7fd76b7fe640 1 -- 192.168.123.104:0/438248174 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd crush tunables", "profile": "default"}]=0 adjusted tunables profile to default v4) ==== 124+0+0 (secure 0 0 0) 0x7fd76c038790 con 0x7fd77c10a660 2026-03-10T10:08:56.690 INFO:teuthology.orchestra.run.vm04.stderr:adjusted tunables profile to default 2026-03-10T10:08:56.692 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.688+0000 7fd7847a5640 1 -- 192.168.123.104:0/438248174 >> v2:192.168.123.104:6800/632047608 conn(0x7fd75003d980 msgr2=0x7fd75003fe40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:56.692 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.688+0000 7fd7847a5640 1 --2- 192.168.123.104:0/438248174 >> v2:192.168.123.104:6800/632047608 conn(0x7fd75003d980 0x7fd75003fe40 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7fd770009a10 tx=0x7fd770006eb0 comp rx=0 tx=0).stop
2026-03-10T10:08:56.692 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.688+0000 7fd7847a5640 1 -- 192.168.123.104:0/438248174 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd77c10a660 msgr2=0x7fd77c115290 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:56.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.688+0000 7fd7847a5640 1 --2- 192.168.123.104:0/438248174 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd77c10a660 0x7fd77c115290 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7fd76c009b30 tx=0x7fd76c002c80 comp rx=0 tx=0).stop
2026-03-10T10:08:56.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.688+0000 7fd7847a5640 1 -- 192.168.123.104:0/438248174 shutdown_connections
2026-03-10T10:08:56.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.688+0000 7fd7847a5640 1 --2- 192.168.123.104:0/438248174 >> v2:192.168.123.104:6800/632047608 conn(0x7fd75003d980 0x7fd75003fe40 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:56.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.688+0000 7fd7847a5640 1 --2- 192.168.123.104:0/438248174 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd77c10a660 0x7fd77c115290 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:56.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.688+0000 7fd7847a5640 1 -- 192.168.123.104:0/438248174 >> 192.168.123.104:0/438248174 conn(0x7fd77c100250 msgr2=0x7fd77c101f40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:56.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.688+0000 7fd7847a5640 1 -- 192.168.123.104:0/438248174 shutdown_connections
2026-03-10T10:08:56.693 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:08:56.688+0000 7fd7847a5640 1 -- 192.168.123.104:0/438248174 wait complete.
2026-03-10T10:08:56.747 INFO:tasks.cephadm:Adding mon.a on vm04
2026-03-10T10:08:56.747 INFO:tasks.cephadm:Adding mon.c on vm04
2026-03-10T10:08:56.747 INFO:tasks.cephadm:Adding mon.b on vm07
2026-03-10T10:08:56.747 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch apply mon '3;vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm07:192.168.123.107=b'
2026-03-10T10:08:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:56 vm04 bash[20742]: cluster 2026-03-10T10:08:55.605563+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:08:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:56 vm04 bash[20742]: audit 2026-03-10T10:08:56.611747+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.104:0/438248174' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-10T10:08:57.863 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:08:57.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:57 vm04 bash[20742]: audit 2026-03-10T10:08:56.690239+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.104:0/438248174' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-10T10:08:57.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:57 vm04 bash[20742]: cluster 2026-03-10T10:08:56.691519+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T10:08:58.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 -- 192.168.123.107:0/3445061697 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0548102560 msgr2=0x7f0548102940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:08:58.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 --2- 192.168.123.107:0/3445061697 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0548102560 0x7f0548102940 secure :-1 s=READY pgs=86 cs=0 l=1 rev1=1 crypto rx=0x7f05380099b0 tx=0x7f053802f2b0 comp rx=0 tx=0).stop
2026-03-10T10:08:58.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 -- 192.168.123.107:0/3445061697 shutdown_connections
2026-03-10T10:08:58.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 --2- 192.168.123.107:0/3445061697 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0548102560 0x7f0548102940 unknown :-1 s=CLOSED pgs=86 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:08:58.008 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 -- 192.168.123.107:0/3445061697 >> 192.168.123.107:0/3445061697 conn(0x7f05480fe000 msgr2=0x7f0548100420 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:08:58.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 -- 192.168.123.107:0/3445061697 shutdown_connections
2026-03-10T10:08:58.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 -- 192.168.123.107:0/3445061697 wait complete.
2026-03-10T10:08:58.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 Processor -- start 2026-03-10T10:08:58.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 -- start start 2026-03-10T10:08:58.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0548102560 0x7f054819b670 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:58.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f054810bed0 con 0x7f0548102560 2026-03-10T10:08:58.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f054df13640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0548102560 0x7f054819b670 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:58.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f054df13640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0548102560 0x7f054819b670 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.107:40258/0 (socket says 192.168.123.107:40258) 2026-03-10T10:08:58.009 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f054df13640 1 -- 192.168.123.107:0/3645347501 learned_addr learned my addr 192.168.123.107:0/3645347501 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-10T10:08:58.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f054df13640 1 -- 192.168.123.107:0/3645347501 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f054819bbb0 con 0x7f0548102560 2026-03-10T10:08:58.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f054df13640 1 --2- 192.168.123.107:0/3645347501 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0548102560 0x7f054819b670 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f0538002a80 tx=0x7f0538002ab0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:58.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f053effd640 1 -- 192.168.123.107:0/3645347501 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0538047070 con 0x7f0548102560 2026-03-10T10:08:58.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f053effd640 1 -- 192.168.123.107:0/3645347501 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f053803e070 con 0x7f0548102560 2026-03-10T10:08:58.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 -- 192.168.123.107:0/3645347501 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f054819be40 con 0x7f0548102560 2026-03-10T10:08:58.011 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f055019e640 1 -- 192.168.123.107:0/3645347501 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f054819ff40 con 0x7f0548102560 2026-03-10T10:08:58.011 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.007+0000 7f053effd640 1 -- 192.168.123.107:0/3645347501 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f05380425f0 con 0x7f0548102560 2026-03-10T10:08:58.011 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.011+0000 7f053effd640 1 -- 192.168.123.107:0/3645347501 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f0538038470 con 0x7f0548102560 2026-03-10T10:08:58.011 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.011+0000 7f055019e640 1 -- 192.168.123.107:0/3645347501 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0548103a40 con 0x7f0548102560 2026-03-10T10:08:58.011 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.011+0000 7f053effd640 1 --2- 192.168.123.107:0/3645347501 >> v2:192.168.123.104:6800/632047608 conn(0x7f052003d980 0x7f052003fe40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:58.012 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.011+0000 7f053effd640 1 -- 192.168.123.107:0/3645347501 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7f0538076ab0 con 0x7f0548102560 2026-03-10T10:08:58.012 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.011+0000 7f054d712640 1 --2- 192.168.123.107:0/3645347501 >> v2:192.168.123.104:6800/632047608 conn(0x7f052003d980 0x7f052003fe40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:58.012 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.011+0000 7f054d712640 1 --2- 192.168.123.107:0/3645347501 >> v2:192.168.123.104:6800/632047608 conn(0x7f052003d980 0x7f052003fe40 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f05300099c0 tx=0x7f0530006eb0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:58.015 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.011+0000 7f053effd640 1 -- 192.168.123.107:0/3645347501 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0538034280 con 0x7f0548102560 2026-03-10T10:08:58.113 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.111+0000 7f055019e640 1 -- 192.168.123.107:0/3645347501 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mon", "placement": "3;vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm07:192.168.123.107=b", "target": ["mon-mgr", ""]}) -- 0x7f05481029c0 con 0x7f052003d980 2026-03-10T10:08:58.118 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f053effd640 1 -- 192.168.123.107:0/3645347501 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f05481029c0 con 0x7f052003d980 2026-03-10T10:08:58.118 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled mon update... 
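The mgr_command above is cephadm's `orch apply` for the mon service: a spec with count 3 and pinned names and addresses, which mgr.y acknowledges just before the task prints "Scheduled mon update...". A hedged sketch of issuing the same spec from the CLI; the placement string is copied verbatim from this log, while the `ceph orch apply mon --placement=...` form is the documented CLI equivalent, not necessarily how the task drives it internally:

    import subprocess

    # Placement copied verbatim from the mgr_command above: count 3, with
    # mon.a and mon.c pinned to vm04 and mon.b pinned to vm07.
    PLACEMENT = ("3;vm04:192.168.123.104=a;"
                 "vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;"
                 "vm07:192.168.123.107=b")

    subprocess.run(["sudo", "ceph", "orch", "apply", "mon",
                    "--placement", PLACEMENT], check=True)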
2026-03-10T10:08:58.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f055019e640 1 -- 192.168.123.107:0/3645347501 >> v2:192.168.123.104:6800/632047608 conn(0x7f052003d980 msgr2=0x7f052003fe40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:58.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f055019e640 1 --2- 192.168.123.107:0/3645347501 >> v2:192.168.123.104:6800/632047608 conn(0x7f052003d980 0x7f052003fe40 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f05300099c0 tx=0x7f0530006eb0 comp rx=0 tx=0).stop 2026-03-10T10:08:58.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f055019e640 1 -- 192.168.123.107:0/3645347501 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0548102560 msgr2=0x7f054819b670 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:58.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f055019e640 1 --2- 192.168.123.107:0/3645347501 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0548102560 0x7f054819b670 secure :-1 s=READY pgs=87 cs=0 l=1 rev1=1 crypto rx=0x7f0538002a80 tx=0x7f0538002ab0 comp rx=0 tx=0).stop 2026-03-10T10:08:58.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f055019e640 1 -- 192.168.123.107:0/3645347501 shutdown_connections 2026-03-10T10:08:58.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f055019e640 1 --2- 192.168.123.107:0/3645347501 >> v2:192.168.123.104:6800/632047608 conn(0x7f052003d980 0x7f052003fe40 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:58.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f055019e640 1 --2- 192.168.123.107:0/3645347501 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0548102560 0x7f054819b670 unknown :-1 s=CLOSED pgs=87 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:58.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f055019e640 1 -- 192.168.123.107:0/3645347501 >> 192.168.123.107:0/3645347501 conn(0x7f05480fe000 msgr2=0x7f05480fe3e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:58.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f055019e640 1 -- 192.168.123.107:0/3645347501 shutdown_connections 2026-03-10T10:08:58.120 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:58.119+0000 7f055019e640 1 -- 192.168.123.107:0/3645347501 wait complete. 2026-03-10T10:08:58.196 DEBUG:teuthology.orchestra.run.vm04:mon.c> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.c.service 2026-03-10T10:08:58.198 DEBUG:teuthology.orchestra.run.vm07:mon.b> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.b.service 2026-03-10T10:08:58.199 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
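"Waiting for 3 mons in monmap..." is a poll: the task repeatedly fetches the monmap (via the `ceph mon dump -f json` call that follows) until mon.a, mon.b, and mon.c are all present. A minimal sketch of that wait loop, assuming the cluster is reachable without the cephadm shell wrapper; the interval and timeout are illustrative, not teuthology's values:

    import json
    import subprocess
    import time

    def mon_count():
        # The same query the task issues next in this log: the monmap as
        # JSON, with one entry per monitor in its "mons" list.
        out = subprocess.run(["sudo", "ceph", "mon", "dump", "-f", "json"],
                             check=True, capture_output=True, text=True).stdout
        return len(json.loads(out)["mons"])

    deadline = time.time() + 300      # timeout is illustrative
    while mon_count() < 3:
        if time.time() > deadline:
            raise TimeoutError("monmap never reached 3 mons")
        time.sleep(5)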
2026-03-10T10:08:58.199 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph mon dump -f json 2026-03-10T10:08:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:58 vm04 bash[20742]: cluster 2026-03-10T10:08:57.605739+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:08:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:58 vm04 bash[20742]: audit 2026-03-10T10:08:58.118008+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:08:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:58 vm04 bash[20742]: audit 2026-03-10T10:08:58.118448+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:08:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:58 vm04 bash[20742]: audit 2026-03-10T10:08:58.119349+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:08:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:58 vm04 bash[20742]: audit 2026-03-10T10:08:58.119763+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:08:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:58 vm04 bash[20742]: audit 2026-03-10T10:08:58.121892+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:08:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:58 vm04 bash[20742]: audit 2026-03-10T10:08:58.122674+0000
mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:08:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:58 vm04 bash[20742]: audit 2026-03-10T10:08:58.123028+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:08:59.356 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config 2026-03-10T10:08:59.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 systemd[1]: Started Ceph mon.b for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:08:59.726 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.723+0000 7f0fa3577640 1 -- 192.168.123.107:0/60699696 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0fa40739f0 msgr2=0x7f0fa4073dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.723+0000 7f0fa3577640 1 --2- 192.168.123.107:0/60699696 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0fa40739f0 0x7f0fa4073dd0 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f0f940099b0 tx=0x7f0f9402f2b0 comp rx=0 tx=0).stop 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.723+0000 7f0fa3577640 1 -- 192.168.123.107:0/60699696 shutdown_connections 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.723+0000 7f0fa3577640 1 --2- 192.168.123.107:0/60699696 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0fa40739f0 0x7f0fa4073dd0 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.723+0000 7f0fa3577640 1 -- 192.168.123.107:0/60699696 >> 192.168.123.107:0/60699696 conn(0x7f0fa406d270 msgr2=0x7f0fa406d680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.723+0000 7f0fa3577640 1 -- 192.168.123.107:0/60699696 shutdown_connections 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.723+0000 7f0fa3577640 1 -- 192.168.123.107:0/60699696 wait complete.
2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.723+0000 7f0fa3577640 1 Processor -- start 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa3577640 1 -- start start 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa3577640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0fa40739f0 0x7f0fa4117780 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa3577640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f0fa411afd0 con 0x7f0fa40739f0 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0fa40739f0 0x7f0fa4117780 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa2575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0fa40739f0 0x7f0fa4117780 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.107:40282/0 (socket says 192.168.123.107:40282) 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa2575640 1 -- 192.168.123.107:0/1760053205 learned_addr learned my addr 192.168.123.107:0/1760053205 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-10T10:08:59.727 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa2575640 1 -- 192.168.123.107:0/1760053205 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0fa4117cc0 con 0x7f0fa40739f0 2026-03-10T10:08:59.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa2575640 1 --2- 192.168.123.107:0/1760053205 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0fa40739f0 0x7f0fa4117780 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f0f94002fd0 tx=0x7f0f94004290 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:59.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0f8b7fe640 1 -- 192.168.123.107:0/1760053205 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0f9402fc60 con 0x7f0fa40739f0 2026-03-10T10:08:59.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa3577640 1 -- 192.168.123.107:0/1760053205 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0fa4117f50 con 0x7f0fa40739f0 2026-03-10T10:08:59.728 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa3577640 1 -- 192.168.123.107:0/1760053205 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0fa41183b0 con 0x7f0fa40739f0 2026-03-10T10:08:59.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0f8b7fe640 1 -- 192.168.123.107:0/1760053205 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0f94033040 con 0x7f0fa40739f0 2026-03-10T10:08:59.729 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0f8b7fe640 1 -- 192.168.123.107:0/1760053205 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0f94031200 con 0x7f0fa40739f0 2026-03-10T10:08:59.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0f8b7fe640 1 -- 192.168.123.107:0/1760053205 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f0f940313a0 con 0x7f0fa40739f0 2026-03-10T10:08:59.729 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0f8b7fe640 1 --2- 192.168.123.107:0/1760053205 >> v2:192.168.123.104:6800/632047608 conn(0x7f0f6c03dd40 0x7f0f6c040200 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:08:59.730 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0f8b7fe640 1 -- 192.168.123.107:0/1760053205 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7f0f9403e070 con 0x7f0fa40739f0 2026-03-10T10:08:59.730 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa1d74640 1 --2- 192.168.123.107:0/1760053205 >> v2:192.168.123.104:6800/632047608 conn(0x7f0f6c03dd40 0x7f0f6c040200 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:08:59.730 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.727+0000 7f0fa3577640 1 -- 192.168.123.107:0/1760053205 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0fa4074f00 con 0x7f0fa40739f0 2026-03-10T10:08:59.733 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.731+0000 7f0fa1d74640 1 --2- 192.168.123.107:0/1760053205 >> v2:192.168.123.104:6800/632047608 conn(0x7f0f6c03dd40 0x7f0f6c040200 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f0f9c00ad30 tx=0x7f0f9c0093f0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:08:59.733 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.731+0000 7f0f8b7fe640 1 -- 192.168.123.107:0/1760053205 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0f94036370 con 0x7f0fa40739f0 2026-03-10T10:08:59.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.851+0000 7f0f8b7fe640 1 -- 192.168.123.107:0/1760053205 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_map magic: 0 ==== 309+0+0 (secure 0 0 0) 0x7f0f94049330 con 0x7f0fa40739f0 2026-03-10T10:08:59.904 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:08:59.903+0000 7f0fa3577640 1 -- 192.168.123.107:0/1760053205 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f0fa4073dd0 con 0x7f0fa40739f0 2026-03-10T10:08:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:08:59 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T10:08:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:59 vm04 bash[20742]: audit 2026-03-10T10:08:58.114614+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm07:192.168.123.107=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:08:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:59 vm04 bash[20742]: cephadm 2026-03-10T10:08:58.115675+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm07:192.168.123.107=b;count:3 2026-03-10T10:08:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:59 vm04 bash[20742]: cephadm 2026-03-10T10:08:58.123528+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm07 2026-03-10T10:08:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:59 vm04 bash[20742]: audit 2026-03-10T10:08:59.664977+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:08:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:59 vm04 bash[20742]: audit 2026-03-10T10:08:59.666999+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:08:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:59 vm04 bash[20742]: audit 2026-03-10T10:08:59.669028+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:08:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:59 vm04 bash[20742]: audit 2026-03-10T10:08:59.669387+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]:
dispatch 2026-03-10T10:08:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:08:59 vm04 bash[20742]: audit 2026-03-10T10:08:59.669807+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.771+0000 7f55b325fd80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.771+0000 7f55b325fd80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.771+0000 7f55b325fd80 0 pidfile_write: ignore empty --pid-file 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.771+0000 7f55b325fd80 0 load: jerasure load: lrc 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Git sha 0 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: DB SUMMARY 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: DB Session ID: P7N9O4M7XVFXQYAWJ1ZL 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 0, files: 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Write Ahead Log file in
/var/lib/ceph/mon/ceph-b/store.db: 000004.log size: 511 ; 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.env: 0x561c8dac5dc0 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.info_log: 0x561cc8ca7880 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.statistics: (nil) 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.use_fsync: 0 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T10:09:00.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 
2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.db_log_dir: 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.wal_dir: 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.write_buffer_manager: 0x561cc8cab900 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T10:09:00.014 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.unordered_write: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.row_cache: None 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.wal_filter: None 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T10:09:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.wal_compression: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: 
Options.avoid_unnecessary_blocking_io: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 
2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_open_files: -1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Compression algorithms supported: 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: kZSTD supported: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Fast CRC32 
supported: Supported on x86 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.merge_operator: 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_filter: None 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561cc8ca6480) 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cache_index_and_filter_blocks: 1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: pin_top_level_index_and_filter: 1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: index_type: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: data_block_index_type: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: index_shortening: 1 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: checksum: 4 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
10:08:59 vm07 bash[23367]: no_block_cache: 0 2026-03-10T10:09:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: block_cache: 0x561cc8ccd350 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: block_cache_name: BinnedLRUCache 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: block_cache_options: 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: capacity : 536870912 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: num_shard_bits : 4 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: strict_capacity_limit : 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: high_pri_pool_ratio: 0.000 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: block_cache_compressed: (nil) 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: persistent_cache: (nil) 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: block_size: 4096 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: block_size_deviation: 10 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: block_restart_interval: 16 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: index_block_restart_interval: 1 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: metadata_block_size: 4096 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: partition_filters: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: use_delta_encoding: 1 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: filter_policy: bloomfilter 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: whole_key_filtering: 1 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: verify_compression: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: read_amp_bytes_per_bit: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: format_version: 5 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: enable_index_compression: 1 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: block_align: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: max_auto_readahead_size: 262144 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: prepopulate_block_cache: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: initial_auto_readahead_size: 8192 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: num_file_reads_for_auto_readahead: 2 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.write_buffer_size: 33554432 
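mon.b's first start dumps every effective RocksDB option into its journal as `rocksdb: Options.<name>: <value>` records, as seen above and continuing below. A small sketch (illustrative, not part of teuthology) for scraping those pairs into a dict so the effective settings of two daemons can be diffed:

    import re

    # Matches the journal records in this dump, e.g.
    #   "rocksdb: Options.write_buffer_size: 33554432"
    OPT_RE = re.compile(r"rocksdb: (Options\.[\w.\[\]]+)\s*:\s*(.+?)\s*$")

    def rocksdb_options(journal_lines):
        opts = {}
        for line in journal_lines:
            m = OPT_RE.search(line)
            if m:
                opts[m.group(1)] = m.group(2)
        return opts

    # e.g. on mon.b's journal above:
    #   rocksdb_options(lines)["Options.write_buffer_size"] -> '33554432'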
2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compression: NoCompression 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.num_levels: 7 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 
2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T10:09:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T10:09:00.017 
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T10:09:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.bloom_locality: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.ttl: 2592000
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.enable_blob_files: false
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.min_blob_size: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.775+0000 7f55b325fd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.783+0000 7f55b325fd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.783+0000 7f55b325fd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.783+0000 7f55b325fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 326a04de-df7c-41f6-ad7e-f41e814a38b3
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.783+0000 7f55b325fd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773137339786271, "job": 1, "event": "recovery_started", "wal_files": [4]}
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.783+0000 7f55b325fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.791+0000 7f55b325fd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773137339794513, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773137339, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "326a04de-df7c-41f6-ad7e-f41e814a38b3", "db_session_id": "P7N9O4M7XVFXQYAWJ1ZL", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.791+0000 7f55b325fd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773137339794602, "job": 1, "event": "recovery_finished"}
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.791+0000 7f55b325fd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.795+0000 7f55b325fd80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-b/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.795+0000 7f55b325fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561cc8ccee00
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.795+0000 7f55b325fd80 4 rocksdb: DB pointer 0x561cc8dda000
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.795+0000 7f55b325fd80 0 mon.b does not exist in monmap, will attempt to join an existing cluster
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.795+0000 7f55b325fd80 0 using public_addr v2:192.168.123.107:0/0 -> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0]
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.795+0000 7f55b325fd80 0 starting mon.b rank -1 at public addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] at bind addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon_data /var/lib/ceph/mon/ceph-b fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.795+0000 7f55b325fd80 1 mon.b@-1(???) e0 preinit fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.795+0000 7f55a9029640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.795+0000 7f55a9029640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: ** DB Stats **
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T10:09:00.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: ** Compaction Stats [default] **
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: ** Compaction Stats [default] **
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.008 0 0 0.0 0.0
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: AddFile(Keys): cumulative 0, interval 0
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Cumulative compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Interval compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Block cache BinnedLRUCache@0x561cc8ccd350#7 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: ** File Read Latency Histogram By Level [default] **
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 0 mon.b@-1(synchronizing).mds e1 new map
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 0 mon.b@-1(synchronizing).mds e1 print_map
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: e1
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: btime 2026-03-10T10:08:09.663579+0000
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: enable_multiple, ever_enabled_multiple: 1,1
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: legacy client fscid: -1
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]:
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: No filesystems configured
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 1 mon.b@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 1 mon.b@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 1 mon.b@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 1 mon.b@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 1 mon.b@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 1 mon.b@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.815+0000 7f55ac02f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:09.664020+0000 mon.a (mon.0) 0 : cluster [INF] mkfs e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:09.664020+0000 mon.a (mon.0) 0 : cluster [INF] mkfs e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:09.659529+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:09.659529+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680214+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680214+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680238+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680238+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680243+0000 mon.a (mon.0) 3 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680243+0000 mon.a (mon.0) 3 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680245+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680245+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680253+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680253+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:00.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680256+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680256+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680259+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680259+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680261+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680261+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680467+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680467+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680479+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680479+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680836+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:10.680836+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:10.906149+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.104:0/3141641644' entity='client.admin'
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:10.906149+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.104:0/3141641644' entity='client.admin'
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:11.474847+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.104:0/947440216' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:11.474847+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.104:0/947440216' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:13.706685+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.104:0/2122167615' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:13.706685+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.104:0/2122167615' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:14.269965+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:14.269965+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:14.273720+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00383463s)
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:14.273720+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00383463s)
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.277013+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.277013+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.277453+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.277453+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.277836+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.277836+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.278523+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.278523+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.278924+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.278924+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:14.283610+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:14.283610+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.292305+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.292305+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.293364+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y'
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.293364+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y'
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.295226+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.295226+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.296438+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y'
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.296438+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y'
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.298870+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y'
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:14.298870+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y'
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:15.278903+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.00902s)
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:15.278903+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.00902s)
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:16.013180+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.104:0/2656217666' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:16.013180+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.104:0/2656217666' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:16.253402+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.104:0/1778658827' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:16.253402+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.104:0/1778658827' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:16.284952+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s)
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:16.284952+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s)
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:16.529292+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.104:0/756455820' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:16.529292+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.104:0/756455820' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:17.323368+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.104:0/756455820' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:17.323368+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.104:0/756455820' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:17.328303+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s)
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:17.328303+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s)
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:17.641899+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.104:0/1685368523' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:17.641899+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.104:0/1685368523' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:20.274732+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:20.274732+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted
2026-03-10T10:09:00.020 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:20.274918+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:20.274918+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:20.279317+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:20.279317+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:20.279391+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00455868s)
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:20.279391+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00455868s)
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.283005+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.283005+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.283065+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.283065+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.283801+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.283801+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.283959+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.283959+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.284138+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.284138+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:20.288740+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:20.288740+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.297079+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.297079+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.299817+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.299817+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.315592+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.315592+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.316475+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.316475+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.318310+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.318310+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.324639+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.324639+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:20.294863+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:20.294863+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.600624+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.600624+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.603262+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:20.603262+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:21.283038+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.0082s)
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:21.283038+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.0082s)
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.285553+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.285553+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.289274+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.289274+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:21.359736+0000 mgr.y (mgr.14118) 4 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Bus STARTING
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:21.359736+0000 mgr.y (mgr.14118) 4 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Bus STARTING
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.570101+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.570101+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.575693+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.575693+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.580337+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.580337+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:22.064614+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:22.064614+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:22.066629+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:22.066629+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:21.462056+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:21.462056+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.567030+0000 mgr.y (mgr.14118) 6 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.567030+0000 mgr.y (mgr.14118) 6 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:21.576426+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:21.576426+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:21.576458+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Bus STARTED
[10/Mar/2026:10:08:21] ENGINE Bus STARTED 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:21.576458+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Bus STARTED 2026-03-10T10:09:00.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:21.577574+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Client ('192.168.123.104', 39838) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:21.577574+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Client ('192.168.123.104', 39838) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.806820+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:21.806820+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:22.049695+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:22.049695+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:22.049877+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:22.049877+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:22.298852+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:23.070528+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s)
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:22.551477+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "addr": "192.168.123.104", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:23.080721+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm04
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:24.334465+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:24.334982+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm04
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:24.336239+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:24.656660+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:24.657506+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:24.659934+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:24.907544+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:24.908186+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:24.910913+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:25.167348+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.104:0/654643366' entity='client.admin'
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:25.417615+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.104:0/4216453309' entity='client.admin'
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:25.703248+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 192.168.123.104:0/923033120' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:25.772469+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:26.030738+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:26.667102+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.104:0/923033120' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:26.669484+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 6s)
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:27.000668+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.104:0/1631284863' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:29.595102+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:29.595534+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:29.601364+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:29.601482+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00606578s)
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:29.603257+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:00.022 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:29.603351+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:29.603851+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:29.604009+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:29.604190+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:29.607998+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:29.623051+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:29.624299+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:29.625008+0000 mon.a (mon.0) 85 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:30.399315+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Bus STARTING
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:30.507067+0000 mgr.y (mgr.14150) 2 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:30.507665+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Client ('192.168.123.104', 39822) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:30.604799+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.00939s)
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:30.606818+0000 mgr.y (mgr.14150) 4 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:30.608189+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:30.608221+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Bus STARTED
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:30.610577+0000 mgr.y (mgr.14150) 7 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:30.929579+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:30.931533+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:31.320638+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:31.552124+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.104:0/487836184' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:30.893168+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:31.170677+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:31.850862+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.104:0/1555448482' entity='client.admin'
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:32.325607+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e12: y(active, since 2s)
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:33.889238+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:34.407558+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:35.893854+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s)
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:36.019055+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.104:0/2021177804' entity='client.admin'
2026-03-10T10:09:00.023 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.225084+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.227027+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.227528+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.229630+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.234256+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.236401+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.960990+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.961494+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.962304+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.962810+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:41.103892+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:41.106797+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:41.110263+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:40.958093+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:40.963376+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:41.000767+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:41.034303+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:41.070244+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:45.961179+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:46.480872+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm07
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:47.689623+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:47.689956+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm07
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:47.690134+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:47.970252+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:49.238117+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:49.605026+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:49.768909+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:51.605202+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.518303+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.520464+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.523500+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.528791+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.024 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.529349+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.529990+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.530427+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:52.531018+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:52.564357+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:52.594545+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:52.626942+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.648005+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.665767+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.667879+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:52.672453+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:53.605374+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:55.605563+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:56.611747+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.104:0/438248174' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:56.690239+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.104:0/438248174' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:56.691519+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cluster 2026-03-10T10:08:57.605739+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:58.118008+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:58.118448+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:58.119349+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:58.119763+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:58.121892+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:58.122674+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:58.123028+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:58.114614+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm07:192.168.123.107=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:58.115675+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm07:192.168.123.107=b;count:3
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: cephadm 2026-03-10T10:08:58.123528+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm07
2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:59.664977+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:59.666999+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:59.666999+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:59.669028+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:59.669028+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:59.669387+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:59.669387+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:00.025 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:59.669807+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:00.026 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: audit 2026-03-10T10:08:59.669807+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:00.026 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:08:59 vm07 bash[23367]: debug 2026-03-10T10:08:59.843+0000 7f55ac02f640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T10:09:01.303 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:01.303 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:01.303 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:01.303 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:01.303 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:01.303 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 systemd[1]: Started Ceph mon.c for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:09:01.303 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:01 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:01.304 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:01 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:01.304 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:09:01 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:01.304 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:09:01 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
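[The KillMode=none warnings above refer to the unit file cephadm generates for this cluster. Purely as an illustration of what systemd's message is asking for, a minimal sketch of a drop-in override follows; the drop-in path and the choice of KillMode=mixed are assumptions taken from the warning text itself, not something this run performed. Note that cephadm ships KillMode=none deliberately so container cleanup is left to the container runtime, so changing it should be verified against cephadm's unit template first.]

    # Hypothetical drop-in, not part of this run:
    # /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d/override.conf
    [Service]
    # 'mixed' or 'control-group' are the values systemd's warning suggests
    KillMode=mixed

[A `systemctl daemon-reload` would be required afterwards for the override to take effect.]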
2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.364+0000 7fba2ac35d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.364+0000 7fba2ac35d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.364+0000 7fba2ac35d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 0 load: jerasure load: lrc 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Git sha 0 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: DB SUMMARY 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: DB Session ID: LVTS8RT9U8UBWHRTM4D7 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files: 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 511 ; 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 
bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.env: 0x55beed488dc0 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.info_log: 0x55bf04a2b880 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T10:09:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T10:09:01.705 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.db_log_dir: 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.wal_dir: 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.write_buffer_manager: 0x55bf04a2f900 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 
rocksdb: Options.enable_pipelined_write: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.row_cache: None 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.wal_filter: None 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: 
debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T10:09:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: 
Options.bytes_per_sync: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Compression algorithms supported: 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: kZSTD supported: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 
2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.merge_operator: 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bf04a2a480) 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cache_index_and_filter_blocks: 1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: pin_top_level_index_and_filter: 1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: index_type: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: data_block_index_type: 0 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: index_shortening: 1 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: checksum: 4 2026-03-10T10:09:01.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: no_block_cache: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: block_cache: 0x55bf04a51350 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: block_cache_name: BinnedLRUCache 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: block_cache_options: 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: capacity : 536870912 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: num_shard_bits : 4 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: strict_capacity_limit : 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: high_pri_pool_ratio: 0.000 2026-03-10T10:09:01.707 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: block_cache_compressed: (nil) 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: persistent_cache: (nil) 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: block_size: 4096 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: block_size_deviation: 10 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: block_restart_interval: 16 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: index_block_restart_interval: 1 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: metadata_block_size: 4096 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: partition_filters: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: use_delta_encoding: 1 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: filter_policy: bloomfilter 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: whole_key_filtering: 1 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: verify_compression: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: read_amp_bytes_per_bit: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: format_version: 5 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: enable_index_compression: 1 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: block_align: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: max_auto_readahead_size: 262144 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: prepopulate_block_cache: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: initial_auto_readahead_size: 8192 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: num_file_reads_for_auto_readahead: 2 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compression: NoCompression 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.num_levels: 7 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T10:09:01.707 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T10:09:01.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 
7fba2ac35d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 
rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T10:09:01.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T10:09:01.709 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: aee7b2f5-c6c3-4927-b03f-a8b573b2bd58 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773137341375520, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.368+0000 7fba2ac35d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-10T10:09:01.709 
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.372+0000 7fba2ac35d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773137341376971, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773137341, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "aee7b2f5-c6c3-4927-b03f-a8b573b2bd58", "db_session_id": "LVTS8RT9U8UBWHRTM4D7", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.372+0000 7fba2ac35d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773137341377041, "job": 1, "event": "recovery_finished"}
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.372+0000 7fba2ac35d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.376+0000 7fba2ac35d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.376+0000 7fba2ac35d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bf04a52e00
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.376+0000 7fba2ac35d80 4 rocksdb: DB pointer 0x55bf04b5e000
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.376+0000 7fba2ac35d80 0 mon.c does not exist in monmap, will attempt to join an existing cluster
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.376+0000 7fba2ac35d80 0 using public_addrv [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0]
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.376+0000 7fba2ac35d80 0 starting mon.c rank -1 at public addrs [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] at bind addrs [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon_data /var/lib/ceph/mon/ceph-c fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.376+0000 7fba2ac35d80 1 mon.c@-1(???) e0 preinit fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.384+0000 7fba209ff640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.384+0000 7fba209ff640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: ** DB Stats **
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Cumulative writes: 1 writes, 63 keys, 1 commit groups, 1.0 writes per commit group, ingest: 0.00 GB, 65.56 MB/s
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Interval writes: 1 writes, 63 keys, 1 commit groups, 1.0 writes per commit group, ingest: 1.01 MB, 65.56 MB/s
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: ** Compaction Stats [default] **
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.1 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.1 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.1 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: ** Compaction Stats [default] **
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.1 0.00 0.00 1 0.001 0 0 0.0 0.0
2026-03-10T10:09:01.709 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: AddFile(Keys): cumulative 0, interval 0
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Block cache BinnedLRUCache@0x55bf04a51350#7 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: ** File Read Latency Histogram By Level [default] **
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.392+0000 7fba23a05640 0 mon.c@-1(synchronizing).mds e1 new map
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.392+0000 7fba23a05640 0 mon.c@-1(synchronizing).mds e1 print_map
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: e1
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: btime 2026-03-10T10:08:09.663579+0000
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: enable_multiple, ever_enabled_multiple: 1,1
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: legacy client fscid: -1
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]:
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: No filesystems configured
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.392+0000 7fba23a05640 1 mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.392+0000 7fba23a05640 1 mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.392+0000 7fba23a05640 1 mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.392+0000 7fba23a05640 1 mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.392+0000 7fba23a05640 1 mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.392+0000 7fba23a05640 1 mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.396+0000 7fba23a05640 0 mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.396+0000 7fba23a05640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.396+0000 7fba23a05640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.396+0000 7fba23a05640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:09.664020+0000 mon.a (mon.0) 0 : cluster [INF] mkfs e4c1c9d6-1c68-11f1-a9bd-116050875839
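
The replayed records that follow all share one shape: channel, stamp, entity, rank, sequence number, level, message. A rough sketch of splitting them into fields; the regex is an assumption based on the record shape visible in this log, not a stable Ceph format guarantee:

    import re

    # "cluster 2026-...+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1"
    REC_RE = re.compile(
        r"(?P<channel>cluster|audit|cephadm) "
        r"(?P<stamp>\S+) "
        r"(?P<entity>\S+) \((?P<rank>[^)]+)\) (?P<seq>\d+) : "
        r"\w+ \[(?P<level>[A-Z]+)\] (?P<msg>.*)"
    )

    def parse_cluster_record(line):
        """Return the record fields as a dict, or None for non-record lines."""
        m = REC_RE.search(line)
        return m.groupdict() if m else None
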
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:09.659529+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680214+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680238+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680243+0000 mon.a (mon.0) 3 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680245+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680253+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680256+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680259+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680261+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680467+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680479+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:10.680836+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:10.906149+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.104:0/3141641644' entity='client.admin'
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:11.474847+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.104:0/947440216' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T10:09:01.710 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:13.706685+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.104:0/2122167615' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:14.269965+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:14.273720+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00383463s)
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:14.277013+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:14.277453+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:14.277836+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:14.278523+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:14.278924+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:14.283610+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:14.292305+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:14.293364+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y'
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:14.295226+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:14.296438+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y'
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:14.298870+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.104:0/3037244906' entity='mgr.y'
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:15.278903+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.00902s)
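
The audit records above and below carry the command bodies as JSON (cmd=[{...}]: dispatch, or single-quoted with ': finished'), so the commands a run issued can be recovered mechanically. A small sketch, with an illustrative regex that assumes only the quoting variants seen in this log:

    import json
    import re

    # Greedy [.*] so nested brackets such as "target": ["mon-mgr", ""] survive.
    CMD_RE = re.compile(r"cmd='?(\[.*\])'?: (dispatch|finished)")

    def parse_audit_cmd(line):
        """Return (decoded command list, phase) or None for non-command lines."""
        m = CMD_RE.search(line)
        if not m:
            return None
        return json.loads(m.group(1)), m.group(2)

    # On the 'mgr module enable' record below this returns
    # ([{'prefix': 'mgr module enable', 'module': 'cephadm'}], 'finished').
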
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:16.013180+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.104:0/2656217666' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:16.253402+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.104:0/1778658827' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:16.284952+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s)
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:16.529292+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.104:0/756455820' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:17.323368+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.104:0/756455820' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:17.328303+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s)
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:17.641899+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.104:0/1685368523' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:20.274732+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:20.274918+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:20.279317+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:20.279391+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.00455868s)
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.283005+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.283065+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.283801+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.283959+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.284138+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:20.288740+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.297079+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.299817+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:01.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.315592+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.316475+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.318310+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.324639+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:20.294863+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
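
cephadm's bootstrap restarts the active mgr more than once (seq 36 above, and again at seq 73 further down), which is the expected respawn after enabling modules rather than a failure. A follow-on sketch that counts those cycles, reusing the illustrative parse_cluster_record helper sketched earlier:

    def count_mgr_restarts(lines):
        """Count 'Active manager daemon ... restarted' cluster records."""
        restarts = 0
        for line in lines:
            rec = parse_cluster_record(line)
            if rec and rec["channel"] == "cluster" \
                   and "manager daemon" in rec["msg"].lower() \
                   and "restarted" in rec["msg"]:
                restarts += 1
        return restarts
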
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.600624+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:20.603262+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:21.283038+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.0082s)
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:21.285553+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:21.289274+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:21.359736+0000 mgr.y (mgr.14118) 4 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Bus STARTING
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:21.570101+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:21.575693+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:21.580337+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:22.064614+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:22.066629+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:21.462056+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:21.567030+0000 mgr.y (mgr.14118) 6 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:21.576426+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:21.576458+0000 mgr.y (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Bus STARTED
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:21.577574+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:10:08:21] ENGINE Client ('192.168.123.104', 39838) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:21.806820+0000 mgr.y (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:22.049695+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:22.049877+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:22.298852+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:23.070528+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s)
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:22.551477+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "addr": "192.168.123.104", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:23.080721+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm04
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:24.334465+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y'
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:24.334982+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm04
2026-03-10T10:09:01.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:24.336239+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
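
Taken together, the audit trail above spells out the CLI sequence cephadm used to bring this host under management. A minimal re-creation of that sequence, assuming a reachable cluster and an admin keyring on the host; a sketch of the calls, not the suite's own code:

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its stdout."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    ceph("mgr", "module", "enable", "cephadm")   # audit seq 32/33 above
    ceph("orch", "set", "backend", "cephadm")    # audit (mgr.14118) 6
    ceph("cephadm", "set-user", "root")          # audit (mgr.14118) 10
    ceph("cephadm", "generate-key")              # audit (mgr.14118) 11
    pub = ceph("cephadm", "get-pub-key")         # audit (mgr.14118) 13
    # `pub` must land in root's authorized_keys on the target host before:
    ceph("orch", "host", "add", "vm04", "192.168.123.104")  # audit seq 14
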
bash[28289]: audit 2026-03-10T10:08:24.656660+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:24.656660+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:24.657506+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:24.657506+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:24.659934+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:24.659934+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:24.907544+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:24.907544+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:24.908186+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:24.908186+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:24.910913+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:24.910913+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:25.167348+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.104:0/654643366' entity='client.admin' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:25.167348+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 
192.168.123.104:0/654643366' entity='client.admin' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:25.417615+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.104:0/4216453309' entity='client.admin' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:25.417615+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.104:0/4216453309' entity='client.admin' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:25.703248+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 192.168.123.104:0/923033120' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:25.703248+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 192.168.123.104:0/923033120' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:25.772469+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:25.772469+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:26.030738+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:26.030738+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.104:0/2126091993' entity='mgr.y' 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:26.667102+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.104:0/923033120' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:26.667102+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.104:0/923033120' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:26.669484+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 6s) 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:26.669484+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 6s) 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:27.000668+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.104:0/1631284863' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:27.000668+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:29.595102+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:29.595534+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:29.601364+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:29.601482+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00606578s)
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:29.603257+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:29.603351+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:29.603851+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:29.604009+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:29.604190+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:29.607998+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:29.623051+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:29.624299+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:29.625008+0000 mon.a (mon.0) 85 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:30.399315+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Bus STARTING
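
After the module is enabled the active mgr restarts under a new rank (mgr.14118 becomes mgr.14150, still daemon y) and re-collects mon/mgr/mds/osd metadata before cephadm's CherryPy engine comes up. A quick way to verify the restart converged, a sketch using only standard CLI calls:

    ceph mgr stat        # reports the active mgr name and map epoch
    ceph mgr module ls   # 'dashboard' should now appear among the enabled modules
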
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:30.507067+0000 mgr.y (mgr.14150) 2 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:30.507665+0000 mgr.y (mgr.14150) 3 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Client ('192.168.123.104', 39822) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:30.604799+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.00939s)
2026-03-10T10:09:01.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:30.606818+0000 mgr.y (mgr.14150) 4 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:30.608189+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:30.608221+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:10:08:30] ENGINE Bus STARTED
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:30.610577+0000 mgr.y (mgr.14150) 7 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:30.929579+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:30.931533+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:31.320638+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:31.552124+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.104:0/487836184' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
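
Here the dashboard comes up on https://192.168.123.104:7150 while cephadm's internal endpoint serves plain HTTP on 8765, and a client immediately reads the configured SSL port back. The equivalent query, as a sketch (the upstream default for this key is 8443; this run overrides it):

    ceph config get mgr mgr/dashboard/ssl_server_port
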
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:30.893168+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:31.170677+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:31.850862+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.104:0/1555448482' entity='client.admin'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:32.325607+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e12: y(active, since 2s)
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:33.889238+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:34.407558+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:35.893854+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s)
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:36.019055+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.104:0/2021177804' entity='client.admin'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.225084+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.227027+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.227528+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.229630+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.234256+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.236401+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.960990+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
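
Records 8 and 9 on mgr.y are the dashboard bootstrap: a throwaway self-signed TLS certificate followed by an admin account. A sketch of the same steps by hand; the password file path is illustrative only, and the exact flag spellings for force_password/pwd_update_required should be checked against this release's help output:

    ceph dashboard create-self-signed-cert
    echo -n 's3cret' > /tmp/dash_pw   # hypothetical credential file
    ceph dashboard ac-user-create admin -i /tmp/dash_pw administrator
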
2026-03-10T10:09:01.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.961494+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.962304+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.962810+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:41.103892+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:41.106797+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:41.110263+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:40.958093+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:40.963376+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:41.000767+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:41.034303+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:41.070244+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:45.961179+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:46.480872+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm07
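
The orchestrator is then told to keep the admin keyring present on every host, and the second node is enrolled; both commands are taken directly from the audit payloads above:

    ceph orch client-keyring set client.admin '*' --mode 0755
    ceph orch host add vm07
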
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:47.689623+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:47.689956+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm07
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:47.690134+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:47.970252+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:49.238117+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:49.605026+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:49.768909+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:51.605202+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.518303+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.520464+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.523500+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.528791+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.529349+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.529990+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.530427+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:01.715 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:52.531018+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:52.564357+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:52.594545+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:52.626942+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.648005+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.665767+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.667879+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:52.672453+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
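
Once vm07 is added, cephadm distributes ceph.conf and the admin keyring to both /etc/ceph and the fsid-scoped /var/lib/ceph/<fsid>/config directory on the new host, and the test lists hosts to confirm enrollment. The same check by hand:

    ceph orch host ls --format json   # should now report both vm04 and vm07
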
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:53.605374+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:55.605563+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:56.611747+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.104:0/438248174' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:56.690239+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.104:0/438248174' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:56.691519+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cluster 2026-03-10T10:08:57.605739+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:58.118008+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:58.118448+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:58.119349+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:58.119763+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:58.121892+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
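
Note that applying the CRUSH tunables profile bumps the osdmap epoch (e3 to e4) even though no OSDs exist yet. The dispatched command corresponds to:

    ceph osd crush tunables default
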
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:58.122674+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:58.123028+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:58.114614+0000 mgr.y (mgr.14150) 28 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm07:192.168.123.107=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:58.115675+0000 mgr.y (mgr.14150) 29 : cephadm [INF] Saving service mon spec with placement vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm07:192.168.123.107=b;count:3
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: cephadm 2026-03-10T10:08:58.123528+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Deploying daemon mon.b on vm07
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:59.664977+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:59.666999+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:59.669028+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:59.669387+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: audit 2026-03-10T10:08:59.669807+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:01.716 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:01 vm04 bash[28289]: debug 2026-03-10T10:09:01.404+0000 7fba23a05640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3
2026-03-10T10:09:04.864 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.863+0000 7f0f8b7fe640 1 -- 192.168.123.107:0/1760053205 <== mon.0 v2:192.168.123.104:3300/0 8 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 2 v2) ==== 95+0+1031 (secure 0 0 0) 0x7f0f94037e40 con 0x7f0fa40739f0
2026-03-10T10:09:04.865 INFO:teuthology.orchestra.run.vm07.stderr:dumped monmap epoch 2
2026-03-10T10:09:04.865 INFO:teuthology.orchestra.run.vm07.stdout:
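
The three-mon spec pins each daemon to a host, and for mon.c additionally to explicit v2/v1 addresses, which is how this job exercises a monitor on non-default ports (3301/6790). The CLI form of the dispatched orch apply:

    ceph orch apply mon '3;vm04:192.168.123.104=a;vm04:[v2:192.168.123.104:3301,v1:192.168.123.104:6790]=c;vm07:192.168.123.107=b'
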
2026-03-10T10:09:04.865 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":2,"fsid":"e4c1c9d6-1c68-11f1-a9bd-116050875839","modified":"2026-03-10T10:08:59.852684Z","created":"2026-03-10T10:08:08.532327Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-10T10:09:04.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.863+0000 7f0fa3577640 1 -- 192.168.123.107:0/1760053205 >> v2:192.168.123.104:6800/632047608 conn(0x7f0f6c03dd40 msgr2=0x7f0f6c040200 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:09:04.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.863+0000 7f0fa3577640 1 --2- 192.168.123.107:0/1760053205 >> v2:192.168.123.104:6800/632047608 conn(0x7f0f6c03dd40 0x7f0f6c040200 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f0f9c00ad30 tx=0x7f0f9c0093f0 comp rx=0 tx=0).stop
2026-03-10T10:09:04.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.867+0000 7f0fa3577640 1 -- 192.168.123.107:0/1760053205 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0fa40739f0 msgr2=0x7f0fa4117780 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:09:04.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.867+0000 7f0fa3577640 1 --2- 192.168.123.107:0/1760053205 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0fa40739f0 0x7f0fa4117780 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f0f94002fd0 tx=0x7f0f94004290 comp rx=0 tx=0).stop
2026-03-10T10:09:04.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.867+0000 7f0fa3577640 1 -- 192.168.123.107:0/1760053205 shutdown_connections
2026-03-10T10:09:04.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.867+0000 7f0fa3577640 1 --2- 192.168.123.107:0/1760053205 >> v2:192.168.123.104:6800/632047608 conn(0x7f0f6c03dd40 0x7f0f6c040200 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:04.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.867+0000 7f0fa3577640 1 --2- 192.168.123.107:0/1760053205 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0fa40739f0 0x7f0fa4117780 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:04.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.867+0000 7f0fa3577640 1 -- 192.168.123.107:0/1760053205 >> 192.168.123.107:0/1760053205 conn(0x7f0fa406d270 msgr2=0x7f0fa4072f50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:09:04.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.867+0000 7f0fa3577640 1 -- 192.168.123.107:0/1760053205 shutdown_connections
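
This is the monmap the test just fetched: epoch 2 already contains mon.a and mon.b with quorum [0,1], while mon.c is still synchronizing. A sketch of pulling the member names out of the same dump (jq is an assumption of this example, not something the job itself uses):

    ceph mon dump -f json | jq -r '.mons[].name'   # expect: a b
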
2026-03-10T10:09:04.867 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:04.867+0000 7f0fa3577640 1 -- 192.168.123.107:0/1760053205 wait complete.
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:08:59.855798+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:08:59.855901+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:08:59.856208+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:08:59.905419+0000 mon.a (mon.0) 144 : audit [DBG] from='client.? 192.168.123.107:0/1760053205' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:00.851870+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:01.414857+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:01.606076+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:01.851973+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:01.855680+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:02.414372+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:02.851899+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:05.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:03.414488+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:03.606247+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:03.852226+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.414626+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.852320+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.860786+0000 mon.a (mon.0) 154 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864480+0000 mon.a (mon.0) 155 : cluster [DBG] monmap epoch 2
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864496+0000 mon.a (mon.0) 156 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864506+0000 mon.a (mon.0) 157 : cluster [DBG] last_changed 2026-03-10T10:08:59.852684+0000
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864515+0000 mon.a (mon.0) 158 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864523+0000 mon.a (mon.0) 159 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864536+0000 mon.a (mon.0) 160 : cluster [DBG] election_strategy: 1
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864546+0000 mon.a (mon.0) 161 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864554+0000 mon.a (mon.0) 162 : cluster [DBG] 1: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b
2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864821+0000 mon.a (mon.0) 163 : cluster [DBG] fsmap
2026-03-10T10:09:04.864821+0000 mon.a (mon.0) 163 : cluster [DBG] fsmap 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864841+0000 mon.a (mon.0) 164 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864841+0000 mon.a (mon.0) 164 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864946+0000 mon.a (mon.0) 165 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.864946+0000 mon.a (mon.0) 165 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.865018+0000 mon.a (mon.0) 166 : cluster [INF] overall HEALTH_OK 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: cluster 2026-03-10T10:09:04.865018+0000 mon.a (mon.0) 166 : cluster [INF] overall HEALTH_OK 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.869271+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.869271+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.873418+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.873418+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.877091+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.877091+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.881372+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.881372+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.891027+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:05.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:04 vm04 bash[20742]: audit 2026-03-10T10:09:04.891027+0000 mon.a (mon.0) 171 : audit [DBG] 
from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:08:59.855798+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:08:59.855798+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:08:59.855901+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:08:59.855901+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:08:59.856208+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:08:59.856208+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:08:59.905419+0000 mon.a (mon.0) 144 : audit [DBG] from='client.? 192.168.123.107:0/1760053205' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:08:59.905419+0000 mon.a (mon.0) 144 : audit [DBG] from='client.? 
192.168.123.107:0/1760053205' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:00.851870+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:00.851870+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:01.414857+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:01.414857+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:01.606076+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:01.606076+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:01.851973+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:01.851973+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:01.855680+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:01.855680+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:02.414372+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:02.414372+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:02.851899+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.263 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:02.851899+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:03.414488+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:03.414488+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:03.606247+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:03.606247+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:03.852226+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:03.852226+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.414626+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.414626+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.852320+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.852320+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.860786+0000 mon.a (mon.0) 154 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.860786+0000 mon.a (mon.0) 154 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 
2026-03-10T10:09:04.864480+0000 mon.a (mon.0) 155 : cluster [DBG] monmap epoch 2 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864480+0000 mon.a (mon.0) 155 : cluster [DBG] monmap epoch 2 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864496+0000 mon.a (mon.0) 156 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864496+0000 mon.a (mon.0) 156 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864506+0000 mon.a (mon.0) 157 : cluster [DBG] last_changed 2026-03-10T10:08:59.852684+0000 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864506+0000 mon.a (mon.0) 157 : cluster [DBG] last_changed 2026-03-10T10:08:59.852684+0000 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864515+0000 mon.a (mon.0) 158 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864515+0000 mon.a (mon.0) 158 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864523+0000 mon.a (mon.0) 159 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864523+0000 mon.a (mon.0) 159 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864536+0000 mon.a (mon.0) 160 : cluster [DBG] election_strategy: 1 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864536+0000 mon.a (mon.0) 160 : cluster [DBG] election_strategy: 1 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864546+0000 mon.a (mon.0) 161 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864546+0000 mon.a (mon.0) 161 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864554+0000 mon.a (mon.0) 162 : cluster [DBG] 1: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864554+0000 mon.a (mon.0) 162 : cluster [DBG] 1: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864821+0000 mon.a (mon.0) 163 : cluster [DBG] fsmap 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 
2026-03-10T10:09:04.864821+0000 mon.a (mon.0) 163 : cluster [DBG] fsmap 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864841+0000 mon.a (mon.0) 164 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864841+0000 mon.a (mon.0) 164 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864946+0000 mon.a (mon.0) 165 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.864946+0000 mon.a (mon.0) 165 : cluster [DBG] mgrmap e13: y(active, since 35s) 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.865018+0000 mon.a (mon.0) 166 : cluster [INF] overall HEALTH_OK 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: cluster 2026-03-10T10:09:04.865018+0000 mon.a (mon.0) 166 : cluster [INF] overall HEALTH_OK 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.869271+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.869271+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.873418+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.873418+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.877091+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.877091+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.881372+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.881372+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.891027+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:05.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:04 vm07 bash[23367]: audit 2026-03-10T10:09:04.891027+0000 mon.a (mon.0) 171 : audit [DBG] 
from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:05.929 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-10T10:09:05.929 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph mon dump -f json 2026-03-10T10:09:06.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:09:05 vm04 bash[20997]: debug 2026-03-10T10:09:05.848+0000 7f88fcec7640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-10T10:09:09.669 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:05.421389+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:05.421389+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:05.421466+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:05.421466+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:05.421504+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:05.421504+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:05.421762+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:05.421762+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:05.425406+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:05.425406+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:05.606418+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B 
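The "Waiting for 3 mons in monmap..." step above is a poll: teuthology repeatedly runs `ceph mon dump -f json` (here wrapped in `cephadm shell`) and counts the monitors in the returned monmap until all three (a, b, c) have joined. A minimal sketch of the same check, assuming a host with a configured ceph CLI; the helper name and retry policy are illustrative, not teuthology's actual code:

    # Sketch of the "wait for N mons in monmap" poll seen in the log above.
    # The command and fsid context come from the log; wait_for_mons() and
    # its timeout/interval defaults are hypothetical choices.
    import json
    import subprocess
    import time

    def wait_for_mons(want: int, timeout: float = 300, interval: float = 5) -> dict:
        """Poll `ceph mon dump -f json` until the monmap lists `want` monitors."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.check_output(["sudo", "ceph", "mon", "dump", "-f", "json"])
            monmap = json.loads(out)
            if len(monmap.get("mons", [])) >= want:
                return monmap
            time.sleep(interval)
        raise TimeoutError("monmap never reached %d mons" % want)

    # wait_for_mons(3) corresponds to "Waiting for 3 mons in monmap..."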
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:05.421389+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:05.421466+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:05.421504+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:05.421762+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:05.425406+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:05.606418+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:06.414714+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:07.414939+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:07.418471+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:07.606618+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:08.415418+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:09.415067+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:09.606780+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:10.415454+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.425691+0000 mon.a (mon.0) 182 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429431+0000 mon.a (mon.0) 183 : cluster [DBG] monmap epoch 3
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429439+0000 mon.a (mon.0) 184 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429443+0000 mon.a (mon.0) 185 : cluster [DBG] last_changed 2026-03-10T10:09:05.417182+0000
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429449+0000 mon.a (mon.0) 186 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429452+0000 mon.a (mon.0) 187 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429458+0000 mon.a (mon.0) 188 : cluster [DBG] election_strategy: 1
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429480+0000 mon.a (mon.0) 189 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429484+0000 mon.a (mon.0) 190 : cluster [DBG] 1: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429490+0000 mon.a (mon.0) 191 : cluster [DBG] 2: [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon.c
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429722+0000 mon.a (mon.0) 192 : cluster [DBG] fsmap
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429733+0000 mon.a (mon.0) 193 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429837+0000 mon.a (mon.0) 194 : cluster [DBG] mgrmap e13: y(active, since 40s)
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: cluster 2026-03-10T10:09:10.429908+0000 mon.a (mon.0) 195 : cluster [INF] overall HEALTH_OK
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:10.465128+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:10.468148+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:10.471921+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:10.475935+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:10 vm04 bash[20742]: audit 2026-03-10T10:09:10.479489+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
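The replay above reaches the state the poll was waiting for: "mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)" at monmap epoch 3. To confirm the same state by hand one could read `ceph quorum_status -f json`; a short sketch, again assuming a configured ceph CLI (the expected values in the comments are the ones reported in this log):

    # Sketch: confirm the three-mon quorum the cluster log just reported.
    # Field names follow the JSON emitted by `ceph quorum_status -f json`.
    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["sudo", "ceph", "quorum_status", "-f", "json"]))
    print("leader:", status["quorum_leader_name"])  # expected: a
    print("quorum:", status["quorum_names"])        # expected: ['a', 'b', 'c']
    print("epoch:", status["monmap"]["epoch"])      # expected: 3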
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:08:59.855798+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:08:59.855901+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:10.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:08:59.856208+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:08:59.905419+0000 mon.a (mon.0) 144 : audit [DBG] from='client.? 192.168.123.107:0/1760053205' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:00.851870+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:01.414857+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:01.606076+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:01.851973+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:01.855680+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:02.414372+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:02.851899+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:03.414488+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:03.606247+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:03.852226+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:04.414626+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:04.852320+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.860786+0000 mon.a (mon.0) 154 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864480+0000 mon.a (mon.0) 155 : cluster [DBG] monmap epoch 2
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864496+0000 mon.a (mon.0) 156 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864506+0000 mon.a (mon.0) 157 : cluster [DBG] last_changed 2026-03-10T10:08:59.852684+0000
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864515+0000 mon.a (mon.0) 158 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864523+0000 mon.a (mon.0) 159 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864536+0000 mon.a (mon.0) 160 : cluster [DBG] election_strategy: 1
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864546+0000 mon.a (mon.0) 161 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864554+0000 mon.a (mon.0) 162 : cluster [DBG] 1: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864821+0000 mon.a (mon.0) 163 : cluster [DBG] fsmap
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864841+0000 mon.a (mon.0) 164 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.864946+0000 mon.a (mon.0) 165 : cluster [DBG] mgrmap e13: y(active, since 35s)
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:04.865018+0000 mon.a (mon.0) 166 : cluster [INF] overall HEALTH_OK
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:04.869271+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:04.873418+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:04.877091+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:04.881372+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:04.891027+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:05.421389+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:05.421466+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:05.421504+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:05.421762+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:05.425406+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:05.606418+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:06.414714+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:07.414939+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:07.418471+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:07.606618+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:08.415418+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:09.415067+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:09.606780+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:10.415454+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.425691+0000 mon.a (mon.0) 182 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429431+0000 mon.a (mon.0) 183 : cluster [DBG] monmap epoch 3
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429439+0000 mon.a (mon.0) 184 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429443+0000 mon.a (mon.0) 185 : cluster [DBG] last_changed 2026-03-10T10:09:05.417182+0000
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429449+0000 mon.a (mon.0) 186 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429452+0000 mon.a (mon.0) 187 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429458+0000 mon.a (mon.0) 188 : cluster [DBG] election_strategy: 1
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429480+0000 mon.a (mon.0) 189 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429484+0000 mon.a (mon.0) 190 : cluster [DBG] 1: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b
2026-03-10T10:09:10.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429490+0000 mon.a (mon.0) 191 : cluster [DBG] 2: [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon.c
2026-03-10T10:09:10.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429722+0000 mon.a (mon.0) 192 : cluster [DBG] fsmap
2026-03-10T10:09:10.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429733+0000 mon.a (mon.0) 193 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T10:09:10.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429837+0000 mon.a (mon.0) 194 : cluster [DBG] mgrmap e13: y(active, since 40s)
2026-03-10T10:09:10.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: cluster 2026-03-10T10:09:10.429908+0000 mon.a (mon.0) 195 : cluster [INF] overall HEALTH_OK
2026-03-10T10:09:10.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:10.465128+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:10.468148+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
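The records above are one complete election round: after mon.c starts, mon.a, mon.b and mon.c each call a monitor election, and mon.a retakes leadership with mons a,b,c in quorum on monmap epoch 3. When triaging a run it can help to boil the relayed cluster log down to just those election events; a minimal sketch in Python (the regex and the election_events helper are illustrative, not part of teuthology):

import re

# Hypothetical triage helper: match the cluster-log record embedded in each
# journalctl line, e.g.
#   "... bash[28289]: cluster <stamp> mon.a (mon.0) 182 : cluster [INF] <msg>"
RECORD = re.compile(
    r"(?P<stamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+\+\d{4}) "
    r"(?P<who>\S+) \((?P<rank>[^)]+)\) (?P<seq>\d+) : "
    r"(?P<channel>\w+) \[(?P<level>\w+)\] (?P<message>.*)$"
)

def election_events(lines):
    """Yield (stamp, who, message) for election calls and leadership changes."""
    for line in lines:
        m = RECORD.search(line)
        if m and ("calling monitor election" in m.group("message")
                  or "is new leader" in m.group("message")):
            yield m.group("stamp"), m.group("who"), m.group("message")

# Fed the records above, this reports mon.a, mon.b and mon.c each calling an
# election, then "mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)".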
2026-03-10T10:09:10.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:10.471921+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:10.475935+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:10 vm04 bash[28289]: audit 2026-03-10T10:09:10.479489+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:05.421389+0000 mon.a (mon.0) 173 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:05.421466+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:05.421504+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:05.421762+0000 mon.a (mon.0) 176 : cluster [INF] mon.a calling monitor election
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:05.425406+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:05.606418+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:06.414714+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:07.414939+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:07.418471+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:07.606618+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:08.415418+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:09.415067+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:09.606780+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:10.415454+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.425691+0000 mon.a (mon.0) 182 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429431+0000 mon.a (mon.0) 183 : cluster [DBG] monmap epoch 3
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429439+0000 mon.a (mon.0) 184 : cluster [DBG] fsid e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429443+0000 mon.a (mon.0) 185 : cluster [DBG] last_changed 2026-03-10T10:09:05.417182+0000
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429449+0000 mon.a (mon.0) 186 : cluster [DBG] created 2026-03-10T10:08:08.532327+0000
2026-03-10T10:09:10.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429452+0000 mon.a (mon.0) 187 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429458+0000 mon.a (mon.0) 188 : cluster [DBG] election_strategy: 1
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429480+0000 mon.a (mon.0) 189 : cluster [DBG] 0: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.a
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429484+0000 mon.a (mon.0) 190 : cluster [DBG] 1: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429490+0000 mon.a (mon.0) 191 : cluster [DBG] 2: [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] mon.c
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429722+0000 mon.a (mon.0) 192 : cluster [DBG] fsmap
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429733+0000 mon.a (mon.0) 193 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429837+0000 mon.a (mon.0) 194 : cluster [DBG] mgrmap e13: y(active, since 40s)
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: cluster 2026-03-10T10:09:10.429908+0000 mon.a (mon.0) 195 : cluster [INF] overall HEALTH_OK
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:10.465128+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:10.468148+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:10.471921+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:10.475935+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:10.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:10 vm07 bash[23367]: audit 2026-03-10T10:09:10.479489+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- 192.168.123.107:0/2017821412 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2438100320 msgr2=0x7f2438100700 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:09:11.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 --2- 192.168.123.107:0/2017821412 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2438100320 0x7f2438100700 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f2428002870 tx=0x7f2428030130 comp rx=0 tx=0).stop
2026-03-10T10:09:11.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- 192.168.123.107:0/2017821412 shutdown_connections
2026-03-10T10:09:11.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 --2- 192.168.123.107:0/2017821412 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2438100320 0x7f2438100700 unknown :-1
s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:11.207 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- 192.168.123.107:0/2017821412 >> 192.168.123.107:0/2017821412 conn(0x7f24380fc250 msgr2=0x7f24380fe670 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:09:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- 192.168.123.107:0/2017821412 shutdown_connections 2026-03-10T10:09:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- 192.168.123.107:0/2017821412 wait complete. 2026-03-10T10:09:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 Processor -- start 2026-03-10T10:09:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- start start 2026-03-10T10:09:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2438100320 0x7f243819fee0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24381a0420 0x7f24381a47b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:11.208 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24381a4cf0 0x7f24381a51a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f2438112730 con 0x7f2438100320 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f24381125b0 con 0x7f24381a4cf0 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f24381128b0 con 0x7f24381a0420 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f243e418640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2438100320 0x7f243819fee0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f243e418640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2438100320 0x7f243819fee0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.107:39312/0 (socket says 192.168.123.107:39312) 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f243e418640 1 -- 192.168.123.107:0/3366872918 learned_addr learned my addr 192.168.123.107:0/3366872918 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 
7f243dc17640 1 --2- 192.168.123.107:0/3366872918 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24381a0420 0x7f24381a47b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f243e418640 1 -- 192.168.123.107:0/3366872918 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24381a0420 msgr2=0x7f24381a47b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f243e418640 1 --2- 192.168.123.107:0/3366872918 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24381a0420 0x7f24381a47b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f243e418640 1 -- 192.168.123.107:0/3366872918 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24381a4cf0 msgr2=0x7f24381a51a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f243e418640 1 --2- 192.168.123.107:0/3366872918 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24381a4cf0 0x7f24381a51a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f243e418640 1 -- 192.168.123.107:0/3366872918 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f24381aca60 con 0x7f2438100320 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f243e418640 1 --2- 192.168.123.107:0/3366872918 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2438100320 0x7f243819fee0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f2428030660 tx=0x7f2428039810 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f242f7fe640 1 -- 192.168.123.107:0/3366872918 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2428036070 con 0x7f2438100320 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- 192.168.123.107:0/3366872918 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f24381acc90 con 0x7f2438100320 2026-03-10T10:09:11.209 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- 192.168.123.107:0/3366872918 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f24381ad1a0 con 0x7f2438100320 2026-03-10T10:09:11.210 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f242f7fe640 1 -- 192.168.123.107:0/3366872918 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f2428047070 con 0x7f2438100320 2026-03-10T10:09:11.210 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f242f7fe640 1 -- 192.168.123.107:0/3366872918 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f242803a670 con 0x7f2438100320 
2026-03-10T10:09:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f242f7fe640 1 -- 192.168.123.107:0/3366872918 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f242800f040 con 0x7f2438100320 2026-03-10T10:09:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f242f7fe640 1 --2- 192.168.123.107:0/3366872918 >> v2:192.168.123.104:6800/632047608 conn(0x7f240803ddb0 0x7f2408040270 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f242f7fe640 1 -- 192.168.123.107:0/3366872918 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7f24280781c0 con 0x7f2438100320 2026-03-10T10:09:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.207+0000 7f24406a3640 1 -- 192.168.123.107:0/3366872918 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2400005180 con 0x7f2438100320 2026-03-10T10:09:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.211+0000 7f243dc17640 1 --2- 192.168.123.107:0/3366872918 >> v2:192.168.123.104:6800/632047608 conn(0x7f240803ddb0 0x7f2408040270 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:09:11.211 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.211+0000 7f243dc17640 1 --2- 192.168.123.107:0/3366872918 >> v2:192.168.123.104:6800/632047608 conn(0x7f240803ddb0 0x7f2408040270 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f2420006fd0 tx=0x7f2420008040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:09:11.214 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.211+0000 7f242f7fe640 1 -- 192.168.123.107:0/3366872918 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2428040b40 con 0x7f2438100320 2026-03-10T10:09:11.348 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.347+0000 7f24406a3640 1 -- 192.168.123.107:0/3366872918 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "mon dump", "format": "json"} v 0) -- 0x7f2400005470 con 0x7f2438100320 2026-03-10T10:09:11.348 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.347+0000 7f242f7fe640 1 -- 192.168.123.107:0/3366872918 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "mon dump", "format": "json"}]=0 dumped monmap epoch 3 v3) ==== 95+0+1309 (secure 0 0 0) 0x7f2428047220 con 0x7f2438100320 2026-03-10T10:09:11.348 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T10:09:11.348 
INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":3,"fsid":"e4c1c9d6-1c68-11f1-a9bd-116050875839","modified":"2026-03-10T10:09:05.417182Z","created":"2026-03-10T10:08:08.532327Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3301","nonce":0},{"type":"v1","addr":"192.168.123.104:6790","nonce":0}]},"addr":"192.168.123.104:6790/0","public_addr":"192.168.123.104:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]} 2026-03-10T10:09:11.348 INFO:teuthology.orchestra.run.vm07.stderr:dumped monmap epoch 3 2026-03-10T10:09:11.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.347+0000 7f24406a3640 1 -- 192.168.123.107:0/3366872918 >> v2:192.168.123.104:6800/632047608 conn(0x7f240803ddb0 msgr2=0x7f2408040270 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:11.350 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.347+0000 7f24406a3640 1 --2- 192.168.123.107:0/3366872918 >> v2:192.168.123.104:6800/632047608 conn(0x7f240803ddb0 0x7f2408040270 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f2420006fd0 tx=0x7f2420008040 comp rx=0 tx=0).stop 2026-03-10T10:09:11.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.347+0000 7f24406a3640 1 -- 192.168.123.107:0/3366872918 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2438100320 msgr2=0x7f243819fee0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:11.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.347+0000 7f24406a3640 1 --2- 192.168.123.107:0/3366872918 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2438100320 0x7f243819fee0 secure :-1 s=READY pgs=95 cs=0 l=1 rev1=1 crypto rx=0x7f2428030660 tx=0x7f2428039810 comp rx=0 tx=0).stop 2026-03-10T10:09:11.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.351+0000 7f24406a3640 1 -- 192.168.123.107:0/3366872918 shutdown_connections 2026-03-10T10:09:11.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.351+0000 7f24406a3640 1 --2- 192.168.123.107:0/3366872918 >> v2:192.168.123.104:6800/632047608 conn(0x7f240803ddb0 0x7f2408040270 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:11.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.351+0000 7f24406a3640 1 --2- 192.168.123.107:0/3366872918 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24381a4cf0 0x7f24381a51a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:11.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.351+0000 
7f24406a3640 1 --2- 192.168.123.107:0/3366872918 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24381a0420 0x7f24381a47b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:11.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.351+0000 7f24406a3640 1 --2- 192.168.123.107:0/3366872918 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2438100320 0x7f243819fee0 unknown :-1 s=CLOSED pgs=95 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:11.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.351+0000 7f24406a3640 1 -- 192.168.123.107:0/3366872918 >> 192.168.123.107:0/3366872918 conn(0x7f24380fc250 msgr2=0x7f24380fd800 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:09:11.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.351+0000 7f24406a3640 1 -- 192.168.123.107:0/3366872918 shutdown_connections
2026-03-10T10:09:11.351 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:11.351+0000 7f24406a3640 1 -- 192.168.123.107:0/3366872918 wait complete.
2026-03-10T10:09:11.425 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-10T10:09:11.425 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph config generate-minimal-conf
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.481043+0000 mon.a (mon.0) 201 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.481701+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: cephadm 2026-03-10T10:09:10.482238+0000 mgr.y (mgr.14150) 38 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: cephadm 2026-03-10T10:09:10.482649+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: cephadm 2026-03-10T10:09:10.547230+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: cephadm 2026-03-10T10:09:10.547345+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.588636+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.592513+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.596233+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.599810+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.603649+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.671 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.624314+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.627926+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.631262+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.634652+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: cephadm 2026-03-10T10:09:10.634966+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
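The monmap that the client dumped a few lines above (mon dump, epoch 3) is plain JSON, so a check like "all three mons are in quorum at the expected v2 addresses" can be scripted rather than read off the blob by eye. A minimal sketch, assuming the JSON payload has been captured into a string; quorum_summary is a hypothetical helper, not part of teuthology or cephadm:

import json

# Illustrative only: summarize quorum membership from the output of
# "ceph mon dump --format json" as printed in the log above.
def quorum_summary(monmap_json):
    """Return a one-line summary; monmap_json is the captured dump output."""
    m = json.loads(monmap_json)
    by_rank = {mon["rank"]: mon for mon in m["mons"]}
    members = ", ".join(
        "%s@%s" % (by_rank[r]["name"],
                   by_rank[r]["public_addrs"]["addrvec"][0]["addr"])
        for r in m["quorum"]
    )
    return "epoch %d (%s), election_strategy=%d, quorum: %s" % (
        m["epoch"], m["min_mon_release_name"], m["election_strategy"], members)

# For the dump above this returns:
#   epoch 3 (squid), election_strategy=1, quorum: a@192.168.123.104:3300,
#   b@192.168.123.107:3300, c@192.168.123.104:3301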
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.635656+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.636088+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:10.636494+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: cephadm 2026-03-10T10:09:10.637000+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.c on vm04
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.010097+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.014335+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.015471+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.016271+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.016705+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.348818+0000 mon.a (mon.0) 220 : audit [DBG] from='client.? 192.168.123.107:0/3366872918' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.376376+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.383815+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.384815+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.386513+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.387143+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:11.672 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:11 vm07 bash[23367]: audit 2026-03-10T10:09:11.415638+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.481043+0000 mon.a (mon.0) 201 : audit [DBG] from='mgr.14150
192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.481043+0000 mon.a (mon.0) 201 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.481701+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.481701+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.482238+0000 mgr.y (mgr.14150) 38 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.482238+0000 mgr.y (mgr.14150) 38 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.482649+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.482649+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.547230+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.547230+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.547345+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.547345+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.588636+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.588636+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.592513+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.592513+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.596233+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.596233+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.599810+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.599810+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.603649+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.603649+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.624314+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.624314+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.627926+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.627926+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.631262+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.631262+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.634652+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.634652+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.634966+0000 mgr.y (mgr.14150) 42 
: cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.634966+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.635656+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.635656+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.636088+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.636088+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.636494+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:10.636494+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.637000+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.c on vm04 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: cephadm 2026-03-10T10:09:10.637000+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.c on vm04 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.010097+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.010097+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.014335+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.014335+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.015471+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 
192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.015471+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.016271+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.016271+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.016705+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.016705+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.348818+0000 mon.a (mon.0) 220 : audit [DBG] from='client.? 192.168.123.107:0/3366872918' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.348818+0000 mon.a (mon.0) 220 : audit [DBG] from='client.? 
192.168.123.107:0/3366872918' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.376376+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.376376+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.383815+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.383815+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.384815+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.384815+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.386513+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.386513+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.387143+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.387143+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.415638+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:11 vm04 bash[20742]: audit 2026-03-10T10:09:11.415638+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.481043+0000 mon.a (mon.0) 201 : audit [DBG] from='mgr.14150 
192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.481043+0000 mon.a (mon.0) 201 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.481701+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.481701+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.482238+0000 mgr.y (mgr.14150) 38 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.482238+0000 mgr.y (mgr.14150) 38 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.482649+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.482649+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.547230+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.547230+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.547345+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.547345+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:09:11.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.588636+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.588636+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.592513+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.592513+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.596233+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.596233+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.599810+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.599810+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.603649+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.603649+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.624314+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.624314+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.627926+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.627926+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.631262+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.631262+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.634652+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.634652+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.634966+0000 mgr.y (mgr.14150) 42 
: cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.634966+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.635656+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.635656+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.636088+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.636088+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.636494+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:10.636494+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.637000+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.c on vm04 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: cephadm 2026-03-10T10:09:10.637000+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.c on vm04 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.010097+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.010097+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.014335+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.014335+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.015471+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 
192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.015471+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.016271+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.016271+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.016705+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.016705+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.348818+0000 mon.a (mon.0) 220 : audit [DBG] from='client.? 192.168.123.107:0/3366872918' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.348818+0000 mon.a (mon.0) 220 : audit [DBG] from='client.? 
192.168.123.107:0/3366872918' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.376376+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.376376+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.383815+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.383815+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.384815+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.384815+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.386513+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.386513+0000 mon.a (mon.0) 224 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.387143+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.387143+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.415638+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:11.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:11 vm04 bash[28289]: audit 2026-03-10T10:09:11.415638+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: cephadm 2026-03-10T10:09:11.014820+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring mon.a (unknown last 
config time)... 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: cephadm 2026-03-10T10:09:11.014820+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: cephadm 2026-03-10T10:09:11.017231+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring daemon mon.a on vm04 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: cephadm 2026-03-10T10:09:11.017231+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring daemon mon.a on vm04 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: cephadm 2026-03-10T10:09:11.384598+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: cephadm 2026-03-10T10:09:11.384598+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: cephadm 2026-03-10T10:09:11.389102+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: cephadm 2026-03-10T10:09:11.389102+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: cluster 2026-03-10T10:09:11.606949+0000 mgr.y (mgr.14150) 48 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: cluster 2026-03-10T10:09:11.606949+0000 mgr.y (mgr.14150) 48 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.779909+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.779909+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.783570+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.783570+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.784688+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.784688+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 
2026-03-10T10:09:11.785869+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.785869+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.786270+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.786270+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.789954+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:12 vm07 bash[23367]: audit 2026-03-10T10:09:11.789954+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: cephadm 2026-03-10T10:09:11.014820+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: cephadm 2026-03-10T10:09:11.014820+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: cephadm 2026-03-10T10:09:11.017231+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring daemon mon.a on vm04 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: cephadm 2026-03-10T10:09:11.017231+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring daemon mon.a on vm04 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: cephadm 2026-03-10T10:09:11.384598+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: cephadm 2026-03-10T10:09:11.384598+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: cephadm 2026-03-10T10:09:11.389102+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: cephadm 2026-03-10T10:09:11.389102+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: cluster 2026-03-10T10:09:11.606949+0000 mgr.y (mgr.14150) 48 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: cluster 2026-03-10T10:09:11.606949+0000 mgr.y (mgr.14150) 48 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.779909+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.779909+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.783570+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.783570+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.784688+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.784688+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.785869+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.785869+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.786270+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.786270+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:12.954 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.789954+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:12 vm04 bash[20742]: audit 2026-03-10T10:09:11.789954+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: cephadm 2026-03-10T10:09:11.014820+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: cephadm 2026-03-10T10:09:11.014820+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: cephadm 2026-03-10T10:09:11.017231+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring daemon mon.a on vm04 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: cephadm 2026-03-10T10:09:11.017231+0000 mgr.y (mgr.14150) 45 : cephadm [INF] Reconfiguring daemon mon.a on vm04 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: cephadm 2026-03-10T10:09:11.384598+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: cephadm 2026-03-10T10:09:11.384598+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: cephadm 2026-03-10T10:09:11.389102+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: cephadm 2026-03-10T10:09:11.389102+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: cluster 2026-03-10T10:09:11.606949+0000 mgr.y (mgr.14150) 48 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: cluster 2026-03-10T10:09:11.606949+0000 mgr.y (mgr.14150) 48 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.779909+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.779909+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.783570+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.783570+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.784688+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.784688+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.785869+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.785869+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.786270+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.786270+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.789954+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:12 vm04 bash[28289]: audit 2026-03-10T10:09:11.789954+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:14 vm04 bash[20742]: cluster 2026-03-10T10:09:13.607129+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:14 vm04 bash[20742]: cluster 2026-03-10T10:09:13.607129+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:14 vm04 bash[28289]: cluster 2026-03-10T10:09:13.607129+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:14 vm04 bash[28289]: cluster 2026-03-10T10:09:13.607129+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:14 vm07 bash[23367]: cluster 2026-03-10T10:09:13.607129+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:14 vm07 bash[23367]: cluster 2026-03-10T10:09:13.607129+0000 mgr.y (mgr.14150) 49 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-10T10:09:16.055 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:09:16.217 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/3861291501 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5d3410a070 msgr2=0x7f5d34111bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 --2- 192.168.123.104:0/3861291501 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5d3410a070 0x7f5d34111bf0 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f5d3000b0a0 tx=0x7f5d3002f450 comp rx=0 tx=0).stop 2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/3861291501 shutdown_connections 2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 --2- 192.168.123.104:0/3861291501 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5d3410a070 0x7f5d34111bf0 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 --2- 192.168.123.104:0/3861291501 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d341058f0 0x7f5d34109940 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 --2- 192.168.123.104:0/3861291501 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5d34104f40 0x7f5d34105320 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/3861291501 >> 192.168.123.104:0/3861291501 conn(0x7f5d341009e0 msgr2=0x7f5d34102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/3861291501 shutdown_connections 2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/3861291501 wait complete. 
2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 Processor -- start 2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 -- start start 2026-03-10T10:09:16.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d34104f40 0x7f5d341a2630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:16.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5d341058f0 0x7f5d341a2b70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:16.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5d3410a070 0x7f5d3419c7b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:16.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f5d34114600 con 0x7f5d3410a070 2026-03-10T10:09:16.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f5d34114480 con 0x7f5d34104f40 2026-03-10T10:09:16.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d3b4b3640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f5d34114780 con 0x7f5d341058f0 2026-03-10T10:09:16.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d39228640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d34104f40 0x7f5d341a2630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:09:16.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d39228640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d34104f40 0x7f5d341a2630 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.104:38358/0 (socket says 192.168.123.104:38358) 2026-03-10T10:09:16.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.212+0000 7f5d39228640 1 -- 192.168.123.104:0/934654686 learned_addr learned my addr 192.168.123.104:0/934654686 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:09:16.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d39228640 1 -- 192.168.123.104:0/934654686 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5d341058f0 msgr2=0x7f5d341a2b70 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:09:16.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d39a29640 1 --2- 192.168.123.104:0/934654686 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5d3410a070 0x7f5d3419c7b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:09:16.220 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d38a27640 1 --2- 192.168.123.104:0/934654686 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5d341058f0 0x7f5d341a2b70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:09:16.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d39228640 1 --2- 192.168.123.104:0/934654686 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5d341058f0 0x7f5d341a2b70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:16.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d39228640 1 -- 192.168.123.104:0/934654686 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5d3410a070 msgr2=0x7f5d3419c7b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:16.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d39228640 1 --2- 192.168.123.104:0/934654686 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5d3410a070 0x7f5d3419c7b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:16.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d39228640 1 -- 192.168.123.104:0/934654686 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5d3419cfe0 con 0x7f5d34104f40 2026-03-10T10:09:16.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d39a29640 1 --2- 192.168.123.104:0/934654686 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5d3410a070 0x7f5d3419c7b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
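Note: the BANNER_CONNECTING, HELLO_CONNECTING, AUTH_CONNECTING and READY transitions above are the normal msgr2 connection handshake; they are visible only because this client runs with messenger debugging at level 1. A hedged way to reproduce the same trace for a one-off command:

    # sketch: raise messenger debugging for a single CLI call
    sudo cephadm shell -- ceph --debug-ms=1 status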
2026-03-10T10:09:16.221 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d39228640 1 --2- 192.168.123.104:0/934654686 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d34104f40 0x7f5d341a2630 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f5d280027e0 tx=0x7f5d28002cb0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:09:16.221 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d227fc640 1 -- 192.168.123.104:0/934654686 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5d2800ed40 con 0x7f5d34104f40
2026-03-10T10:09:16.221 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/934654686 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5d3419d2d0 con 0x7f5d34104f40
2026-03-10T10:09:16.221 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/934654686 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5d341a9490 con 0x7f5d34104f40
2026-03-10T10:09:16.221 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d227fc640 1 -- 192.168.123.104:0/934654686 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f5d280108a0 con 0x7f5d34104f40
2026-03-10T10:09:16.221 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d227fc640 1 -- 192.168.123.104:0/934654686 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5d2800f6c0 con 0x7f5d34104f40
2026-03-10T10:09:16.221 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d227fc640 1 -- 192.168.123.104:0/934654686 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f5d28010430 con 0x7f5d34104f40
2026-03-10T10:09:16.222 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d227fc640 1 --2- 192.168.123.104:0/934654686 >> v2:192.168.123.104:6800/632047608 conn(0x7f5d0403de00 0x7f5d040402c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:09:16.222 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d227fc640 1 -- 192.168.123.104:0/934654686 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7f5d28052560 con 0x7f5d34104f40
2026-03-10T10:09:16.222 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/934654686 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5d08005180 con 0x7f5d34104f40
2026-03-10T10:09:16.222 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d38a27640 1 --2- 192.168.123.104:0/934654686 >> v2:192.168.123.104:6800/632047608 conn(0x7f5d0403de00 0x7f5d040402c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:09:16.222 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.216+0000 7f5d38a27640 1 --2- 192.168.123.104:0/934654686 >> v2:192.168.123.104:6800/632047608 conn(0x7f5d0403de00 0x7f5d040402c0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f5d240096f0 tx=0x7f5d24009290 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:09:16.227 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.220+0000 7f5d227fc640 1 -- 192.168.123.104:0/934654686 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f5d28014030 con 0x7f5d34104f40
2026-03-10T10:09:16.318 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.312+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/934654686 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "config generate-minimal-conf"} v 0) -- 0x7f5d08005470 con 0x7f5d34104f40
2026-03-10T10:09:16.318 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.312+0000 7f5d227fc640 1 -- 192.168.123.104:0/934654686 <== mon.1 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "config generate-minimal-conf"}]=0 v9) ==== 76+0+289 (secure 0 0 0) 0x7f5d280071c0 con 0x7f5d34104f40
2026-03-10T10:09:16.318 INFO:teuthology.orchestra.run.vm04.stdout:# minimal ceph.conf for e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:16.318 INFO:teuthology.orchestra.run.vm04.stdout:[global]
2026-03-10T10:09:16.318 INFO:teuthology.orchestra.run.vm04.stdout: fsid = e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:09:16.318 INFO:teuthology.orchestra.run.vm04.stdout: mon_host = [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0]
2026-03-10T10:09:16.320 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/934654686 >> v2:192.168.123.104:6800/632047608 conn(0x7f5d0403de00 msgr2=0x7f5d040402c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 --2- 192.168.123.104:0/934654686 >> v2:192.168.123.104:6800/632047608 conn(0x7f5d0403de00 0x7f5d040402c0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f5d240096f0 tx=0x7f5d24009290 comp rx=0 tx=0).stop
2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/934654686 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d34104f40 msgr2=0x7f5d341a2630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 --2- 192.168.123.104:0/934654686 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d34104f40 0x7f5d341a2630 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7f5d280027e0 tx=0x7f5d28002cb0 comp rx=0 tx=0).stop
2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/934654686 shutdown_connections
2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 --2- 192.168.123.104:0/934654686 >> v2:192.168.123.104:6800/632047608 conn(0x7f5d0403de00 0x7f5d040402c0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 --2- 192.168.123.104:0/934654686 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5d3410a070 0x7f5d3419c7b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
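Note: the '# minimal ceph.conf' block above is the monitor's reply to 'config generate-minimal-conf', which the task then writes to /etc/ceph/ceph.conf on both nodes. The manual equivalent would be something like:

    # sketch: regenerate and install the minimal client config shown above
    sudo cephadm shell -- ceph config generate-minimal-conf | sudo tee /etc/ceph/ceph.conf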
2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 --2- 192.168.123.104:0/934654686 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5d341058f0 0x7f5d341a2b70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 --2- 192.168.123.104:0/934654686 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f5d34104f40 0x7f5d341a2630 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/934654686 >> 192.168.123.104:0/934654686 conn(0x7f5d341009e0 msgr2=0x7f5d34101430 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/934654686 shutdown_connections 2026-03-10T10:09:16.321 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:16.316+0000 7f5d3b4b3640 1 -- 192.168.123.104:0/934654686 wait complete. 2026-03-10T10:09:16.392 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-10T10:09:16.392 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:09:16.392 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T10:09:16.399 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:09:16.399 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:09:16.450 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T10:09:16.450 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T10:09:16.458 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T10:09:16.458 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:09:16.506 INFO:tasks.cephadm:Adding mgr.y on vm04 2026-03-10T10:09:16.506 INFO:tasks.cephadm:Adding mgr.x on vm07 2026-03-10T10:09:16.506 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch apply mgr '2;vm04=y;vm07=x' 2026-03-10T10:09:16.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:16 vm07 bash[23367]: cluster 2026-03-10T10:09:15.607290+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:16.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:16 vm07 bash[23367]: cluster 2026-03-10T10:09:15.607290+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:16.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:16 vm07 bash[23367]: audit 2026-03-10T10:09:16.320492+0000 mon.b (mon.1) 3 : audit [DBG] from='client.? 192.168.123.104:0/934654686' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:16.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:16 vm07 bash[23367]: audit 2026-03-10T10:09:16.320492+0000 mon.b (mon.1) 3 : audit [DBG] from='client.? 
192.168.123.104:0/934654686' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:16 vm04 bash[28289]: cluster 2026-03-10T10:09:15.607290+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:16 vm04 bash[28289]: cluster 2026-03-10T10:09:15.607290+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:16 vm04 bash[28289]: audit 2026-03-10T10:09:16.320492+0000 mon.b (mon.1) 3 : audit [DBG] from='client.? 192.168.123.104:0/934654686' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:16 vm04 bash[28289]: audit 2026-03-10T10:09:16.320492+0000 mon.b (mon.1) 3 : audit [DBG] from='client.? 192.168.123.104:0/934654686' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:16 vm04 bash[20742]: cluster 2026-03-10T10:09:15.607290+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:16 vm04 bash[20742]: cluster 2026-03-10T10:09:15.607290+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:16 vm04 bash[20742]: audit 2026-03-10T10:09:16.320492+0000 mon.b (mon.1) 3 : audit [DBG] from='client.? 192.168.123.104:0/934654686' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:16 vm04 bash[20742]: audit 2026-03-10T10:09:16.320492+0000 mon.b (mon.1) 3 : audit [DBG] from='client.? 
192.168.123.104:0/934654686' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:18.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:18 vm04 bash[28289]: cluster 2026-03-10T10:09:17.607451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:18.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:18 vm04 bash[28289]: cluster 2026-03-10T10:09:17.607451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:18 vm04 bash[20742]: cluster 2026-03-10T10:09:17.607451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:18 vm04 bash[20742]: cluster 2026-03-10T10:09:17.607451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:19.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:18 vm07 bash[23367]: cluster 2026-03-10T10:09:17.607451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:19.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:18 vm07 bash[23367]: cluster 2026-03-10T10:09:17.607451+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:20.161 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config 2026-03-10T10:09:20.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.291+0000 7f65bf01f640 1 -- 192.168.123.107:0/2593053086 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65b8105f60 msgr2=0x7f65b81063e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:20.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.291+0000 7f65bf01f640 1 --2- 192.168.123.107:0/2593053086 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65b8105f60 0x7f65b81063e0 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f65a8009a30 tx=0x7f65a802f240 comp rx=0 tx=0).stop 2026-03-10T10:09:20.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.291+0000 7f65bf01f640 1 -- 192.168.123.107:0/2593053086 shutdown_connections 2026-03-10T10:09:20.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.291+0000 7f65bf01f640 1 --2- 192.168.123.107:0/2593053086 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f65b8106920 0x7f65b810d1b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:20.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.291+0000 7f65bf01f640 1 --2- 192.168.123.107:0/2593053086 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65b8105f60 0x7f65b81063e0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:20.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.291+0000 7f65bf01f640 1 --2- 192.168.123.107:0/2593053086 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f65b8104d60 0x7f65b8105160 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:20.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 -- 192.168.123.107:0/2593053086 >> 
192.168.123.107:0/2593053086 conn(0x7f65b81004d0 msgr2=0x7f65b8102930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:09:20.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 -- 192.168.123.107:0/2593053086 shutdown_connections 2026-03-10T10:09:20.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 -- 192.168.123.107:0/2593053086 wait complete. 2026-03-10T10:09:20.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 Processor -- start 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 -- start start 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f65b8104d60 0x7f65b819c730 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65b8105f60 0x7f65b819cc70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f65b8106920 0x7f65b81a3cf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f65b810ff60 con 0x7f65b8106920 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f65b810fde0 con 0x7f65b8105f60 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f65b81100e0 con 0x7f65b8104d60 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bcd94640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f65b8104d60 0x7f65b819c730 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bcd94640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f65b8104d60 0x7f65b819c730 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.107:36606/0 (socket says 192.168.123.107:36606) 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bcd94640 1 -- 192.168.123.107:0/3326429933 learned_addr learned my addr 192.168.123.107:0/3326429933 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-10T10:09:20.296 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bd595640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f65b8106920 0x7f65b81a3cf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:09:20.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65affff640 1 --2- 192.168.123.107:0/3326429933 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65b8105f60 0x7f65b819cc70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:09:20.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bcd94640 1 -- 192.168.123.107:0/3326429933 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65b8105f60 msgr2=0x7f65b819cc70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:20.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bcd94640 1 --2- 192.168.123.107:0/3326429933 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65b8105f60 0x7f65b819cc70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:20.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bcd94640 1 -- 192.168.123.107:0/3326429933 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f65b8106920 msgr2=0x7f65b81a3cf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:20.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bcd94640 1 --2- 192.168.123.107:0/3326429933 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f65b8106920 0x7f65b81a3cf0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:20.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bcd94640 1 -- 192.168.123.107:0/3326429933 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f65b81a43f0 con 0x7f65b8104d60 2026-03-10T10:09:20.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bd595640 1 --2- 192.168.123.107:0/3326429933 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f65b8106920 0x7f65b81a3cf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T10:09:20.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65affff640 1 --2- 192.168.123.107:0/3326429933 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65b8105f60 0x7f65b819cc70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
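Note: the placement string passed to 'ceph orch apply mgr' above, '2;vm04=y;vm07=x', is a daemon count plus explicit host pins; each 'host=name' element uses cephadm's hostname[=daemon-id] placement form, which is why the daemons come up as mgr.y on vm04 and mgr.x on vm07 (hedged reading; the exact grammar is cephadm's PlacementSpec parsing). Sketch of the apply plus a convergence check:

    # sketch: apply the same mgr placement, then confirm 2/2 daemons
    sudo cephadm shell -- ceph orch apply mgr '2;vm04=y;vm07=x'
    sudo cephadm shell -- ceph orch ls mgr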
2026-03-10T10:09:20.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bcd94640 1 --2- 192.168.123.107:0/3326429933 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f65b8104d60 0x7f65b819c730 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f65a000b570 tx=0x7f65a000ba40 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:09:20.297 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65adffb640 1 -- 192.168.123.107:0/3326429933 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f65a0013020 con 0x7f65b8104d60 2026-03-10T10:09:20.298 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 -- 192.168.123.107:0/3326429933 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f65b81a46e0 con 0x7f65b8104d60 2026-03-10T10:09:20.298 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65bf01f640 1 -- 192.168.123.107:0/3326429933 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f65b81a4cf0 con 0x7f65b8104d60 2026-03-10T10:09:20.298 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65adffb640 1 -- 192.168.123.107:0/3326429933 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f65a0004480 con 0x7f65b8104d60 2026-03-10T10:09:20.298 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.295+0000 7f65adffb640 1 -- 192.168.123.107:0/3326429933 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f65a000fab0 con 0x7f65b8104d60 2026-03-10T10:09:20.299 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.299+0000 7f65adffb640 1 -- 192.168.123.107:0/3326429933 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 13) ==== 50271+0+0 (secure 0 0 0) 0x7f65a0020020 con 0x7f65b8104d60 2026-03-10T10:09:20.299 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.299+0000 7f65adffb640 1 --2- 192.168.123.107:0/3326429933 >> v2:192.168.123.104:6800/632047608 conn(0x7f658c03dde0 0x7f658c0402a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:09:20.299 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.299+0000 7f65adffb640 1 -- 192.168.123.107:0/3326429933 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7f65a00516c0 con 0x7f65b8104d60 2026-03-10T10:09:20.299 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.299+0000 7f65bf01f640 1 -- 192.168.123.107:0/3326429933 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f65b81051e0 con 0x7f65b8104d60 2026-03-10T10:09:20.299 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.299+0000 7f65affff640 1 --2- 192.168.123.107:0/3326429933 >> v2:192.168.123.104:6800/632047608 conn(0x7f658c03dde0 0x7f658c0402a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:09:20.300 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.299+0000 7f65affff640 1 --2- 192.168.123.107:0/3326429933 >> v2:192.168.123.104:6800/632047608 conn(0x7f658c03dde0 0x7f658c0402a0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f65a8002ae0 tx=0x7f65a8005810 comp rx=0 tx=0).ready 
entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:09:20.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.303+0000 7f65adffb640 1 -- 192.168.123.107:0/3326429933 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f65a001b120 con 0x7f65b8104d60 2026-03-10T10:09:20.407 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.407+0000 7f65bf01f640 1 -- 192.168.123.107:0/3326429933 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm04=y;vm07=x", "target": ["mon-mgr", ""]}) -- 0x7f65b80630c0 con 0x7f658c03dde0 2026-03-10T10:09:20.413 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.411+0000 7f65adffb640 1 -- 192.168.123.107:0/3326429933 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+24 (secure 0 0 0) 0x7f65b80630c0 con 0x7f658c03dde0 2026-03-10T10:09:20.413 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled mgr update... 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 -- 192.168.123.107:0/3326429933 >> v2:192.168.123.104:6800/632047608 conn(0x7f658c03dde0 msgr2=0x7f658c0402a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 --2- 192.168.123.107:0/3326429933 >> v2:192.168.123.104:6800/632047608 conn(0x7f658c03dde0 0x7f658c0402a0 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f65a8002ae0 tx=0x7f65a8005810 comp rx=0 tx=0).stop 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 -- 192.168.123.107:0/3326429933 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f65b8104d60 msgr2=0x7f65b819c730 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 --2- 192.168.123.107:0/3326429933 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f65b8104d60 0x7f65b819c730 secure :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0x7f65a000b570 tx=0x7f65a000ba40 comp rx=0 tx=0).stop 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 -- 192.168.123.107:0/3326429933 shutdown_connections 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 --2- 192.168.123.107:0/3326429933 >> v2:192.168.123.104:6800/632047608 conn(0x7f658c03dde0 0x7f658c0402a0 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 --2- 192.168.123.107:0/3326429933 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f65b8106920 0x7f65b81a3cf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 --2- 192.168.123.107:0/3326429933 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f65b8105f60 0x7f65b819cc70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 --2- 
192.168.123.107:0/3326429933 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f65b8104d60 0x7f65b819c730 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 -- 192.168.123.107:0/3326429933 >> 192.168.123.107:0/3326429933 conn(0x7f65b81004d0 msgr2=0x7f65b8101fe0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 -- 192.168.123.107:0/3326429933 shutdown_connections 2026-03-10T10:09:20.416 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:09:20.415+0000 7f65bf01f640 1 -- 192.168.123.107:0/3326429933 wait complete. 2026-03-10T10:09:20.498 DEBUG:teuthology.orchestra.run.vm07:mgr.x> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mgr.x.service 2026-03-10T10:09:20.499 INFO:tasks.cephadm:Deploying OSDs... 2026-03-10T10:09:20.499 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:09:20.499 DEBUG:teuthology.orchestra.run.vm04:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T10:09:20.502 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T10:09:20.502 DEBUG:teuthology.orchestra.run.vm04:> ls /dev/[sv]d? 2026-03-10T10:09:20.546 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vda 2026-03-10T10:09:20.546 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdb 2026-03-10T10:09:20.546 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdc 2026-03-10T10:09:20.546 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdd 2026-03-10T10:09:20.546 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vde 2026-03-10T10:09:20.546 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T10:09:20.546 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T10:09:20.546 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdb 2026-03-10T10:09:20.590 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdb 2026-03-10T10:09:20.590 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T10:09:20.590 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-10T10:09:20.590 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T10:09:20.590 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-10 10:02:25.841085858 +0000 2026-03-10T10:09:20.590 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-10 10:02:24.805085858 +0000 2026-03-10T10:09:20.590 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-10 10:02:24.805085858 +0000 2026-03-10T10:09:20.590 INFO:teuthology.orchestra.run.vm04.stdout: Birth: - 2026-03-10T10:09:20.590 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T10:09:20.637 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-10T10:09:20.637 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-10T10:09:20.637 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000122249 s, 4.2 MB/s 2026-03-10T10:09:20.638 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T10:09:20.679 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:20 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:20.684 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdc 2026-03-10T10:09:20.730 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdc 2026-03-10T10:09:20.730 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T10:09:20.730 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-10T10:09:20.730 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T10:09:20.730 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-10 10:02:25.853085858 +0000 2026-03-10T10:09:20.730 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-10 10:02:24.805085858 +0000 2026-03-10T10:09:20.730 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-10 10:02:24.805085858 +0000 2026-03-10T10:09:20.730 INFO:teuthology.orchestra.run.vm04.stdout: Birth: - 2026-03-10T10:09:20.730 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: cluster 2026-03-10T10:09:19.607729+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: cluster 2026-03-10T10:09:19.607729+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.413988+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.413988+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.414938+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.414938+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.415796+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.415796+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.416189+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.416189+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.419708+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.419708+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.421017+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.421017+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.422859+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.422859+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.425448+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.425448+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.425925+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 bash[23367]: audit 2026-03-10T10:09:20.425925+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:20.778 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-10T10:09:20.778 
INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-10T10:09:20.778 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000159919 s, 3.2 MB/s 2026-03-10T10:09:20.778 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T10:09:20.823 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdd 2026-03-10T10:09:20.870 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdd 2026-03-10T10:09:20.870 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T10:09:20.870 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-10T10:09:20.870 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T10:09:20.870 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-10 10:02:25.841085858 +0000 2026-03-10T10:09:20.870 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-10 10:02:24.805085858 +0000 2026-03-10T10:09:20.870 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-10 10:02:24.805085858 +0000 2026-03-10T10:09:20.870 INFO:teuthology.orchestra.run.vm04.stdout: Birth: - 2026-03-10T10:09:20.870 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: cluster 2026-03-10T10:09:19.607729+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: cluster 2026-03-10T10:09:19.607729+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.413988+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.413988+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.414938+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.414938+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.415796+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.415796+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.416189+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 
192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.416189+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.419708+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.419708+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.421017+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.421017+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.422859+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.422859+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.425448+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.425448+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.425925+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:20 vm04 bash[20742]: audit 2026-03-10T10:09:20.425925+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:20 vm04 bash[28289]: cluster 
2026-03-10T10:09:19.607729+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:20 vm04 bash[28289]: audit 2026-03-10T10:09:20.413988+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:20 vm04 bash[28289]: audit 2026-03-10T10:09:20.414938+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:20 vm04 bash[28289]: audit 2026-03-10T10:09:20.415796+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:20 vm04 bash[28289]: audit 2026-03-10T10:09:20.416189+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:20 vm04 bash[28289]: audit 2026-03-10T10:09:20.419708+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:20 vm04 bash[28289]: audit 2026-03-10T10:09:20.421017+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:20 vm04 bash[28289]: audit 2026-03-10T10:09:20.422859+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:20 vm04 bash[28289]: audit 2026-03-10T10:09:20.425448+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T10:09:20.918 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:20 vm04 bash[28289]: audit 2026-03-10T10:09:20.425925+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:20.919 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-10T10:09:20.919 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-10T10:09:20.919 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000133219 s, 3.8 MB/s
2026-03-10T10:09:20.919 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T10:09:20.963 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vde
2026-03-10T10:09:21.011 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vde
2026-03-10T10:09:21.011 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T10:09:21.011 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-10T10:09:21.011 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:09:21.011 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-10 10:02:25.849085858 +0000
2026-03-10T10:09:21.011 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-10 10:02:24.793085858 +0000
2026-03-10T10:09:21.011 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-10 10:02:24.793085858 +0000
2026-03-10T10:09:21.011 INFO:teuthology.orchestra.run.vm04.stdout: Birth: -
2026-03-10T10:09:21.011 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T10:09:21.057 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in
2026-03-10T10:09:21.057 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out
2026-03-10T10:09:21.057 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000112862 s, 4.5 MB/s
2026-03-10T10:09:21.058 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T10:09:21.103 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T10:09:21.267 DEBUG:teuthology.orchestra.run.vm07:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T10:09:21.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:20 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:09:21.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:21 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:09:21.267 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:20 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
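
NOTE: the repeated KillMode warnings above come from the cephadm-generated unit /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service, which this build ships with KillMode=none at line 23; systemd re-logs the warning each time the unit is loaded, so the repeats are expected noise rather than a failure. Purely as an illustration (the harness does not do this, and cephadm may regenerate the unit and undo it), a drop-in could override the setting on a lab node:

    # hypothetical override, not part of the test run
    sudo mkdir -p /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d/10-killmode.conf
    sudo systemctl daemon-reload
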
2026-03-10T10:09:21.267 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:21 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:09:21.269 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T10:09:21.270 DEBUG:teuthology.orchestra.run.vm07:> ls /dev/[sv]d?
2026-03-10T10:09:21.313 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vda
2026-03-10T10:09:21.313 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdb
2026-03-10T10:09:21.313 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdc
2026-03-10T10:09:21.313 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdd
2026-03-10T10:09:21.313 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vde
2026-03-10T10:09:21.313 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T10:09:21.313 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T10:09:21.313 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdb
2026-03-10T10:09:21.358 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdb
2026-03-10T10:09:21.358 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T10:09:21.358 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-10T10:09:21.358 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:09:21.358 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 10:02:57.233007312 +0000
2026-03-10T10:09:21.358 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 10:02:56.145007312 +0000
2026-03-10T10:09:21.358 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 10:02:56.145007312 +0000
2026-03-10T10:09:21.358 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T10:09:21.358 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T10:09:21.405 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T10:09:21.405 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T10:09:21.405 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000132859 s, 3.9 MB/s
2026-03-10T10:09:21.406 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T10:09:21.451 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdc
2026-03-10T10:09:21.497 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdc
2026-03-10T10:09:21.497 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T10:09:21.497 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-10T10:09:21.497 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:09:21.497 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 10:02:57.241007312 +0000
2026-03-10T10:09:21.497 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 10:02:56.145007312 +0000
2026-03-10T10:09:21.497 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 10:02:56.145007312 +0000
2026-03-10T10:09:21.497 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T10:09:21.497 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T10:09:21.512 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:21 vm07 systemd[1]: Started Ceph mgr.x for e4c1c9d6-1c68-11f1-a9bd-116050875839.
2026-03-10T10:09:21.528 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T10:09:21.528 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T10:09:21.528 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000146063 s, 3.5 MB/s
2026-03-10T10:09:21.528 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T10:09:21.575 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdd
2026-03-10T10:09:21.622 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdd
2026-03-10T10:09:21.622 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T10:09:21.622 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-10T10:09:21.622 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:09:21.622 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 10:02:57.229007312 +0000
2026-03-10T10:09:21.622 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 10:02:56.145007312 +0000
2026-03-10T10:09:21.622 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 10:02:56.145007312 +0000
2026-03-10T10:09:21.622 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T10:09:21.622 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T10:09:21.669 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T10:09:21.669 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T10:09:21.669 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000153226 s, 3.3 MB/s
2026-03-10T10:09:21.669 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T10:09:21.714 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vde
2026-03-10T10:09:21.757 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vde
2026-03-10T10:09:21.830 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T10:09:21.830 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-10T10:09:21.830 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T10:09:21.830 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 10:02:57.241007312 +0000
2026-03-10T10:09:21.830 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 10:02:56.145007312 +0000
2026-03-10T10:09:21.830 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 10:02:56.145007312 +0000
2026-03-10T10:09:21.830 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T10:09:21.830 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T10:09:21.836 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T10:09:21.854 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T10:09:21.854 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000123382 s, 4.1 MB/s
2026-03-10T10:09:21.854 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T10:09:21.898 INFO:tasks.cephadm:Deploying osd.0 on vm04 with /dev/vde...
2026-03-10T10:09:21.913 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- lvm zap /dev/vde
2026-03-10T10:09:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:22 vm04 bash[28289]: audit 2026-03-10T10:09:20.409146+0000 mgr.y (mgr.14150) 53 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm04=y;vm07=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:22 vm04 bash[28289]: cephadm 2026-03-10T10:09:20.409966+0000 mgr.y (mgr.14150) 54 : cephadm [INF] Saving service mgr spec with placement vm04=y;vm07=x;count:2
2026-03-10T10:09:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:22 vm04 bash[28289]: cephadm 2026-03-10T10:09:20.426458+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Deploying daemon mgr.x on vm07
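
NOTE: the stat / dd / mount sequences interleaved with the journal chatter above are teuthology vetting each candidate scratch disk before cephadm gets it: the path must be a block special file, its first 512-byte sector must be readable, and it must not be mounted anywhere (the dd if=/scratch_devs probe exited 1, so the device list fell back to ls /dev/[sv]d? with the root disk /dev/vda removed). A minimal re-creation of the per-device probe, assuming the same /dev/vdb-/dev/vde list:

    # sketch of the probe sequence seen above, not the harness code itself
    for dev in /dev/vdb /dev/vdc /dev/vdd /dev/vde; do
        stat "$dev"                                  # exists and is a block special file
        sudo dd if="$dev" of=/dev/null count=1       # first 512-byte sector is readable
        ! mount | grep -v devtmpfs | grep -q "$dev"  # not mounted anywhere (devtmpfs lines ignored)
    done
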
2026-03-10T10:09:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:22 vm04 bash[20742]: audit 2026-03-10T10:09:20.409146+0000 mgr.y (mgr.14150) 53 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm04=y;vm07=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:22 vm04 bash[20742]: cephadm 2026-03-10T10:09:20.409966+0000 mgr.y (mgr.14150) 54 : cephadm [INF] Saving service mgr spec with placement vm04=y;vm07=x;count:2
2026-03-10T10:09:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:22 vm04 bash[20742]: cephadm 2026-03-10T10:09:20.426458+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Deploying daemon mgr.x on vm07
2026-03-10T10:09:22.957 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:22 vm07 bash[23367]: audit 2026-03-10T10:09:20.409146+0000 mgr.y (mgr.14150) 53 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm04=y;vm07=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:22.957 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:22 vm07 bash[23367]: cephadm 2026-03-10T10:09:20.409966+0000 mgr.y (mgr.14150) 54 : cephadm [INF] Saving service mgr spec with placement vm04=y;vm07=x;count:2
2026-03-10T10:09:22.957 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:22 vm07 bash[23367]: cephadm 2026-03-10T10:09:20.426458+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Deploying daemon mgr.x on vm07
2026-03-10T10:09:23.263 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:22 vm07 bash[24071]: debug 2026-03-10T10:09:22.955+0000 7f6b95a7e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T10:09:23.263 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:22 vm07 bash[24071]: debug 2026-03-10T10:09:22.991+0000 7f6b95a7e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T10:09:23.263 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:23 vm07 bash[24071]: debug 2026-03-10T10:09:23.107+0000 7f6b95a7e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:23 vm04 bash[28289]: cluster 2026-03-10T10:09:21.607899+0000 mgr.y (mgr.14150) 56 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:23 vm04 bash[28289]: audit 2026-03-10T10:09:22.065394+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:23 vm04 bash[28289]: audit 2026-03-10T10:09:22.807059+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:23 vm04 bash[28289]: audit 2026-03-10T10:09:22.813246+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:23 vm04 bash[28289]: audit 2026-03-10T10:09:22.826556+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:23 vm04 bash[28289]: audit 2026-03-10T10:09:22.836921+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:23 vm04 bash[20742]: cluster 2026-03-10T10:09:21.607899+0000 mgr.y (mgr.14150) 56 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:23 vm04 bash[20742]: audit 2026-03-10T10:09:22.065394+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:23 vm04 bash[20742]: audit 2026-03-10T10:09:22.807059+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:23 vm04 bash[20742]: audit 2026-03-10T10:09:22.813246+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:23 vm04 bash[20742]: audit 2026-03-10T10:09:22.826556+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:23 vm04 bash[20742]: audit 2026-03-10T10:09:22.836921+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:23.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:23 vm07 bash[23367]: cluster 2026-03-10T10:09:21.607899+0000 mgr.y (mgr.14150) 56 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:23.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:23 vm07 bash[23367]: audit 2026-03-10T10:09:22.065394+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:23 vm07 bash[23367]: audit 2026-03-10T10:09:22.807059+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
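
NOTE: the orch apply audit entries relayed above record the harness pinning two mgr daemons with the placement string 2;vm04=y;vm07=x (count 2, daemon y on vm04, daemon x on vm07), hence the "Saving service mgr spec with placement vm04=y;vm07=x;count:2" cephadm line. Assuming standard cephadm placement syntax, the same spec could be written inline or as a service spec file (mgr.yaml is a hypothetical filename):

    # inline form, same placement string as in the audit log
    ceph orch apply mgr '2;vm04=y;vm07=x'

    # equivalent spec-file form
    cat > mgr.yaml <<'EOF'
    service_type: mgr
    placement:
      count: 2
      hosts:
        - vm04=y
        - vm07=x
    EOF
    ceph orch apply -i mgr.yaml
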
2026-03-10T10:09:23.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:23 vm07 bash[23367]: audit 2026-03-10T10:09:22.813246+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:23 vm07 bash[23367]: audit 2026-03-10T10:09:22.826556+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:23.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:23 vm07 bash[23367]: audit 2026-03-10T10:09:22.836921+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:23.763 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:23 vm07 bash[24071]: debug 2026-03-10T10:09:23.395+0000 7f6b95a7e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T10:09:24.184 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:23 vm07 bash[24071]: debug 2026-03-10T10:09:23.831+0000 7f6b95a7e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T10:09:24.184 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:23 vm07 bash[24071]: debug 2026-03-10T10:09:23.911+0000 7f6b95a7e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T10:09:24.184 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T10:09:24.184 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T10:09:24.184 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: from numpy import show_config as show_numpy_config
2026-03-10T10:09:24.184 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: debug 2026-03-10T10:09:24.035+0000 7f6b95a7e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T10:09:24.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:24 vm07 bash[23367]: cluster 2026-03-10T10:09:23.608167+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:24.513 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: debug 2026-03-10T10:09:24.183+0000 7f6b95a7e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T10:09:24.513 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: debug 2026-03-10T10:09:24.223+0000 7f6b95a7e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T10:09:24.513 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: debug 2026-03-10T10:09:24.263+0000 7f6b95a7e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T10:09:24.513 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: debug 2026-03-10T10:09:24.307+0000 7f6b95a7e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T10:09:24.513 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: debug 2026-03-10T10:09:24.375+0000 7f6b95a7e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T10:09:24.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:24 vm04 bash[28289]: cluster 2026-03-10T10:09:23.608167+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:24.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:24 vm04 bash[20742]: cluster 2026-03-10T10:09:23.608167+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:25.080 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: debug 2026-03-10T10:09:24.815+0000 7f6b95a7e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T10:09:25.080 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: debug 2026-03-10T10:09:24.855+0000 7f6b95a7e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T10:09:25.080 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:24 vm07 bash[24071]: debug 2026-03-10T10:09:24.891+0000 7f6b95a7e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T10:09:25.080 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:25 vm07 bash[24071]: debug 2026-03-10T10:09:25.039+0000 7f6b95a7e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T10:09:25.381 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:25 vm07 bash[24071]: debug 2026-03-10T10:09:25.079+0000 7f6b95a7e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T10:09:25.381 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:25 vm07 bash[24071]: debug 2026-03-10T10:09:25.119+0000 7f6b95a7e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T10:09:25.381 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:25 vm07 bash[24071]: debug 2026-03-10T10:09:25.231+0000 7f6b95a7e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T10:09:25.635 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:25 vm07 bash[24071]: debug 2026-03-10T10:09:25.379+0000 7f6b95a7e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T10:09:25.635 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:25 vm07 bash[24071]: debug 2026-03-10T10:09:25.551+0000 7f6b95a7e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T10:09:25.635 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:25 vm07 bash[24071]: debug 2026-03-10T10:09:25.587+0000 7f6b95a7e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T10:09:25.635 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:25 vm07 bash[24071]: debug 2026-03-10T10:09:25.631+0000 7f6b95a7e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T10:09:26.013 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:25 vm07 bash[24071]: debug 2026-03-10T10:09:25.791+0000 7f6b95a7e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T10:09:26.513 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:09:26 vm07 bash[24071]: debug 2026-03-10T10:09:26.031+0000 7f6b95a7e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T10:09:26.528 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:26 vm04 bash[28289]: audit 2026-03-10T10:09:25.473907+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:26 vm04 bash[28289]: cluster 2026-03-10T10:09:25.608355+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:26 vm04 bash[28289]: cluster 2026-03-10T10:09:26.035541+0000 mon.a (mon.0) 248 : cluster [DBG] Standby manager daemon x started
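
NOTE: at this point mgr.x has started on vm07, worked through its module-load pass (the "missing NOTIFY_TYPES member" lines are load-time debug noise from modules that do not declare that attribute, not errors), and registered as standby to the active mgr.y. As an aside, two stock ceph CLI spot-checks would confirm this state on a lab node:

    ceph mgr stat        # active mgr name and standby count
    ceph mgr module ls   # enabled/disabled mgr modules
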
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:26 vm04 bash[28289]: audit 2026-03-10T10:09:26.038521+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:26 vm04 bash[28289]: audit 2026-03-10T10:09:26.038917+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:26 vm04 bash[28289]: audit 2026-03-10T10:09:26.039522+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:26 vm04 bash[28289]: audit 2026-03-10T10:09:26.039850+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:26 vm04 bash[20742]: audit 2026-03-10T10:09:25.473907+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:26 vm04 bash[20742]: cluster 2026-03-10T10:09:25.608355+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:26 vm04 bash[20742]: cluster 2026-03-10T10:09:26.035541+0000 mon.a (mon.0) 248 : cluster [DBG] Standby manager daemon x started
2026-03-10T10:09:27.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:26 vm04 bash[20742]: audit 2026-03-10T10:09:26.038521+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T10:09:27.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:26 vm04 bash[20742]: audit 2026-03-10T10:09:26.038917+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T10:09:27.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:26 vm04 bash[20742]: audit 2026-03-10T10:09:26.039522+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T10:09:27.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:26 vm04 bash[20742]: audit 2026-03-10T10:09:26.039850+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T10:09:27.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:26 vm07 bash[23367]: audit 2026-03-10T10:09:25.473907+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:27.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:26 vm07 bash[23367]: cluster 2026-03-10T10:09:25.608355+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:27.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:26 vm07 bash[23367]: cluster 2026-03-10T10:09:26.035541+0000 mon.a (mon.0) 248 : cluster [DBG] Standby manager daemon x started
2026-03-10T10:09:27.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:26 vm07 bash[23367]: audit 2026-03-10T10:09:26.038521+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T10:09:27.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:26 vm07 bash[23367]: audit 2026-03-10T10:09:26.038917+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T10:09:27.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:26 vm07 bash[23367]: audit 2026-03-10T10:09:26.039522+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T10:09:27.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:26 vm07 bash[23367]: audit 2026-03-10T10:09:26.039850+0000 mon.b (mon.1) 7 : audit [DBG] from='mgr.? 192.168.123.107:0/1718886795' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T10:09:27.727 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:09:27.744 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch daemon add osd vm04:/dev/vde
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: cluster 2026-03-10T10:09:26.846451+0000 mon.a (mon.0) 249 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: audit 2026-03-10T10:09:26.846882+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: cluster 2026-03-10T10:09:27.608562+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: audit 2026-03-10T10:09:27.780776+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: audit 2026-03-10T10:09:27.786935+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
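
NOTE: together with the lvm zap issued earlier, the ceph orch daemon add command above completes the two-step flow this task uses for each OSD device: wipe the disk through ceph-volume, then hand it to the orchestrator as host:device. Stripped of the test-specific --image/-c/-k/--fsid plumbing, the pair reduces to roughly:

    # sketch of the two-step OSD add seen in this log, flags trimmed for clarity
    sudo cephadm ceph-volume -- lvm zap /dev/vde
    sudo cephadm shell -- ceph orch daemon add osd vm04:/dev/vde
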
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: audit 2026-03-10T10:09:27.788198+0000 mon.a (mon.0) 253 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: audit 2026-03-10T10:09:27.788791+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: audit 2026-03-10T10:09:27.793513+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: audit 2026-03-10T10:09:27.805811+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: audit 2026-03-10T10:09:27.806472+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T10:09:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:27 vm04 bash[20742]: audit 2026-03-10T10:09:27.806949+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: cluster 2026-03-10T10:09:26.846451+0000 mon.a (mon.0) 249 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: audit 2026-03-10T10:09:26.846882+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: cluster 2026-03-10T10:09:27.608562+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: audit 2026-03-10T10:09:27.780776+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: audit 2026-03-10T10:09:27.786935+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: audit 2026-03-10T10:09:27.788198+0000 mon.a (mon.0) 253 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: audit 2026-03-10T10:09:27.788791+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: audit 2026-03-10T10:09:27.793513+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: audit 2026-03-10T10:09:27.805811+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: audit 2026-03-10T10:09:27.806472+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T10:09:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:27 vm04 bash[28289]: audit 2026-03-10T10:09:27.806949+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: cluster 2026-03-10T10:09:26.846451+0000 mon.a (mon.0) 249 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: audit 2026-03-10T10:09:26.846882+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: cluster 2026-03-10T10:09:27.608562+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: audit 2026-03-10T10:09:27.780776+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: audit 2026-03-10T10:09:27.786935+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: audit 2026-03-10T10:09:27.788198+0000 mon.a (mon.0) 253 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: audit 2026-03-10T10:09:27.788791+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: audit 2026-03-10T10:09:27.793513+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: audit 2026-03-10T10:09:27.805811+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: audit 2026-03-10T10:09:27.806472+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T10:09:28.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:27 vm07 bash[23367]: audit 2026-03-10T10:09:27.806949+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: cephadm 2026-03-10T10:09:27.805542+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)...
2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: cephadm 2026-03-10T10:09:27.807572+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm04
2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: audit 2026-03-10T10:09:28.223606+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: audit 2026-03-10T10:09:28.229419+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: audit 2026-03-10T10:09:28.230313+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: audit 2026-03-10T10:09:28.231171+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config
generate-minimal-conf"}]: dispatch 2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: audit 2026-03-10T10:09:28.231171+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: audit 2026-03-10T10:09:28.231576+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: audit 2026-03-10T10:09:28.231576+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: audit 2026-03-10T10:09:28.238231+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:29 vm07 bash[23367]: audit 2026-03-10T10:09:28.238231+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: cephadm 2026-03-10T10:09:27.805542+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-10T10:09:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: cephadm 2026-03-10T10:09:27.805542+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: cephadm 2026-03-10T10:09:27.807572+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm04 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: cephadm 2026-03-10T10:09:27.807572+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm04 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.223606+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.223606+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.229419+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.229419+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.230313+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.230313+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.231171+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.231171+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.231576+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.231576+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.238231+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:29 vm04 bash[20742]: audit 2026-03-10T10:09:28.238231+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.704 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: cephadm 2026-03-10T10:09:27.805542+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: cephadm 2026-03-10T10:09:27.805542+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: cephadm 2026-03-10T10:09:27.807572+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm04 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: cephadm 2026-03-10T10:09:27.807572+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm04 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: audit 2026-03-10T10:09:28.223606+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: audit 2026-03-10T10:09:28.223606+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: audit 2026-03-10T10:09:28.229419+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: audit 2026-03-10T10:09:28.229419+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: audit 2026-03-10T10:09:28.230313+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: audit 2026-03-10T10:09:28.230313+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: audit 2026-03-10T10:09:28.231171+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: audit 2026-03-10T10:09:28.231171+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: audit 2026-03-10T10:09:28.231576+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:29 vm04 bash[28289]: audit 2026-03-10T10:09:28.231576+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:29.704 
2026-03-10T10:09:30.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:30 vm07 bash[23367]: cluster 2026-03-10T10:09:29.608793+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:30 vm04 bash[20742]: cluster 2026-03-10T10:09:29.608793+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:30.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:30 vm04 bash[28289]: cluster 2026-03-10T10:09:29.608793+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:31 vm04 bash[20742]: cluster 2026-03-10T10:09:31.609020+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:31 vm04 bash[28289]: cluster 2026-03-10T10:09:31.609020+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:32.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:31 vm07 bash[23367]: cluster 2026-03-10T10:09:31.609020+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:32.401 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:09:32.785 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 -- 192.168.123.104:0/3211771310 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6d400770a0 msgr2=0x7f6d40075500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:09:32.785 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 --2- 192.168.123.104:0/3211771310 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6d400770a0 0x7f6d40075500 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f6d34009a30 tx=0x7f6d3402f220 comp rx=0 tx=0).stop
2026-03-10T10:09:32.785 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 -- 192.168.123.104:0/3211771310 shutdown_connections
2026-03-10T10:09:32.785 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 --2- 192.168.123.104:0/3211771310 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d4010a9d0 0x7f6d4010ce90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:32.785 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 --2- 192.168.123.104:0/3211771310 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6d40075a40 0x7f6d40075ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:32.785 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 --2- 192.168.123.104:0/3211771310 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6d400770a0 0x7f6d40075500 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:32.785 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 -- 192.168.123.104:0/3211771310 >> 192.168.123.104:0/3211771310 conn(0x7f6d400fe290 msgr2=0x7f6d401006b0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:09:32.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 -- 192.168.123.104:0/3211771310 shutdown_connections
2026-03-10T10:09:32.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 -- 192.168.123.104:0/3211771310 wait complete.
2026-03-10T10:09:32.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 Processor -- start
2026-03-10T10:09:32.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 -- start start
2026-03-10T10:09:32.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d40075a40 0x7f6d4019c520 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:09:32.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6d400770a0 0x7f6d4019ca60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:09:32.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6d4010a9d0 0x7f6d401a3ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:09:32.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f6d4010fe30 con 0x7f6d4010a9d0
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f6d4010fcb0 con 0x7f6d40075a40
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d44cd3640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f6d4010ffb0 con 0x7f6d400770a0
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d3e575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d40075a40 0x7f6d4019c520 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d3e575640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d40075a40 0x7f6d4019c520 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.104:46306/0 (socket says 192.168.123.104:46306)
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d3e575640 1 -- 192.168.123.104:0/2620094738 learned_addr learned my addr 192.168.123.104:0/2620094738 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d3dd74640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6d400770a0 0x7f6d4019ca60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.780+0000 7f6d3ed76640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6d4010a9d0 0x7f6d401a3ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d3e575640 1 -- 192.168.123.104:0/2620094738 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6d400770a0 msgr2=0x7f6d4019ca60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d3e575640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6d400770a0 0x7f6d4019ca60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d3e575640 1 -- 192.168.123.104:0/2620094738 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6d4010a9d0 msgr2=0x7f6d401a3ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d3e575640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6d4010a9d0 0x7f6d401a3ae0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d3e575640 1 -- 192.168.123.104:0/2620094738 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6d401a4150 con 0x7f6d40075a40
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d3dd74640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6d400770a0 0x7f6d4019ca60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d3ed76640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6d4010a9d0 0x7f6d401a3ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:09:32.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d3e575640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d40075a40 0x7f6d4019c520 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f6d34009a00 tx=0x7f6d3402fd80 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:09:32.788 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d277fe640 1 -- 192.168.123.104:0/2620094738 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6d34004280 con 0x7f6d40075a40
2026-03-10T10:09:32.789 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d277fe640 1 -- 192.168.123.104:0/2620094738 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f6d34004420 con 0x7f6d40075a40
2026-03-10T10:09:32.789 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d44cd3640 1 -- 192.168.123.104:0/2620094738 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6d401a43e0 con 0x7f6d40075a40
2026-03-10T10:09:32.789 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d277fe640 1 -- 192.168.123.104:0/2620094738 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6d34004e20 con 0x7f6d40075a40
2026-03-10T10:09:32.789 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d44cd3640 1 -- 192.168.123.104:0/2620094738 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6d401a4910 con 0x7f6d40075a40
2026-03-10T10:09:32.789 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d277fe640 1 -- 192.168.123.104:0/2620094738 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 14) ==== 99944+0+0 (secure 0 0 0) 0x7f6d3404b050 con 0x7f6d40075a40
2026-03-10T10:09:32.790 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d44cd3640 1 -- 192.168.123.104:0/2620094738 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6d0c005180 con 0x7f6d40075a40
2026-03-10T10:09:32.790 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d277fe640 1 --2- 192.168.123.104:0/2620094738 >> v2:192.168.123.104:6800/632047608 conn(0x7f6d1c077570 0x7f6d1c079a30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:09:32.790 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d277fe640 1 -- 192.168.123.104:0/2620094738 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 1069+0+0 (secure 0 0 0) 0x7f6d340bca30 con 0x7f6d40075a40
2026-03-10T10:09:32.790 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d3dd74640 1 --2- 192.168.123.104:0/2620094738 >> v2:192.168.123.104:6800/632047608 conn(0x7f6d1c077570 0x7f6d1c079a30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:09:32.791 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.784+0000 7f6d3dd74640 1 --2- 192.168.123.104:0/2620094738 >> v2:192.168.123.104:6800/632047608 conn(0x7f6d1c077570 0x7f6d1c079a30 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f6d4019da40 tx=0x7f6d28005e90 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:09:32.793 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.788+0000 7f6d277fe640 1 -- 192.168.123.104:0/2620094738 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6d34087040 con 0x7f6d40075a40
2026-03-10T10:09:32.891 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:32.888+0000 7f6d44cd3640 1 -- 192.168.123.104:0/2620094738 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7f6d0c002bf0 con 0x7f6d1c077570
2026-03-10T10:09:33.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:32 vm07 bash[23367]: audit 2026-03-10T10:09:32.894172+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:09:33.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:33 vm07 bash[23367]: audit 2026-03-10T10:09:32.895640+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:09:33.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:33 vm07 bash[23367]: audit 2026-03-10T10:09:32.896233+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:32 vm04 bash[20742]: audit 2026-03-10T10:09:32.894172+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:09:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:32 vm04 bash[20742]: audit 2026-03-10T10:09:32.895640+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:09:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:32 vm04 bash[20742]: audit 2026-03-10T10:09:32.896233+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:33.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:32 vm04 bash[28289]: audit 2026-03-10T10:09:32.894172+0000 mon.a (mon.0) 265 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:09:33.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:32 vm04 bash[28289]: audit 2026-03-10T10:09:32.895640+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:09:33.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:32 vm04 bash[28289]: audit 2026-03-10T10:09:32.896233+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:34.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:34 vm07 bash[23367]: audit 2026-03-10T10:09:32.892796+0000 mgr.y (mgr.14150) 64 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:34.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:34 vm07 bash[23367]: cluster 2026-03-10T10:09:33.609240+0000 mgr.y (mgr.14150) 65 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
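(Annotation: the mgr_command and the mgr.y audit entry above are the "orch daemon add osd" call issued by the test's admin client; a minimal sketch of the equivalent CLI invocation on a cluster host, with the host:device argument taken directly from the log:

    ceph orch daemon add osd vm04:/dev/vde

cephadm then hands the device to ceph-volume on vm04, which is what the client.bootstrap-osd activity further down corresponds to.)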
2026-03-10T10:09:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:34 vm04 bash[20742]: audit 2026-03-10T10:09:32.892796+0000 mgr.y (mgr.14150) 64 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:34.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:34 vm04 bash[20742]: cluster 2026-03-10T10:09:33.609240+0000 mgr.y (mgr.14150) 65 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:34.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:34 vm04 bash[28289]: audit 2026-03-10T10:09:32.892796+0000 mgr.y (mgr.14150) 64 : audit [DBG] from='client.24115 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:09:34.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:34 vm04 bash[28289]: cluster 2026-03-10T10:09:33.609240+0000 mgr.y (mgr.14150) 65 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:36 vm04 bash[20742]: cluster 2026-03-10T10:09:35.609475+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:36 vm04 bash[28289]: cluster 2026-03-10T10:09:35.609475+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:37.262 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:36 vm07 bash[23367]: cluster 2026-03-10T10:09:35.609475+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:37 vm04 bash[28289]: cluster 2026-03-10T10:09:37.609721+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:37 vm04 bash[20742]: cluster 2026-03-10T10:09:37.609721+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:38.262 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:37 vm07 bash[23367]: cluster 2026-03-10T10:09:37.609721+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:38.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:38 vm04 bash[28289]: audit 2026-03-10T10:09:38.320931+0000 mon.a (mon.0) 268 : audit [INF] from='client.? 192.168.123.104:0/1453721414' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e28c717-cfeb-4d7d-8ed7-9136d22aff5c"}]: dispatch
2026-03-10T10:09:38.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:38 vm04 bash[28289]: audit 2026-03-10T10:09:38.334509+0000 mon.a (mon.0) 269 : audit [INF] from='client.? 192.168.123.104:0/1453721414' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e28c717-cfeb-4d7d-8ed7-9136d22aff5c"}]': finished
2026-03-10T10:09:38.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:38 vm04 bash[28289]: cluster 2026-03-10T10:09:38.336651+0000 mon.a (mon.0) 270 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T10:09:38.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:38 vm04 bash[28289]: audit 2026-03-10T10:09:38.337231+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:38 vm04 bash[20742]: audit 2026-03-10T10:09:38.320931+0000 mon.a (mon.0) 268 : audit [INF] from='client.? 192.168.123.104:0/1453721414' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e28c717-cfeb-4d7d-8ed7-9136d22aff5c"}]: dispatch
2026-03-10T10:09:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:38 vm04 bash[20742]: audit 2026-03-10T10:09:38.334509+0000 mon.a (mon.0) 269 : audit [INF] from='client.? 192.168.123.104:0/1453721414' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e28c717-cfeb-4d7d-8ed7-9136d22aff5c"}]': finished
2026-03-10T10:09:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:38 vm04 bash[20742]: cluster 2026-03-10T10:09:38.336651+0000 mon.a (mon.0) 270 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T10:09:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:38 vm04 bash[20742]: audit 2026-03-10T10:09:38.337231+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:39.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:38 vm07 bash[23367]: audit 2026-03-10T10:09:38.320931+0000 mon.a (mon.0) 268 : audit [INF] from='client.? 192.168.123.104:0/1453721414' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e28c717-cfeb-4d7d-8ed7-9136d22aff5c"}]: dispatch
2026-03-10T10:09:39.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:38 vm07 bash[23367]: audit 2026-03-10T10:09:38.334509+0000 mon.a (mon.0) 269 : audit [INF] from='client.? 192.168.123.104:0/1453721414' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e28c717-cfeb-4d7d-8ed7-9136d22aff5c"}]': finished
2026-03-10T10:09:39.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:38 vm07 bash[23367]: cluster 2026-03-10T10:09:38.336651+0000 mon.a (mon.0) 270 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T10:09:39.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:38 vm07 bash[23367]: audit 2026-03-10T10:09:38.337231+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:39 vm04 bash[28289]: audit 2026-03-10T10:09:38.938348+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.104:0/2661780019' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:09:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:39 vm04 bash[28289]: cluster 2026-03-10T10:09:39.609929+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:39 vm04 bash[20742]: audit 2026-03-10T10:09:38.938348+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.104:0/2661780019' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:09:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:39 vm04 bash[20742]: cluster 2026-03-10T10:09:39.609929+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:40.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:39 vm07 bash[23367]: audit 2026-03-10T10:09:38.938348+0000 mon.c (mon.2) 2 : audit [DBG] from='client.? 192.168.123.104:0/2661780019' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
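(Annotation: the client.bootstrap-osd entries above are the mon commands ceph-volume issues while preparing the new OSD under the bootstrap-osd identity; a hedged sketch of the equivalent manual calls, reusing the uuid from the log and assuming the conventional bootstrap keyring path:

    ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd new 8e28c717-cfeb-4d7d-8ed7-9136d22aff5c
    ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /tmp/monmap

The "osd new" call is what allocated osd.0 and bumped the osdmap to e5 above.)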
2026-03-10T10:09:40.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:39 vm07 bash[23367]: cluster 2026-03-10T10:09:39.609929+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:42 vm04 bash[20742]: cluster 2026-03-10T10:09:41.610156+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:43.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:42 vm04 bash[28289]: cluster 2026-03-10T10:09:41.610156+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:43.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:42 vm07 bash[23367]: cluster 2026-03-10T10:09:41.610156+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:43 vm04 bash[20742]: cluster 2026-03-10T10:09:43.610406+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:43 vm04 bash[28289]: cluster 2026-03-10T10:09:43.610406+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:44.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:43 vm07 bash[23367]: cluster 2026-03-10T10:09:43.610406+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:46.945 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:46 vm04 bash[28289]: cluster 2026-03-10T10:09:45.610632+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:46.945 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:46 vm04 bash[20742]: cluster 2026-03-10T10:09:45.610632+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:47.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:46 vm07 bash[23367]: cluster 2026-03-10T10:09:45.610632+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:47.742 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:47 vm04 bash[20742]: audit 2026-03-10T10:09:47.195787+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T10:09:47.742 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:47 vm04 bash[20742]: audit 2026-03-10T10:09:47.196489+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:47.743 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:47 vm04 bash[28289]: audit 2026-03-10T10:09:47.195787+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T10:09:47.743 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:47 vm04 bash[28289]: audit 2026-03-10T10:09:47.196489+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:48.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:47 vm07 bash[23367]: audit 2026-03-10T10:09:47.195787+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T10:09:48.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:47 vm07 bash[23367]: audit 2026-03-10T10:09:47.196489+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:48.351 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:09:48.351 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:09:48.351 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:09:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:48.351 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:09:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:09:48.682 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 bash[20742]: cephadm 2026-03-10T10:09:47.197007+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm04 2026-03-10T10:09:48.682 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 bash[20742]: cephadm 2026-03-10T10:09:47.197007+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm04 2026-03-10T10:09:48.682 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 bash[20742]: cluster 2026-03-10T10:09:47.610856+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:48.682 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 bash[20742]: cluster 2026-03-10T10:09:47.610856+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:48.682 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 bash[20742]: audit 2026-03-10T10:09:48.376227+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:48.682 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 bash[20742]: audit 2026-03-10T10:09:48.376227+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:48.682 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 bash[20742]: audit 2026-03-10T10:09:48.382150+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:48.682 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 bash[20742]: audit 2026-03-10T10:09:48.382150+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:48.682 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 bash[20742]: audit 2026-03-10T10:09:48.396593+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:48.682 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:48 vm04 bash[20742]: audit 2026-03-10T10:09:48.396593+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 bash[28289]: cephadm 2026-03-10T10:09:47.197007+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm04 2026-03-10T10:09:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 bash[28289]: cephadm 2026-03-10T10:09:47.197007+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm04 2026-03-10T10:09:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 bash[28289]: cluster 2026-03-10T10:09:47.610856+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:48.954 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 bash[28289]: cluster 2026-03-10T10:09:47.610856+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 bash[28289]: audit 2026-03-10T10:09:48.376227+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 bash[28289]: audit 2026-03-10T10:09:48.376227+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 bash[28289]: audit 2026-03-10T10:09:48.382150+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 bash[28289]: audit 2026-03-10T10:09:48.382150+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 bash[28289]: audit 2026-03-10T10:09:48.396593+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:48 vm04 bash[28289]: audit 2026-03-10T10:09:48.396593+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:48 vm07 bash[23367]: cephadm 2026-03-10T10:09:47.197007+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm04 2026-03-10T10:09:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:48 vm07 bash[23367]: cephadm 2026-03-10T10:09:47.197007+0000 mgr.y (mgr.14150) 72 : cephadm [INF] Deploying daemon osd.0 on vm04 2026-03-10T10:09:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:48 vm07 bash[23367]: cluster 2026-03-10T10:09:47.610856+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:48 vm07 bash[23367]: cluster 2026-03-10T10:09:47.610856+0000 mgr.y (mgr.14150) 73 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:48 vm07 bash[23367]: audit 2026-03-10T10:09:48.376227+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:48 vm07 bash[23367]: audit 2026-03-10T10:09:48.376227+0000 mon.a (mon.0) 274 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:48 vm07 bash[23367]: audit 2026-03-10T10:09:48.382150+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:48 vm07 bash[23367]: audit 2026-03-10T10:09:48.382150+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 
192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:48 vm07 bash[23367]: audit 2026-03-10T10:09:48.396593+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:48 vm07 bash[23367]: audit 2026-03-10T10:09:48.396593+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:51.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:50 vm07 bash[23367]: cluster 2026-03-10T10:09:49.611098+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:51.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:50 vm07 bash[23367]: cluster 2026-03-10T10:09:49.611098+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:50 vm04 bash[28289]: cluster 2026-03-10T10:09:49.611098+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:50 vm04 bash[28289]: cluster 2026-03-10T10:09:49.611098+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:50 vm04 bash[20742]: cluster 2026-03-10T10:09:49.611098+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:50 vm04 bash[20742]: cluster 2026-03-10T10:09:49.611098+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:52.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:52 vm04 bash[28289]: cluster 2026-03-10T10:09:51.611322+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:52.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:52 vm04 bash[28289]: cluster 2026-03-10T10:09:51.611322+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T10:09:52.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:52 vm04 bash[28289]: audit 2026-03-10T10:09:52.676331+0000 mon.c (mon.2) 3 : audit [INF] from='osd.0 v2:192.168.123.104:6801/3431285778' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T10:09:52.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:52 vm04 bash[28289]: audit 2026-03-10T10:09:52.676331+0000 mon.c (mon.2) 3 : audit [INF] from='osd.0 v2:192.168.123.104:6801/3431285778' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T10:09:52.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:52 vm04 bash[28289]: audit 2026-03-10T10:09:52.676746+0000 mon.a (mon.0) 277 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T10:09:52.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:52 vm04 bash[28289]: audit 2026-03-10T10:09:52.676746+0000 mon.a (mon.0) 277 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 
2026-03-10T10:09:52.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:52 vm04 bash[20742]: cluster 2026-03-10T10:09:51.611322+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:52.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:52 vm04 bash[20742]: audit 2026-03-10T10:09:52.676331+0000 mon.c (mon.2) 3 : audit [INF] from='osd.0 v2:192.168.123.104:6801/3431285778' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T10:09:52.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:52 vm04 bash[20742]: audit 2026-03-10T10:09:52.676746+0000 mon.a (mon.0) 277 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T10:09:53.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:52 vm07 bash[23367]: cluster 2026-03-10T10:09:51.611322+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:53.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:52 vm07 bash[23367]: audit 2026-03-10T10:09:52.676331+0000 mon.c (mon.2) 3 : audit [INF] from='osd.0 v2:192.168.123.104:6801/3431285778' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T10:09:53.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:52 vm07 bash[23367]: audit 2026-03-10T10:09:52.676746+0000 mon.a (mon.0) 277 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T10:09:54.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:53 vm07 bash[23367]: audit 2026-03-10T10:09:52.730730+0000 mon.a (mon.0) 278 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T10:09:54.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:53 vm07 bash[23367]: cluster 2026-03-10T10:09:52.733251+0000 mon.a (mon.0) 279 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-10T10:09:54.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:53 vm07 bash[23367]: audit 2026-03-10T10:09:52.733423+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:54.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:53 vm07 bash[23367]: audit 2026-03-10T10:09:52.734629+0000 mon.c (mon.2) 4 : audit [INF] from='osd.0 v2:192.168.123.104:6801/3431285778' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:09:54.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:53 vm07 bash[23367]: audit 2026-03-10T10:09:52.734874+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:09:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:53 vm04 bash[20742]: audit 2026-03-10T10:09:52.730730+0000 mon.a (mon.0) 278 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T10:09:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:53 vm04 bash[20742]: cluster 2026-03-10T10:09:52.733251+0000 mon.a (mon.0) 279 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-10T10:09:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:53 vm04 bash[20742]: audit 2026-03-10T10:09:52.733423+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:53 vm04 bash[20742]: audit 2026-03-10T10:09:52.734629+0000 mon.c (mon.2) 4 : audit [INF] from='osd.0 v2:192.168.123.104:6801/3431285778' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:09:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:53 vm04 bash[20742]: audit 2026-03-10T10:09:52.734874+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:09:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:53 vm04 bash[28289]: audit 2026-03-10T10:09:52.730730+0000 mon.a (mon.0) 278 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T10:09:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:53 vm04 bash[28289]: cluster 2026-03-10T10:09:52.733251+0000 mon.a (mon.0) 279 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in
2026-03-10T10:09:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:53 vm04 bash[28289]: audit 2026-03-10T10:09:52.733423+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:53 vm04 bash[28289]: audit 2026-03-10T10:09:52.734629+0000 mon.c (mon.2) 4 : audit [INF] from='osd.0 v2:192.168.123.104:6801/3431285778' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:09:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:53 vm04 bash[28289]: audit 2026-03-10T10:09:52.734874+0000 mon.a (mon.0) 281 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:09:54.759 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:54 vm04 bash[20742]: cluster 2026-03-10T10:09:53.611570+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:54.759 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:54 vm04 bash[20742]: audit 2026-03-10T10:09:53.737564+0000 mon.a (mon.0) 282 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-10T10:09:54.759 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:54 vm04 bash[20742]: cluster 2026-03-10T10:09:53.740090+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-10T10:09:54.759 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:54 vm04 bash[20742]: audit 2026-03-10T10:09:53.740685+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:54.759 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:54 vm04 bash[20742]: audit 2026-03-10T10:09:53.750605+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:54.759 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:54 vm04 bash[20742]: audit 2026-03-10T10:09:54.610734+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:54.759 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:54 vm04 bash[20742]: audit 2026-03-10T10:09:54.615587+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:54 vm07 bash[23367]: cluster 2026-03-10T10:09:53.611570+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:54 vm07 bash[23367]: audit 2026-03-10T10:09:53.737564+0000 mon.a (mon.0) 282 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-10T10:09:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:54 vm07 bash[23367]: cluster 2026-03-10T10:09:53.740090+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-10T10:09:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:54 vm07 bash[23367]: audit 2026-03-10T10:09:53.740685+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:54 vm07 bash[23367]: audit 2026-03-10T10:09:53.750605+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:54 vm07 bash[23367]: audit 2026-03-10T10:09:54.610734+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:54 vm07 bash[23367]: audit 2026-03-10T10:09:54.615587+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:54 vm04 bash[28289]: cluster 2026-03-10T10:09:53.611570+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:54 vm04 bash[28289]: audit 2026-03-10T10:09:53.737564+0000 mon.a (mon.0) 282 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-10T10:09:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:54 vm04 bash[28289]: cluster 2026-03-10T10:09:53.740090+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-10T10:09:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:54 vm04 bash[28289]: audit 2026-03-10T10:09:53.740685+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:54 vm04 bash[28289]: audit 2026-03-10T10:09:53.750605+0000 mon.a (mon.0) 285 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
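Note on the weight 0.0195 in the 'osd crush create-or-move' entries above: an OSD's initial CRUSH weight is its capacity expressed in TiB, and the test volumes in this run are 20 GiB (see the '20 GiB / 20 GiB avail' pgmap lines further down). A minimal check in Python; the value 20 is taken from those pgmap lines, everything else is just the unit conversion:

    # Initial CRUSH weight = device capacity in TiB (Ceph convention).
    size_gib = 20                 # capacity of this run's OSD volumes
    weight_tib = size_gib / 1024  # GiB -> TiB
    print(f"{weight_tib:.4f}")    # -> 0.0195, matching the audit entries above
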
2026-03-10T10:09:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:54 vm04 bash[28289]: audit 2026-03-10T10:09:54.610734+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:54 vm04 bash[28289]: audit 2026-03-10T10:09:54.615587+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:55.762 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.752+0000 7f6d277fe640 1 -- 192.168.123.104:0/2620094738 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f6d0c002bf0 con 0x7f6d1c077570
2026-03-10T10:09:55.762 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 0 on host 'vm04'
2026-03-10T10:09:55.764 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 -- 192.168.123.104:0/2620094738 >> v2:192.168.123.104:6800/632047608 conn(0x7f6d1c077570 msgr2=0x7f6d1c079a30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:09:55.764 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 --2- 192.168.123.104:0/2620094738 >> v2:192.168.123.104:6800/632047608 conn(0x7f6d1c077570 0x7f6d1c079a30 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f6d4019da40 tx=0x7f6d28005e90 comp rx=0 tx=0).stop
2026-03-10T10:09:55.764 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 -- 192.168.123.104:0/2620094738 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d40075a40 msgr2=0x7f6d4019c520 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:09:55.764 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d40075a40 0x7f6d4019c520 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f6d34009a00 tx=0x7f6d3402fd80 comp rx=0 tx=0).stop
2026-03-10T10:09:55.764 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 -- 192.168.123.104:0/2620094738 shutdown_connections
2026-03-10T10:09:55.764 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 --2- 192.168.123.104:0/2620094738 >> v2:192.168.123.104:6800/632047608 conn(0x7f6d1c077570 0x7f6d1c079a30 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:55.764 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6d4010a9d0 0x7f6d401a3ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:55.764 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6d400770a0 0x7f6d4019ca60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:55.764 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 --2- 192.168.123.104:0/2620094738 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6d40075a40 0x7f6d4019c520 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:09:55.764 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 -- 192.168.123.104:0/2620094738 >> 192.168.123.104:0/2620094738 conn(0x7f6d400fe290 msgr2=0x7f6d40100680 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:09:55.765 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 -- 192.168.123.104:0/2620094738 shutdown_connections
2026-03-10T10:09:55.765 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:09:55.756+0000 7f6d44cd3640 1 -- 192.168.123.104:0/2620094738 wait complete.
2026-03-10T10:09:55.853 DEBUG:teuthology.orchestra.run.vm04:osd.0> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.0.service
2026-03-10T10:09:55.854 INFO:tasks.cephadm:Deploying osd.1 on vm04 with /dev/vdd...
2026-03-10T10:09:55.854 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- lvm zap /dev/vdd
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: cluster 2026-03-10T10:09:53.659000+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: cluster 2026-03-10T10:09:53.659045+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: audit 2026-03-10T10:09:54.745717+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: cluster 2026-03-10T10:09:54.750212+0000 mon.a (mon.0) 289 : cluster [INF] osd.0 v2:192.168.123.104:6801/3431285778 boot
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: cluster 2026-03-10T10:09:54.750241+0000 mon.a (mon.0) 290 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: audit 2026-03-10T10:09:54.751233+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: audit 2026-03-10T10:09:55.013173+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: audit 2026-03-10T10:09:55.013697+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: audit 2026-03-10T10:09:55.023096+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: audit 2026-03-10T10:09:55.744483+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: audit 2026-03-10T10:09:55.750254+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:55 vm04 bash[28289]: audit 2026-03-10T10:09:55.755932+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: cluster 2026-03-10T10:09:53.659000+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: cluster 2026-03-10T10:09:53.659045+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:09:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:54.745717+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: cluster 2026-03-10T10:09:54.750212+0000 mon.a (mon.0) 289 : cluster [INF] osd.0 v2:192.168.123.104:6801/3431285778 boot
2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: cluster 2026-03-10T10:09:54.750241+0000 mon.a (mon.0) 290 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:54.751233+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.013173+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
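The per-OSD deploy pattern visible here, wipe the device with 'ceph-volume lvm zap' (the command above, for /dev/vdd), then hand it to the orchestrator with 'ceph orch daemon add osd' (the command appears just below), can be replayed by hand. A minimal sketch, assuming a node with the same cephadm binary, container image, and fsid as this run (all three values copied from the log; deploy_osd is a hypothetical helper for illustration, not part of teuthology):

    import subprocess

    # Values copied verbatim from this run's log; adjust for your own cluster.
    CEPHADM = "/home/ubuntu/cephtest/cephadm"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "e4c1c9d6-1c68-11f1-a9bd-116050875839"
    CONF = "/etc/ceph/ceph.conf"
    KEYRING = "/etc/ceph/ceph.client.admin.keyring"

    def deploy_osd(host: str, device: str) -> None:
        # Hypothetical helper mirroring the two commands the cephadm task logs.
        base = ["sudo", CEPHADM, "--image", IMAGE]
        # Step 1: clear any previous LVM/partition state on the device.
        subprocess.run(base + ["ceph-volume", "-c", CONF, "-k", KEYRING,
                               "--fsid", FSID, "--", "lvm", "zap", device],
                       check=True)
        # Step 2: ask the orchestrator to create and start an OSD on it.
        subprocess.run(base + ["shell", "-c", CONF, "-k", KEYRING,
                               "--fsid", FSID, "--",
                               "ceph", "orch", "daemon", "add", "osd",
                               f"{host}:{device}"],
                       check=True)

    deploy_osd("vm04", "/dev/vdd")
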
2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.013173+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.013697+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.013697+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.023096+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.023096+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.744483+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.744483+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.750254+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.750254+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.755932+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:55 vm04 bash[20742]: audit 2026-03-10T10:09:55.755932+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: cluster 2026-03-10T10:09:53.659000+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: cluster 2026-03-10T10:09:53.659000+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: cluster 2026-03-10T10:09:53.659045+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: cluster 2026-03-10T10:09:53.659045+0000 osd.0 (osd.0) 2 : 
cluster [DBG] purged_snaps scrub ok
2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: audit 2026-03-10T10:09:54.745717+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: cluster 2026-03-10T10:09:54.750212+0000 mon.a (mon.0) 289 : cluster [INF] osd.0 v2:192.168.123.104:6801/3431285778 boot
2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: cluster 2026-03-10T10:09:54.750241+0000 mon.a (mon.0) 290 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: audit 2026-03-10T10:09:54.751233+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: audit 2026-03-10T10:09:55.013173+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: audit 2026-03-10T10:09:55.013697+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: audit 2026-03-10T10:09:55.023096+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: audit 2026-03-10T10:09:55.744483+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: audit 2026-03-10T10:09:55.750254+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:56.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:55 vm07 bash[23367]: audit 2026-03-10T10:09:55.755932+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:09:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:56 vm04 bash[28289]: cluster 2026-03-10T10:09:55.611756+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:56 vm04 bash[28289]: cluster 2026-03-10T10:09:56.054885+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-10T10:09:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:56 vm04 bash[20742]: cluster 2026-03-10T10:09:55.611756+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:56 vm04 bash[20742]: cluster 2026-03-10T10:09:56.054885+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-10T10:09:57.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:56 vm07 bash[23367]: cluster 2026-03-10T10:09:55.611756+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T10:09:57.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:56 vm07 bash[23367]: cluster 2026-03-10T10:09:56.054885+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-10T10:09:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:09:57 vm04 bash[28289]: cluster 2026-03-10T10:09:57.612003+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:09:58.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:09:57 vm04 bash[20742]: cluster 2026-03-10T10:09:57.612003+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:09:58.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:09:57 vm07 bash[23367]: cluster 2026-03-10T10:09:57.612003+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:00.534 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:10:00.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:00 vm04 bash[28289]: cluster 2026-03-10T10:09:59.612247+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:00.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:00 vm04 bash[28289]: cluster 2026-03-10T10:10:00.000124+0000 mon.a (mon.0) 299 : cluster [INF] overall HEALTH_OK
2026-03-10T10:10:00.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:00 vm04 bash[20742]: cluster 2026-03-10T10:09:59.612247+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:00.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:00 vm04 bash[20742]: cluster 2026-03-10T10:10:00.000124+0000 mon.a (mon.0) 299 : cluster [INF] overall HEALTH_OK
2026-03-10T10:10:01.012 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:00 vm07 bash[23367]: cluster 2026-03-10T10:09:59.612247+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:01.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:00 vm07 bash[23367]: cluster 2026-03-10T10:10:00.000124+0000 mon.a (mon.0) 299 : cluster [INF] overall HEALTH_OK
2026-03-10T10:10:01.447 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:10:01.460 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch daemon add osd vm04:/dev/vdd
2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:02 vm04 bash[28289]: cluster 2026-03-10T10:10:01.612477+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:02 vm04 bash[28289]: audit 2026-03-10T10:10:02.187998+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:02 vm04 bash[28289]: audit 2026-03-10T10:10:02.192978+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:02 vm04 bash[28289]: audit 2026-03-10T10:10:02.194049+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
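The DEBUG line above shows the harness driving cephadm's containerized shell to add an OSD on a specific device. Stripped of the test-rig image pin, config, and keyring paths, an equivalent manual invocation would be roughly the following (host:device pair taken from this run; `cephadm shell` is only needed when no ceph CLI is installed on the host):

    # equivalent of the call teuthology issues above
    sudo cephadm shell -- ceph orch daemon add osd vm04:/dev/vdd
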
cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:02 vm04 bash[28289]: audit 2026-03-10T10:10:02.194698+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:02 vm04 bash[28289]: audit 2026-03-10T10:10:02.194698+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:02 vm04 bash[28289]: audit 2026-03-10T10:10:02.195219+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:02 vm04 bash[28289]: audit 2026-03-10T10:10:02.195219+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:02 vm04 bash[28289]: audit 2026-03-10T10:10:02.199540+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:02 vm04 bash[28289]: audit 2026-03-10T10:10:02.199540+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: cluster 2026-03-10T10:10:01.612477+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: cluster 2026-03-10T10:10:01.612477+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.187998+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.187998+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.192978+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.192978+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.194049+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.194049+0000 
mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.194698+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.194698+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.195219+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.195219+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.199540+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:02 vm04 bash[20742]: audit 2026-03-10T10:10:02.199540+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: cluster 2026-03-10T10:10:01.612477+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: cluster 2026-03-10T10:10:01.612477+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.187998+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.187998+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.192978+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.192978+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.194049+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:10:03.013 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.194049+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.194698+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.194698+0000 mon.a (mon.0) 303 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.195219+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.195219+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.199540+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:03.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:02 vm07 bash[23367]: audit 2026-03-10T10:10:02.199540+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:03 vm04 bash[28289]: cephadm 2026-03-10T10:10:02.182462+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm04 2026-03-10T10:10:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:03 vm04 bash[28289]: cephadm 2026-03-10T10:10:02.182462+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm04 2026-03-10T10:10:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:03 vm04 bash[20742]: cephadm 2026-03-10T10:10:02.182462+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm04 2026-03-10T10:10:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:03 vm04 bash[20742]: cephadm 2026-03-10T10:10:02.182462+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm04 2026-03-10T10:10:04.012 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:03 vm07 bash[23367]: cephadm 2026-03-10T10:10:02.182462+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm04 2026-03-10T10:10:04.012 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:03 vm07 bash[23367]: cephadm 2026-03-10T10:10:02.182462+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm04 2026-03-10T10:10:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:04 vm04 bash[28289]: cluster 2026-03-10T10:10:03.612714+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:04 vm04 bash[28289]: cluster 
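The "Detected new or changed devices" message is the cephadm mgr module refreshing its per-host device inventory after the add. The same inventory can be inspected from the CLI (a standard orchestrator command, not something this test runs):

    # list the devices cephadm knows about; --refresh forces a rescan instead of using the cache
    ceph orch device ls --refresh
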
2026-03-10T10:10:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:04 vm04 bash[28289]: cluster 2026-03-10T10:10:03.612714+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:04 vm04 bash[20742]: cluster 2026-03-10T10:10:03.612714+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:05.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:04 vm07 bash[23367]: cluster 2026-03-10T10:10:03.612714+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:06.112 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:10:06.258 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.252+0000 7f10047c1640 1 -- 192.168.123.104:0/574759223 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ffc079690 msgr2=0x7f0ffc079f30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:10:06.258 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.252+0000 7f10047c1640 1 --2- 192.168.123.104:0/574759223 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ffc079690 0x7f0ffc079f30 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f0ff800b0a0 tx=0x7f0ff802f410 comp rx=0 tx=0).stop
2026-03-10T10:10:06.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.252+0000 7f10047c1640 1 -- 192.168.123.104:0/574759223 shutdown_connections
2026-03-10T10:10:06.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.252+0000 7f10047c1640 1 --2- 192.168.123.104:0/574759223 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ffc079690 0x7f0ffc079f30 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:06.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.252+0000 7f10047c1640 1 --2- 192.168.123.104:0/574759223 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ffc078cf0 0x7f0ffc079150 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:06.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.252+0000 7f10047c1640 1 --2- 192.168.123.104:0/574759223 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ffc077aa0 0x7f0ffc077ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:06.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.252+0000 7f10047c1640 1 -- 192.168.123.104:0/574759223 >> 192.168.123.104:0/574759223 conn(0x7f0ffc1004d0 msgr2=0x7f0ffc102930 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:10:06.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.252+0000 7f10047c1640 1 -- 192.168.123.104:0/574759223 shutdown_connections
2026-03-10T10:10:06.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.252+0000 7f10047c1640 1 -- 192.168.123.104:0/574759223 wait complete.
2026-03-10T10:10:06.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 Processor -- start
2026-03-10T10:10:06.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 -- start start
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ffc077aa0 0x7f0ffc1a09d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ffc078cf0 0x7f0ffc1a0f10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1002536640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ffc077aa0 0x7f0ffc1a09d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1002536640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ffc077aa0 0x7f0ffc1a09d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:36450/0 (socket says 192.168.123.104:36450)
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ffc079690 0x7f0ffc1a7f90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f0ffc1143e0 con 0x7f0ffc077aa0
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f0ffc114260 con 0x7f0ffc079690
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f0ffc114560 con 0x7f0ffc078cf0
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1001d35640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ffc078cf0 0x7f0ffc1a0f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1001d35640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ffc078cf0 0x7f0ffc1a0f10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:41652/0 (socket says 192.168.123.104:41652)
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1001d35640 1 -- 192.168.123.104:0/3478858607 learned_addr learned my addr 192.168.123.104:0/3478858607 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:10:06.260 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1002d37640 1 --2- 192.168.123.104:0/3478858607 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ffc079690 0x7f0ffc1a7f90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:10:06.261 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1001d35640 1 -- 192.168.123.104:0/3478858607 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ffc079690 msgr2=0x7f0ffc1a7f90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:10:06.261 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1001d35640 1 --2- 192.168.123.104:0/3478858607 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ffc079690 0x7f0ffc1a7f90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:06.261 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1001d35640 1 -- 192.168.123.104:0/3478858607 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ffc077aa0 msgr2=0x7f0ffc1a09d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:10:06.261 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1001d35640 1 --2- 192.168.123.104:0/3478858607 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ffc077aa0 0x7f0ffc1a09d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:06.261 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1001d35640 1 -- 192.168.123.104:0/3478858607 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0ffc1a8600 con 0x7f0ffc078cf0
2026-03-10T10:10:06.261 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1002536640 1 --2- 192.168.123.104:0/3478858607 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ffc077aa0 0x7f0ffc1a09d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T10:10:06.261 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f1001d35640 1 --2- 192.168.123.104:0/3478858607 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ffc078cf0 0x7f0ffc1a0f10 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f0fec00d950 tx=0x7f0fec00de20 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:10:06.261 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f0ff37fe640 1 -- 192.168.123.104:0/3478858607 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0fec014070 con 0x7f0ffc078cf0
2026-03-10T10:10:06.261 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 -- 192.168.123.104:0/3478858607 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0ffc1a88f0 con 0x7f0ffc078cf0
2026-03-10T10:10:06.263 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 -- 192.168.123.104:0/3478858607 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f0ffc1a8e30 con 0x7f0ffc078cf0
2026-03-10T10:10:06.263 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f0ff37fe640 1 -- 192.168.123.104:0/3478858607 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0fec0044e0 con 0x7f0ffc078cf0
2026-03-10T10:10:06.263 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f0ff37fe640 1 -- 192.168.123.104:0/3478858607 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0fec005020 con 0x7f0ffc078cf0
2026-03-10T10:10:06.266 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.256+0000 7f10047c1640 1 -- 192.168.123.104:0/3478858607 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0ffc079150 con 0x7f0ffc078cf0
2026-03-10T10:10:06.266 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.260+0000 7f0ff37fe640 1 -- 192.168.123.104:0/3478858607 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 14) ==== 99944+0+0 (secure 0 0 0) 0x7f0fec00b900 con 0x7f0ffc078cf0
2026-03-10T10:10:06.266 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.260+0000 7f0ff37fe640 1 --2- 192.168.123.104:0/3478858607 >> v2:192.168.123.104:6800/632047608 conn(0x7f0fd00775c0 0x7f0fd0079a80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:10:06.266 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.260+0000 7f1002536640 1 --2- 192.168.123.104:0/3478858607 >> v2:192.168.123.104:6800/632047608 conn(0x7f0fd00775c0 0x7f0fd0079a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:10:06.266 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.260+0000 7f0ff37fe640 1 -- 192.168.123.104:0/3478858607 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(9..9 src has 1..9) ==== 1531+0+0 (secure 0 0 0) 0x7f0fec09c070 con 0x7f0ffc078cf0
2026-03-10T10:10:06.267 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.260+0000 7f0ff37fe640 1 -- 192.168.123.104:0/3478858607 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0fec0658d0 con 0x7f0ffc078cf0
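The messenger restart above reconnects to the mons over msgr2 only (ports 3300/3301), consistent with a v2-only configuration. As a sketch, the pair of standard Ceph options that pins a cluster to msgr2 can be applied like this (shown with `ceph config set` for illustration; not a command this test issues):

    # bind only the v2 protocol; v1 (port 6789) stays disabled
    ceph config set global ms_bind_msgr1 false
    ceph config set global ms_bind_msgr2 true
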
2026-03-10T10:10:06.271 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.264+0000 7f1002536640 1 --2- 192.168.123.104:0/3478858607 >> v2:192.168.123.104:6800/632047608 conn(0x7f0fd00775c0 0x7f0fd0079a80 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f0fe4004160 tx=0x7f0fe400a400 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:10:06.367 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:06.360+0000 7f10047c1640 1 -- 192.168.123.104:0/3478858607 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7f0ffc0630c0 con 0x7f0fd00775c0
2026-03-10T10:10:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:06 vm04 bash[28289]: cluster 2026-03-10T10:10:05.612898+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:06 vm04 bash[28289]: audit 2026-03-10T10:10:06.369394+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:10:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:06 vm04 bash[28289]: audit 2026-03-10T10:10:06.370729+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:10:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:06 vm04 bash[28289]: audit 2026-03-10T10:10:06.371178+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:06 vm04 bash[20742]: cluster 2026-03-10T10:10:05.612898+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:06 vm04 bash[20742]: audit 2026-03-10T10:10:06.369394+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:10:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:06 vm04 bash[20742]: audit 2026-03-10T10:10:06.370729+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:10:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:06 vm04 bash[20742]: audit 2026-03-10T10:10:06.371178+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:07.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:06 vm07 bash[23367]: cluster 2026-03-10T10:10:05.612898+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:07.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:06 vm07 bash[23367]: audit 2026-03-10T10:10:06.369394+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:10:07.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:06 vm07 bash[23367]: audit 2026-03-10T10:10:06.370729+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:10:07.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:06 vm07 bash[23367]: audit 2026-03-10T10:10:06.371178+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:08.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:07 vm07 bash[23367]: audit 2026-03-10T10:10:06.368062+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.24125 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:10:08.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:07 vm04 bash[28289]: audit 2026-03-10T10:10:06.368062+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.24125 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:10:08.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:07 vm04 bash[20742]: audit 2026-03-10T10:10:06.368062+0000 mgr.y (mgr.14150) 84 : audit [DBG] from='client.24125 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:10:09.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:08 vm07 bash[23367]: cluster 2026-03-10T10:10:07.613090+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:09.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:08 vm04 bash[20742]: cluster 2026-03-10T10:10:07.613090+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:09.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:08 vm04 bash[28289]: cluster 2026-03-10T10:10:07.613090+0000 mgr.y (mgr.14150) 85 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:11.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:10 vm07 bash[23367]: cluster 2026-03-10T10:10:09.613293+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:11.144 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:10 vm04 bash[20742]: cluster 2026-03-10T10:10:09.613293+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:11.144 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:10 vm04 bash[28289]: cluster 2026-03-10T10:10:09.613293+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:12.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:11 vm07 bash[23367]: audit 2026-03-10T10:10:11.709260+0000 mon.c (mon.2) 5 : audit [INF] from='client.? 192.168.123.104:0/4238902352' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58ba2152-7e52-4560-a001-e96617e30de1"}]: dispatch
2026-03-10T10:10:12.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:11 vm07 bash[23367]: audit 2026-03-10T10:10:11.709629+0000 mon.a (mon.0) 309 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58ba2152-7e52-4560-a001-e96617e30de1"}]: dispatch
2026-03-10T10:10:12.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:11 vm07 bash[23367]: audit 2026-03-10T10:10:11.712545+0000 mon.a (mon.0) 310 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "58ba2152-7e52-4560-a001-e96617e30de1"}]': finished
2026-03-10T10:10:12.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:11 vm07 bash[23367]: cluster 2026-03-10T10:10:11.715754+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-10T10:10:12.142 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:11 vm04 bash[20742]: audit 2026-03-10T10:10:11.709260+0000 mon.c (mon.2) 5 : audit [INF] from='client.? 192.168.123.104:0/4238902352' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58ba2152-7e52-4560-a001-e96617e30de1"}]: dispatch
2026-03-10T10:10:12.142 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:11 vm04 bash[20742]: audit 2026-03-10T10:10:11.709629+0000 mon.a (mon.0) 309 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58ba2152-7e52-4560-a001-e96617e30de1"}]: dispatch
2026-03-10T10:10:12.142 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:11 vm04 bash[20742]: audit 2026-03-10T10:10:11.712545+0000 mon.a (mon.0) 310 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "58ba2152-7e52-4560-a001-e96617e30de1"}]': finished
2026-03-10T10:10:12.143 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:11 vm04 bash[20742]: cluster 2026-03-10T10:10:11.715754+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-10T10:10:12.143 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:11 vm04 bash[28289]: audit 2026-03-10T10:10:11.709260+0000 mon.c (mon.2) 5 : audit [INF] from='client.? 192.168.123.104:0/4238902352' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58ba2152-7e52-4560-a001-e96617e30de1"}]: dispatch
2026-03-10T10:10:12.143 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:11 vm04 bash[28289]: audit 2026-03-10T10:10:11.709629+0000 mon.a (mon.0) 309 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58ba2152-7e52-4560-a001-e96617e30de1"}]: dispatch
2026-03-10T10:10:12.143 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:11 vm04 bash[28289]: audit 2026-03-10T10:10:11.712545+0000 mon.a (mon.0) 310 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "58ba2152-7e52-4560-a001-e96617e30de1"}]': finished
2026-03-10T10:10:12.143 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:11 vm04 bash[28289]: cluster 2026-03-10T10:10:11.715754+0000 mon.a (mon.0) 311 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-10T10:10:13.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:12 vm07 bash[23367]: cluster 2026-03-10T10:10:11.613525+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:13.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:12 vm07 bash[23367]: audit 2026-03-10T10:10:11.716015+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:10:13.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:12 vm07 bash[23367]: audit 2026-03-10T10:10:12.318293+0000 mon.c (mon.2) 6 : audit [DBG] from='client.? 192.168.123.104:0/1362149574' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:10:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:12 vm04 bash[20742]: cluster 2026-03-10T10:10:11.613525+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:12 vm04 bash[20742]: audit 2026-03-10T10:10:11.716015+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:10:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:12 vm04 bash[20742]: audit 2026-03-10T10:10:12.318293+0000 mon.c (mon.2) 6 : audit [DBG] from='client.? 192.168.123.104:0/1362149574' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:10:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:12 vm04 bash[28289]: cluster 2026-03-10T10:10:11.613525+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:12 vm04 bash[28289]: audit 2026-03-10T10:10:11.716015+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:10:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:12 vm04 bash[28289]: audit 2026-03-10T10:10:12.318293+0000 mon.c (mon.2) 6 : audit [DBG] from='client.? 192.168.123.104:0/1362149574' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
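The client.bootstrap-osd entries above are the OSD-creation tooling preparing the new disk: it allocates an OSD id bound to the OSD's fsid via `osd new`, then fetches the monmap. In CLI terms, the allocation step corresponds to something like the following (uuid taken from this run; the bootstrap keyring path is the conventional location and may differ):

    # allocate an OSD id for the given OSD uuid using the bootstrap-osd identity
    ceph --name client.bootstrap-osd \
         --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
         osd new 58ba2152-7e52-4560-a001-e96617e30de1
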
2026-03-10T10:10:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:14 vm07 bash[23367]: cluster 2026-03-10T10:10:13.613751+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:14 vm04 bash[28289]: cluster 2026-03-10T10:10:13.613751+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:14 vm04 bash[20742]: cluster 2026-03-10T10:10:13.613751+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:17.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:16 vm07 bash[23367]: cluster 2026-03-10T10:10:15.613987+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:16 vm04 bash[28289]: cluster 2026-03-10T10:10:15.613987+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:16 vm04 bash[20742]: cluster 2026-03-10T10:10:15.613987+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:19.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:18 vm07 bash[23367]: cluster 2026-03-10T10:10:17.614217+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:19.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:18 vm04 bash[28289]: cluster 2026-03-10T10:10:17.614217+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:18 vm04 bash[20742]: cluster 2026-03-10T10:10:17.614217+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:20.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:20 vm04 bash[28289]: cluster 2026-03-10T10:10:19.614430+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:20.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:20 vm04 bash[20742]: cluster 2026-03-10T10:10:19.614430+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:21.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:20 vm07 bash[23367]: cluster 2026-03-10T10:10:19.614430+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:21.782 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:21 vm04 bash[20742]: audit 2026-03-10T10:10:20.972883+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T10:10:21.782 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:21 vm04 bash[20742]: audit 2026-03-10T10:10:20.973430+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:20.973916+0000 mgr.y (mgr.14150) 92 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-10T10:10:21.782 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:21 vm04 bash[20742]: cephadm 2026-03-10T10:10:20.973916+0000 mgr.y (mgr.14150) 92 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-10T10:10:21.782 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:21 vm04 bash[20742]: cluster 2026-03-10T10:10:21.614727+0000 mgr.y (mgr.14150) 93 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:21.782 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:21 vm04 bash[20742]: cluster 2026-03-10T10:10:21.614727+0000 mgr.y (mgr.14150) 93 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:22.033 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:21 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:10:22.033 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:10:21 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:10:22.034 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:21 vm04 bash[28289]: audit 2026-03-10T10:10:20.972883+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T10:10:22.034 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:21 vm04 bash[28289]: audit 2026-03-10T10:10:20.972883+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T10:10:22.034 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:21 vm04 bash[28289]: audit 2026-03-10T10:10:20.973430+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:22.034 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:21 vm04 bash[28289]: audit 2026-03-10T10:10:20.973430+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:22.034 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:21 vm04 bash[28289]: cephadm 2026-03-10T10:10:20.973916+0000 mgr.y (mgr.14150) 92 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-10T10:10:22.034 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:21 vm04 bash[28289]: cephadm 2026-03-10T10:10:20.973916+0000 mgr.y (mgr.14150) 92 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-10T10:10:22.034 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:21 vm04 bash[28289]: cluster 2026-03-10T10:10:21.614727+0000 mgr.y (mgr.14150) 93 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:22.034 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:21 vm04 bash[28289]: cluster 2026-03-10T10:10:21.614727+0000 mgr.y (mgr.14150) 93 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:22.034 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:21 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:10:22.034 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:10:21 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:10:22.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:21 vm07 bash[23367]: audit 2026-03-10T10:10:20.972883+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T10:10:22.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:21 vm07 bash[23367]: audit 2026-03-10T10:10:20.972883+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T10:10:22.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:21 vm07 bash[23367]: audit 2026-03-10T10:10:20.973430+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:22.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:21 vm07 bash[23367]: audit 2026-03-10T10:10:20.973430+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:22.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:21 vm07 bash[23367]: cephadm 2026-03-10T10:10:20.973916+0000 mgr.y (mgr.14150) 92 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-10T10:10:22.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:21 vm07 bash[23367]: cephadm 2026-03-10T10:10:20.973916+0000 mgr.y (mgr.14150) 92 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-10T10:10:22.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:21 vm07 bash[23367]: cluster 2026-03-10T10:10:21.614727+0000 mgr.y (mgr.14150) 93 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:22.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:21 vm07 bash[23367]: cluster 2026-03-10T10:10:21.614727+0000 mgr.y (mgr.14150) 93 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:22.393 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:22 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:10:22.393 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:10:22 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:10:22.393 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:22 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:10:22.393 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:10:22 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:10:23.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:22 vm04 bash[28289]: audit 2026-03-10T10:10:22.147774+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:10:23.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:22 vm04 bash[28289]: audit 2026-03-10T10:10:22.147774+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:10:23.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:22 vm04 bash[28289]: audit 2026-03-10T10:10:22.156400+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:22 vm04 bash[28289]: audit 2026-03-10T10:10:22.156400+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:22 vm04 bash[28289]: audit 2026-03-10T10:10:22.165680+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:22 vm04 bash[28289]: audit 2026-03-10T10:10:22.165680+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:22 vm04 bash[20742]: audit 2026-03-10T10:10:22.147774+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:10:23.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:22 vm04 bash[20742]: audit 2026-03-10T10:10:22.147774+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:10:23.204 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:22 vm04 bash[20742]: audit 2026-03-10T10:10:22.156400+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:22 vm04 bash[20742]: audit 2026-03-10T10:10:22.156400+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:22 vm04 bash[20742]: audit 2026-03-10T10:10:22.165680+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:22 vm04 bash[20742]: audit 2026-03-10T10:10:22.165680+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:22 vm07 bash[23367]: audit 2026-03-10T10:10:22.147774+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:10:23.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:22 vm07 bash[23367]: audit 2026-03-10T10:10:22.147774+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:10:23.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:22 vm07 bash[23367]: audit 2026-03-10T10:10:22.156400+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:22 vm07 bash[23367]: audit 2026-03-10T10:10:22.156400+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:22 vm07 bash[23367]: audit 2026-03-10T10:10:22.165680+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:22 vm07 bash[23367]: audit 2026-03-10T10:10:22.165680+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:23 vm04 bash[28289]: cluster 2026-03-10T10:10:23.614985+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:23 vm04 bash[28289]: cluster 2026-03-10T10:10:23.614985+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:23 vm04 bash[20742]: cluster 2026-03-10T10:10:23.614985+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:23 vm04 bash[20742]: cluster 2026-03-10T10:10:23.614985+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:24.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:23 vm07 bash[23367]: cluster 2026-03-10T10:10:23.614985+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 
GiB / 20 GiB avail 2026-03-10T10:10:24.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:23 vm07 bash[23367]: cluster 2026-03-10T10:10:23.614985+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:26 vm04 bash[28289]: cluster 2026-03-10T10:10:25.615231+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:26 vm04 bash[28289]: cluster 2026-03-10T10:10:25.615231+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:26 vm04 bash[28289]: audit 2026-03-10T10:10:25.940688+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:26 vm04 bash[28289]: audit 2026-03-10T10:10:25.940688+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:26 vm04 bash[28289]: audit 2026-03-10T10:10:25.940956+0000 mon.a (mon.0) 318 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:26 vm04 bash[28289]: audit 2026-03-10T10:10:25.940956+0000 mon.a (mon.0) 318 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:26 vm04 bash[20742]: cluster 2026-03-10T10:10:25.615231+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:26 vm04 bash[20742]: cluster 2026-03-10T10:10:25.615231+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:26 vm04 bash[20742]: audit 2026-03-10T10:10:25.940688+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:26 vm04 bash[20742]: audit 2026-03-10T10:10:25.940688+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:26 vm04 bash[20742]: audit 2026-03-10T10:10:25.940956+0000 mon.a (mon.0) 318 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:26 vm04 bash[20742]: audit 2026-03-10T10:10:25.940956+0000 mon.a (mon.0) 318 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:27.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:26 vm07 bash[23367]: cluster 2026-03-10T10:10:25.615231+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:27.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:26 vm07 bash[23367]: cluster 2026-03-10T10:10:25.615231+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:27.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:26 vm07 bash[23367]: audit 2026-03-10T10:10:25.940688+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:27.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:26 vm07 bash[23367]: audit 2026-03-10T10:10:25.940688+0000 mon.c (mon.2) 7 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:27.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:26 vm07 bash[23367]: audit 2026-03-10T10:10:25.940956+0000 mon.a (mon.0) 318 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:27.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:26 vm07 bash[23367]: audit 2026-03-10T10:10:25.940956+0000 mon.a (mon.0) 318 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:27 vm04 bash[28289]: audit 2026-03-10T10:10:26.671030+0000 mon.a (mon.0) 319 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:27 vm04 bash[28289]: audit 2026-03-10T10:10:26.671030+0000 mon.a (mon.0) 319 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:27 vm04 bash[28289]: cluster 2026-03-10T10:10:26.673697+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:27 vm04 bash[28289]: cluster 2026-03-10T10:10:26.673697+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:27 vm04 bash[28289]: audit 2026-03-10T10:10:26.674634+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:27 vm04 bash[28289]: audit 2026-03-10T10:10:26.674634+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:27 vm04 bash[28289]: audit 2026-03-10T10:10:26.675859+0000 mon.c (mon.2) 8 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, 
"weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:27 vm04 bash[28289]: audit 2026-03-10T10:10:26.675859+0000 mon.c (mon.2) 8 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:27 vm04 bash[28289]: audit 2026-03-10T10:10:26.681508+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:27 vm04 bash[28289]: audit 2026-03-10T10:10:26.681508+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:27 vm04 bash[20742]: audit 2026-03-10T10:10:26.671030+0000 mon.a (mon.0) 319 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:27 vm04 bash[20742]: audit 2026-03-10T10:10:26.671030+0000 mon.a (mon.0) 319 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:27 vm04 bash[20742]: cluster 2026-03-10T10:10:26.673697+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:27 vm04 bash[20742]: cluster 2026-03-10T10:10:26.673697+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:27 vm04 bash[20742]: audit 2026-03-10T10:10:26.674634+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:27 vm04 bash[20742]: audit 2026-03-10T10:10:26.674634+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:27 vm04 bash[20742]: audit 2026-03-10T10:10:26.675859+0000 mon.c (mon.2) 8 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:27 vm04 bash[20742]: audit 2026-03-10T10:10:26.675859+0000 mon.c (mon.2) 8 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:27 vm04 bash[20742]: audit 2026-03-10T10:10:26.681508+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": 
["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:27 vm04 bash[20742]: audit 2026-03-10T10:10:26.681508+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:28.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:27 vm07 bash[23367]: audit 2026-03-10T10:10:26.671030+0000 mon.a (mon.0) 319 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T10:10:28.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:27 vm07 bash[23367]: audit 2026-03-10T10:10:26.671030+0000 mon.a (mon.0) 319 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T10:10:28.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:27 vm07 bash[23367]: cluster 2026-03-10T10:10:26.673697+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T10:10:28.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:27 vm07 bash[23367]: cluster 2026-03-10T10:10:26.673697+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T10:10:28.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:27 vm07 bash[23367]: audit 2026-03-10T10:10:26.674634+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:27 vm07 bash[23367]: audit 2026-03-10T10:10:26.674634+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:27 vm07 bash[23367]: audit 2026-03-10T10:10:26.675859+0000 mon.c (mon.2) 8 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:28.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:27 vm07 bash[23367]: audit 2026-03-10T10:10:26.675859+0000 mon.c (mon.2) 8 : audit [INF] from='osd.1 v2:192.168.123.104:6805/2746381987' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:28.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:27 vm07 bash[23367]: audit 2026-03-10T10:10:26.681508+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:28.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:27 vm07 bash[23367]: audit 2026-03-10T10:10:26.681508+0000 mon.a (mon.0) 322 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: cluster 2026-03-10T10:10:27.615490+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 
bash[28289]: cluster 2026-03-10T10:10:27.615490+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:27.679053+0000 mon.a (mon.0) 323 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:27.679053+0000 mon.a (mon.0) 323 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: cluster 2026-03-10T10:10:27.683029+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: cluster 2026-03-10T10:10:27.683029+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:27.683814+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:27.683814+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:27.686525+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:27.686525+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.287673+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.287673+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.293769+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.293769+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.294554+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:28.954 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.294554+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.295067+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.295067+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.299067+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.299067+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.685857+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:28 vm04 bash[28289]: audit 2026-03-10T10:10:28.685857+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: cluster 2026-03-10T10:10:27.615490+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: cluster 2026-03-10T10:10:27.615490+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:27.679053+0000 mon.a (mon.0) 323 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:27.679053+0000 mon.a (mon.0) 323 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: cluster 2026-03-10T10:10:27.683029+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: cluster 2026-03-10T10:10:27.683029+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:27.683814+0000 mon.a (mon.0) 325 : audit [DBG] 
from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:27.683814+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:27.686525+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:27.686525+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.287673+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.287673+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.293769+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.293769+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.294554+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.294554+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.295067+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.295067+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.299067+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.299067+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.685857+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:28.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:28 vm04 bash[20742]: audit 2026-03-10T10:10:28.685857+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: cluster 2026-03-10T10:10:27.615490+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: cluster 2026-03-10T10:10:27.615490+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:27.679053+0000 mon.a (mon.0) 323 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:27.679053+0000 mon.a (mon.0) 323 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: cluster 2026-03-10T10:10:27.683029+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: cluster 2026-03-10T10:10:27.683029+0000 mon.a (mon.0) 324 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:27.683814+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:27.683814+0000 mon.a (mon.0) 325 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:27.686525+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:27.686525+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:28.287673+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:28.287673+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 
192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:28.293769+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:28.294554+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:28.295067+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:28.299067+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:29.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:28 vm07 bash[23367]: audit 2026-03-10T10:10:28.685857+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:10:29.144 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.136+0000 7f0ff37fe640 1 -- 192.168.123.104:0/3478858607 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f0ffc0630c0 con 0x7f0fd00775c0
2026-03-10T10:10:29.144 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 1 on host 'vm04'
2026-03-10T10:10:29.144 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 -- 192.168.123.104:0/3478858607 >> v2:192.168.123.104:6800/632047608 conn(0x7f0fd00775c0 msgr2=0x7f0fd0079a80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 --2- 192.168.123.104:0/3478858607 >> v2:192.168.123.104:6800/632047608 conn(0x7f0fd00775c0 0x7f0fd0079a80 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f0fe4004160 tx=0x7f0fe400a400 comp rx=0 tx=0).stop
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 -- 192.168.123.104:0/3478858607 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ffc078cf0 msgr2=0x7f0ffc1a0f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 --2- 192.168.123.104:0/3478858607 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ffc078cf0 0x7f0ffc1a0f10 secure :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0x7f0fec00d950 tx=0x7f0fec00de20 comp rx=0 tx=0).stop
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 -- 192.168.123.104:0/3478858607 shutdown_connections
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 --2- 192.168.123.104:0/3478858607 >> v2:192.168.123.104:6800/632047608 conn(0x7f0fd00775c0 0x7f0fd0079a80 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 --2- 192.168.123.104:0/3478858607 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ffc079690 0x7f0ffc1a7f90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 --2- 192.168.123.104:0/3478858607 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ffc078cf0 0x7f0ffc1a0f10 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 --2- 192.168.123.104:0/3478858607 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ffc077aa0 0x7f0ffc1a09d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 -- 192.168.123.104:0/3478858607 >> 192.168.123.104:0/3478858607 conn(0x7f0ffc1004d0 msgr2=0x7f0ffc101f60 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 -- 192.168.123.104:0/3478858607 shutdown_connections
2026-03-10T10:10:29.145 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:29.140+0000 7f10047c1640 1 -- 192.168.123.104:0/3478858607 wait complete.
2026-03-10T10:10:29.214 DEBUG:teuthology.orchestra.run.vm04:osd.1> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.1.service
2026-03-10T10:10:29.215 INFO:tasks.cephadm:Deploying osd.2 on vm04 with /dev/vdc...
2026-03-10T10:10:29.215 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- lvm zap /dev/vdc
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:29 vm04 bash[28289]: cluster 2026-03-10T10:10:26.943007+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:29 vm04 bash[28289]: cluster 2026-03-10T10:10:26.943062+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:29 vm04 bash[28289]: cluster 2026-03-10T10:10:28.692702+0000 mon.a (mon.0) 333 : cluster [INF] osd.1 v2:192.168.123.104:6805/2746381987 boot
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:29 vm04 bash[28289]: cluster 2026-03-10T10:10:28.692740+0000 mon.a (mon.0) 334 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:29 vm04 bash[28289]: audit 2026-03-10T10:10:28.694177+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:29 vm04 bash[28289]: audit 2026-03-10T10:10:29.131441+0000 mon.a (mon.0) 336 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:29 vm04 bash[28289]: audit 2026-03-10T10:10:29.136084+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:29 vm04 bash[28289]: audit 2026-03-10T10:10:29.140965+0000 mon.a (mon.0) 338 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:29 vm04 bash[20742]: cluster 2026-03-10T10:10:26.943007+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:29 vm04 bash[20742]: cluster 2026-03-10T10:10:26.943062+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:29 vm04 bash[20742]: cluster 2026-03-10T10:10:28.692702+0000 mon.a (mon.0) 333 : cluster [INF] osd.1 v2:192.168.123.104:6805/2746381987 boot
2026-03-10T10:10:29.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:29 vm04 bash[20742]: cluster 2026-03-10T10:10:28.692740+0000 mon.a (mon.0) 334 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T10:10:29.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:29 vm04 bash[20742]: audit 2026-03-10T10:10:28.694177+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:10:29.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:29 vm04 bash[20742]: audit 2026-03-10T10:10:29.131441+0000 mon.a (mon.0) 336 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:10:29.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:29 vm04 bash[20742]: audit 2026-03-10T10:10:29.136084+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:29.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:29 vm04 bash[20742]: audit 2026-03-10T10:10:29.140965+0000 mon.a (mon.0) 338 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:30.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:29 vm07 bash[23367]: cluster 2026-03-10T10:10:26.943007+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:10:30.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:29 vm07 bash[23367]: cluster 2026-03-10T10:10:26.943062+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:10:30.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:29 vm07 bash[23367]: cluster 2026-03-10T10:10:28.692702+0000 mon.a (mon.0) 333 : cluster [INF] osd.1 v2:192.168.123.104:6805/2746381987 boot
2026-03-10T10:10:30.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:29 vm07 bash[23367]: cluster 2026-03-10T10:10:28.692740+0000 mon.a (mon.0) 334 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T10:10:30.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:29 vm07 bash[23367]: audit 2026-03-10T10:10:28.694177+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:10:30.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:29 vm07 bash[23367]: audit 2026-03-10T10:10:29.131441+0000 mon.a (mon.0) 336 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:10:30.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:29 vm07 bash[23367]: audit 2026-03-10T10:10:29.136084+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:30.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:29 vm07 bash[23367]: audit 2026-03-10T10:10:29.140965+0000 mon.a (mon.0) 338 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:31.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:30 vm07 bash[23367]: cluster 2026-03-10T10:10:29.615719+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:31.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:30 vm07 bash[23367]: cluster 2026-03-10T10:10:29.706237+0000 mon.a (mon.0) 339 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T10:10:31.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:30 vm04 bash[28289]: cluster 2026-03-10T10:10:29.615719+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:31.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:30 vm04 bash[28289]: cluster 2026-03-10T10:10:29.706237+0000 mon.a (mon.0) 339 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T10:10:31.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:30 vm04 bash[20742]: cluster 2026-03-10T10:10:29.615719+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T10:10:31.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:30 vm04 bash[20742]: cluster 2026-03-10T10:10:29.706237+0000 mon.a (mon.0) 339 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T10:10:33.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:32 vm07 bash[23367]: cluster 2026-03-10T10:10:31.616054+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
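The journalctl-relayed records above all share one layout: a teuthology timestamp and logger name, a syslog-style prefix from the systemd unit, then the cluster-log record itself (channel, cluster timestamp, source daemon, sequence number, priority, message). A minimal parsing sketch, assuming only that layout; the field names are mine, not teuthology's:

    import re

    # One journalctl-relayed cluster-log record, as seen in this log.
    RECORD = re.compile(
        r'(?P<teuthology_ts>\S+) '
        r'(?P<level>[A-Z]+):journalctl@(?P<unit>\S+?)\.stdout:'
        r'(?P<syslog_ts>\w+ \d+ [\d:]+) (?P<host>\S+) bash\[(?P<pid>\d+)\]: '
        r'(?P<channel>\w+) (?P<cluster_ts>\S+) (?P<source>\S+ \(\S+\)) (?P<seq>\d+) : '
        r'\w+ \[(?P<prio>[A-Z]+)\] (?P<message>.*)$'
    )

    def parse(line: str):
        m = RECORD.match(line)
        return m.groupdict() if m else None

    sample = ("2026-03-10T10:10:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:"
              "Mar 10 10:10:29 vm04 bash[28289]: cluster 2026-03-10T10:10:26.943007+0000 "
              "osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts")
    print(parse(sample)["message"])   # -> purged_snaps scrub starts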
2026-03-10T10:10:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:32 vm04 bash[28289]: cluster 2026-03-10T10:10:31.616054+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:33.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:32 vm04 bash[20742]: cluster 2026-03-10T10:10:31.616054+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:33.881 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:10:34.735 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-10T10:10:34.747 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch daemon add osd vm04:/dev/vdc
2026-03-10T10:10:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:34 vm04 bash[28289]: cluster 2026-03-10T10:10:33.616349+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:34 vm04 bash[20742]: cluster 2026-03-10T10:10:33.616349+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:35.012 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:34 vm07 bash[23367]: cluster 2026-03-10T10:10:33.616349+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:36.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:36 vm07 bash[23367]: cephadm 2026-03-10T10:10:35.497412+0000 mgr.y (mgr.14150) 100 : cephadm [INF] Detected new or changed devices on vm04
2026-03-10T10:10:36.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:36 vm07 bash[23367]: audit 2026-03-10T10:10:35.503156+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:36.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:36 vm07 bash[23367]: audit 2026-03-10T10:10:35.510245+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:36.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:36 vm07 bash[23367]: audit 2026-03-10T10:10:35.511432+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:10:36.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:36 vm07 bash[23367]: audit 2026-03-10T10:10:35.512366+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:36.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:36 vm07 bash[23367]: audit 2026-03-10T10:10:35.512842+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:10:36.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:36 vm07 bash[23367]: audit 2026-03-10T10:10:35.517285+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:36.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:36 vm07 bash[23367]: cluster 2026-03-10T10:10:35.616584+0000 mgr.y (mgr.14150) 101 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:36 vm04 bash[28289]: cephadm 2026-03-10T10:10:35.497412+0000 mgr.y (mgr.14150) 100 : cephadm [INF] Detected new or changed devices on vm04
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:36 vm04 bash[28289]: audit 2026-03-10T10:10:35.503156+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:36 vm04 bash[28289]: audit 2026-03-10T10:10:35.510245+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:36 vm04 bash[28289]: audit 2026-03-10T10:10:35.511432+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:36 vm04 bash[28289]: audit 2026-03-10T10:10:35.512366+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:36 vm04 bash[28289]: audit 2026-03-10T10:10:35.512842+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:36 vm04 bash[28289]: audit 2026-03-10T10:10:35.517285+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:36 vm04 bash[28289]: cluster 2026-03-10T10:10:35.616584+0000 mgr.y (mgr.14150) 101 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:36 vm04 bash[20742]: cephadm 2026-03-10T10:10:35.497412+0000 mgr.y (mgr.14150) 100 : cephadm [INF] Detected new or changed devices on vm04
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:36 vm04 bash[20742]: audit 2026-03-10T10:10:35.503156+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:36 vm04 bash[20742]: audit 2026-03-10T10:10:35.510245+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:36 vm04 bash[20742]: audit 2026-03-10T10:10:35.511432+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:36 vm04 bash[20742]: audit 2026-03-10T10:10:35.512366+0000 mon.a (mon.0) 343 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:36 vm04 bash[20742]: audit 2026-03-10T10:10:35.512842+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:36 vm04 bash[20742]: audit 2026-03-10T10:10:35.517285+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:36 vm04 bash[20742]: cluster 2026-03-10T10:10:35.616584+0000 mgr.y (mgr.14150) 101 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:38.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:38 vm04 bash[28289]: cluster 2026-03-10T10:10:37.616831+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:38 vm04 bash[20742]: cluster 2026-03-10T10:10:37.616831+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:39.012 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:38 vm07 bash[23367]: cluster 2026-03-10T10:10:37.616831+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:39.387 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:10:39.546 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.540+0000 7f8dd3f43640 1 -- 192.168.123.104:0/525998173 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8dcc104d70 msgr2=0x7f8dcc105170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:10:39.547 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.540+0000 7f8dd3f43640 1 --2- 192.168.123.104:0/525998173 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8dcc104d70 0x7f8dcc105170 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f8dc0009a30 tx=0x7f8dc002f240 comp rx=0 tx=0).stop
2026-03-10T10:10:39.547 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.540+0000 7f8dd3f43640 1 -- 192.168.123.104:0/525998173 shutdown_connections
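At 10:10:34.747 the driver runs "ceph orch daemon add osd vm04:/dev/vdc" inside "cephadm shell"; the stderr trace that follows is the ceph CLI tearing down and rebuilding its monitor session before it can deliver that command to the mgr. A sketch of driving the same step from Python, assuming the binary path, image, and fsid from this run; cephadm_shell is my wrapper, not a teuthology helper:

    import subprocess

    # Values taken verbatim from the log above; they belong to this job only.
    CEPHADM = "/home/ubuntu/cephtest/cephadm"
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "e4c1c9d6-1c68-11f1-a9bd-116050875839"

    def cephadm_shell(*ceph_cmd: str) -> str:
        """Run one ceph command inside `cephadm shell`, as the driver does."""
        out = subprocess.run(
            ["sudo", CEPHADM, "--image", IMAGE, "shell",
             "-c", "/etc/ceph/ceph.conf",
             "-k", "/etc/ceph/ceph.client.admin.keyring",
             "--fsid", FSID, "--", *ceph_cmd],
            check=True, capture_output=True, text=True)
        return out.stdout

    # Hand the freshly zapped device to the orchestrator, as at 10:10:34.747.
    # (The earlier zap went through `cephadm ... ceph-volume -- lvm zap`.)
    cephadm_shell("ceph", "orch", "daemon", "add", "osd", "vm04:/dev/vdc")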
2026-03-10T10:10:39.547 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.540+0000 7f8dd3f43640 1 --2- 192.168.123.104:0/525998173 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8dcc106930 0x7f8dcc10d1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:39.547 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.540+0000 7f8dd3f43640 1 --2- 192.168.123.104:0/525998173 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8dcc105f70 0x7f8dcc1063f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:39.547 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.540+0000 7f8dd3f43640 1 --2- 192.168.123.104:0/525998173 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8dcc104d70 0x7f8dcc105170 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:39.547 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.540+0000 7f8dd3f43640 1 -- 192.168.123.104:0/525998173 >> 192.168.123.104:0/525998173 conn(0x7f8dcc100520 msgr2=0x7f8dcc102940 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:10:39.547 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.540+0000 7f8dd3f43640 1 -- 192.168.123.104:0/525998173 shutdown_connections
2026-03-10T10:10:39.547 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.540+0000 7f8dd3f43640 1 -- 192.168.123.104:0/525998173 wait complete.
2026-03-10T10:10:39.547 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd3f43640 1 Processor -- start
2026-03-10T10:10:39.547 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd3f43640 1 -- start start
2026-03-10T10:10:39.548 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd3f43640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8dcc104d70 0x7f8dcc19c690 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:10:39.548 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd3f43640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8dcc105f70 0x7f8dcc19cbd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:10:39.548 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd3f43640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8dcc106930 0x7f8dcc1a3c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:10:39.548 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd3f43640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f8dcc10fef0 con 0x7f8dcc104d70
2026-03-10T10:10:39.548 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd3f43640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f8dcc10fd70 con 0x7f8dcc106930
2026-03-10T10:10:39.548 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd3f43640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f8dcc110070 con 0x7f8dcc105f70
2026-03-10T10:10:39.548 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd1cb8640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8dcc104d70 0x7f8dcc19c690 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:10:39.548 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd1cb8640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8dcc104d70 0x7f8dcc19c690 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:53162/0 (socket says 192.168.123.104:53162)
2026-03-10T10:10:39.548 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd1cb8640 1 -- 192.168.123.104:0/3408237308 learned_addr learned my addr 192.168.123.104:0/3408237308 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:10:39.548 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd14b7640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8dcc105f70 0x7f8dcc19cbd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:10:39.549 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd24b9640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8dcc106930 0x7f8dcc1a3c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:10:39.549 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd1cb8640 1 -- 192.168.123.104:0/3408237308 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8dcc105f70 msgr2=0x7f8dcc19cbd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:10:39.549 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd1cb8640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8dcc105f70 0x7f8dcc19cbd0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:39.549 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd1cb8640 1 -- 192.168.123.104:0/3408237308 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8dcc106930 msgr2=0x7f8dcc1a3c50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd1cb8640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8dcc106930 0x7f8dcc1a3c50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd1cb8640 1 -- 192.168.123.104:0/3408237308 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8dcc1a4350 con 0x7f8dcc104d70
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd1cb8640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8dcc104d70 0x7f8dcc19c690 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7f8dc0002ba0 tx=0x7f8dc0002f10 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dbaffd640 1 -- 192.168.123.104:0/3408237308 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8dc00041a0 con 0x7f8dcc104d70
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dbaffd640 1 -- 192.168.123.104:0/3408237308 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f8dc00054a0 con 0x7f8dcc104d70
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dbaffd640 1 -- 192.168.123.104:0/3408237308 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8dc0004ce0 con 0x7f8dcc104d70
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd24b9640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8dcc106930 0x7f8dcc1a3c50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd3f43640 1 -- 192.168.123.104:0/3408237308 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f8dcc077530 con 0x7f8dcc104d70
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd3f43640 1 -- 192.168.123.104:0/3408237308 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f8dcc077a10 con 0x7f8dcc104d70
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd14b7640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8dcc105f70 0x7f8dcc19cbd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T10:10:39.550 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dbaffd640 1 -- 192.168.123.104:0/3408237308 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 14) ==== 99944+0+0 (secure 0 0 0) 0x7f8dc00043d0 con 0x7f8dcc104d70
2026-03-10T10:10:39.551 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dbaffd640 1 --2- 192.168.123.104:0/3408237308 >> v2:192.168.123.104:6800/632047608 conn(0x7f8d94077640 0x7f8d94079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:10:39.551 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dd14b7640 1 --2- 192.168.123.104:0/3408237308 >> v2:192.168.123.104:6800/632047608 conn(0x7f8d94077640 0x7f8d94079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:10:39.551 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.544+0000 7f8dbaffd640 1 -- 192.168.123.104:0/3408237308 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(14..14 src has 1..14) ==== 1823+0+0 (secure 0 0 0) 0x7f8dc00bce30 con 0x7f8dcc104d70
2026-03-10T10:10:39.551 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.548+0000 7f8dd14b7640 1 --2- 192.168.123.104:0/3408237308 >> v2:192.168.123.104:6800/632047608 conn(0x7f8d94077640 0x7f8d94079b00 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f8dcc19dbb0 tx=0x7f8dbc008040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:10:39.551 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.548+0000 7f8dd3f43640 1 -- 192.168.123.104:0/3408237308 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8d98005180 con 0x7f8dcc104d70
2026-03-10T10:10:39.554 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.548+0000 7f8dbaffd640 1 -- 192.168.123.104:0/3408237308 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f8dc0047050 con 0x7f8dcc104d70
2026-03-10T10:10:39.652 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:10:39.648+0000 7f8dd3f43640 1 -- 192.168.123.104:0/3408237308 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7f8d98002bf0 con 0x7f8d94077640
2026-03-10T10:10:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:39 vm04 bash[20742]: audit 2026-03-10T10:10:39.655082+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:10:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:39 vm04 bash[20742]: audit 2026-03-10T10:10:39.656379+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
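Once the mgr_command lands, the cephadm mgr module does the work as client.bootstrap-osd, and each step surfaces on the audit channel: "osd tree" and "auth get client.bootstrap-osd" here, "osd new" in the wave below. A small filter for following that trail, assuming only the "... : audit [LVL] payload" shape seen above; the file name is hypothetical:

    import re

    # Audit records end in "... : audit [LVL] <payload>"; pull the payload and
    # print each distinct one once (the journalctl relay doubles every line).
    AUDIT = re.compile(r' : audit \[[A-Z]+\] (.*)$')

    def follow_audit(lines):
        seen = set()
        for line in lines:
            if (m := AUDIT.search(line)) and m.group(1) not in seen:
                seen.add(m.group(1))
                yield m.group(1)

    # Hypothetical usage against a saved copy of this log:
    # with open("teuthology.log") as f:
    #     for payload in follow_audit(f):
    #         print(payload)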
2026-03-10T10:10:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:39 vm04 bash[20742]: audit 2026-03-10T10:10:39.656737+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:39.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:39 vm04 bash[28289]: audit 2026-03-10T10:10:39.655082+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:10:39.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:39 vm04 bash[28289]: audit 2026-03-10T10:10:39.656379+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:10:39.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:39 vm04 bash[28289]: audit 2026-03-10T10:10:39.656737+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:40.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:39 vm07 bash[23367]: audit 2026-03-10T10:10:39.655082+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:10:40.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:39 vm07 bash[23367]: audit 2026-03-10T10:10:39.656379+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:10:40.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:39 vm07 bash[23367]: audit 2026-03-10T10:10:39.656737+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:40.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:40 vm04 bash[28289]: cluster 2026-03-10T10:10:39.617091+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:40.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:40 vm04 bash[28289]: audit 2026-03-10T10:10:39.653444+0000 mgr.y (mgr.14150) 104 : audit [DBG] from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:10:40.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:40 vm04 bash[20742]: cluster 2026-03-10T10:10:39.617091+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:40.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:40 vm04 bash[20742]: audit 2026-03-10T10:10:39.653444+0000 mgr.y (mgr.14150) 104 : audit [DBG] from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:10:41.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:40 vm07 bash[23367]: cluster 2026-03-10T10:10:39.617091+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:41.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:40 vm07 bash[23367]: audit 2026-03-10T10:10:39.653444+0000 mgr.y (mgr.14150) 104 : audit [DBG] from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:10:42.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:42 vm04 bash[28289]: cluster 2026-03-10T10:10:41.617352+0000 mgr.y (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:42.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:42 vm04 bash[20742]: cluster 2026-03-10T10:10:41.617352+0000 mgr.y (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:43.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:42 vm07 bash[23367]: cluster 2026-03-10T10:10:41.617352+0000 mgr.y (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:44.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:44 vm04 bash[28289]: cluster 2026-03-10T10:10:43.617653+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:44.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:44 vm04 bash[20742]: cluster 2026-03-10T10:10:43.617653+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:45.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:44 vm07 bash[23367]: cluster 2026-03-10T10:10:43.617653+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
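The pgmap and osdmap records are the quickest signal of whether the device add is taking effect: total capacity jumped from 20 GiB to 40 GiB once osd.1 booted, and the osdmap below flips to "3 total, 2 up, 3 in" at e15 once "osd new" finishes. A sketch that extracts just those figures, matching the message text as logged:

    import re

    # Pull capacity out of pgmap records and counts out of osdmap records;
    # both patterns match the message text seen in this log.
    PGMAP = re.compile(r'pgmap v(\d+): .*?, (\S+ \S+) used, (\S+ \S+) / (\S+ \S+) avail')
    OSDMAP = re.compile(r'osdmap e(\d+): (\d+) total, (\d+) up, (\d+) in')

    def cluster_state(msg: str):
        if m := PGMAP.search(msg):
            return ("pgmap", int(m.group(1)), m.group(2), m.group(4))
        if m := OSDMAP.search(msg):
            return ("osdmap", int(m.group(1)), m.group(3), m.group(4))
        return None

    print(cluster_state("pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail"))
    # ('pgmap', 71, '53 MiB', '40 GiB')
    print(cluster_state("osdmap e15: 3 total, 2 up, 3 in"))
    # ('osdmap', 15, '2', '3')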
2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: audit 2026-03-10T10:10:45.046279+0000 mon.c (mon.2) 9 : audit [INF] from='client.? 192.168.123.104:0/2626476809' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]: dispatch 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: audit 2026-03-10T10:10:45.046279+0000 mon.c (mon.2) 9 : audit [INF] from='client.? 192.168.123.104:0/2626476809' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]: dispatch 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: audit 2026-03-10T10:10:45.046655+0000 mon.a (mon.0) 349 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]: dispatch 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: audit 2026-03-10T10:10:45.046655+0000 mon.a (mon.0) 349 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]: dispatch 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: audit 2026-03-10T10:10:45.050073+0000 mon.a (mon.0) 350 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]': finished 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: audit 2026-03-10T10:10:45.050073+0000 mon.a (mon.0) 350 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]': finished 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: cluster 2026-03-10T10:10:45.053311+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: cluster 2026-03-10T10:10:45.053311+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: audit 2026-03-10T10:10:45.053462+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: audit 2026-03-10T10:10:45.053462+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: audit 2026-03-10T10:10:45.650804+0000 mon.a (mon.0) 353 : audit [DBG] from='client.? 192.168.123.104:0/1775594491' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:45 vm04 bash[28289]: audit 2026-03-10T10:10:45.650804+0000 mon.a (mon.0) 353 : audit [DBG] from='client.? 
192.168.123.104:0/1775594491' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:45 vm04 bash[20742]: audit 2026-03-10T10:10:45.046279+0000 mon.c (mon.2) 9 : audit [INF] from='client.? 192.168.123.104:0/2626476809' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]: dispatch
2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:45 vm04 bash[20742]: audit 2026-03-10T10:10:45.046655+0000 mon.a (mon.0) 349 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]: dispatch
2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:45 vm04 bash[20742]: audit 2026-03-10T10:10:45.050073+0000 mon.a (mon.0) 350 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]': finished
2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:45 vm04 bash[20742]: cluster 2026-03-10T10:10:45.053311+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:45 vm04 bash[20742]: audit 2026-03-10T10:10:45.053462+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:10:45.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:45 vm04 bash[20742]: audit 2026-03-10T10:10:45.650804+0000 mon.a (mon.0) 353 : audit [DBG] from='client.? 192.168.123.104:0/1775594491' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:10:46.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:45 vm07 bash[23367]: audit 2026-03-10T10:10:45.046279+0000 mon.c (mon.2) 9 : audit [INF] from='client.? 192.168.123.104:0/2626476809' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]: dispatch
2026-03-10T10:10:46.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:45 vm07 bash[23367]: audit 2026-03-10T10:10:45.046655+0000 mon.a (mon.0) 349 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]: dispatch
2026-03-10T10:10:46.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:45 vm07 bash[23367]: audit 2026-03-10T10:10:45.050073+0000 mon.a (mon.0) 350 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "17bb098c-8eff-4065-b511-7925247ef4a5"}]': finished
2026-03-10T10:10:46.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:45 vm07 bash[23367]: cluster 2026-03-10T10:10:45.053311+0000 mon.a (mon.0) 351 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-10T10:10:46.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:45 vm07 bash[23367]: audit 2026-03-10T10:10:45.053462+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:10:46.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:45 vm07 bash[23367]: audit 2026-03-10T10:10:45.650804+0000 mon.a (mon.0) 353 : audit [DBG] from='client.? 192.168.123.104:0/1775594491' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:10:46.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:46 vm04 bash[28289]: cluster 2026-03-10T10:10:45.617894+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:46.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:46 vm04 bash[20742]: cluster 2026-03-10T10:10:45.617894+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:47.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:46 vm07 bash[23367]: cluster 2026-03-10T10:10:45.617894+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:48 vm07 bash[23367]: cluster 2026-03-10T10:10:47.618132+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:49.110 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:48 vm04 bash[20742]: cluster 2026-03-10T10:10:47.618132+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:49.110 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:48 vm04 bash[28289]: cluster 2026-03-10T10:10:47.618132+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:51.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:50 vm07 bash[23367]: cluster 2026-03-10T10:10:49.618401+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
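[editor's note] The audit sequence above is cephadm allocating an id for the new OSD: client.bootstrap-osd fetches the monmap, issues 'osd new' with the volume's fsid, and mon.a logs 'finished' before the osdmap epoch bumps to e15. A minimal sketch of the same allocation done by hand follows; the uuid is hypothetical, and the keyring path assumes a conventional bootstrap-osd layout rather than this cluster's containerized one:

    # Reserve an OSD id for a hypothetical volume uuid; 'ceph osd new'
    # prints the id the monitors allocated for it.
    UUID=$(uuidgen)
    sudo ceph --name client.bootstrap-osd \
        --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
        osd new "$UUID"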
2026-03-10T10:10:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:50 vm04 bash[28289]: cluster 2026-03-10T10:10:49.618401+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:50 vm04 bash[20742]: cluster 2026-03-10T10:10:49.618401+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:53.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:52 vm07 bash[23367]: cluster 2026-03-10T10:10:51.618708+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:52 vm04 bash[28289]: cluster 2026-03-10T10:10:51.618708+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:53.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:52 vm04 bash[20742]: cluster 2026-03-10T10:10:51.618708+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:54.864 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:54 vm04 bash[28289]: cluster 2026-03-10T10:10:53.618957+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:54.864 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:54 vm04 bash[28289]: audit 2026-03-10T10:10:54.341276+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T10:10:54.864 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:54 vm04 bash[28289]: audit 2026-03-10T10:10:54.341713+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:54.865 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:54 vm04 bash[20742]: cluster 2026-03-10T10:10:53.618957+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:54.865 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:54 vm04 bash[20742]: audit 2026-03-10T10:10:54.341276+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T10:10:54.865 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:54 vm04 bash[20742]: audit 2026-03-10T10:10:54.341713+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:54 vm07 bash[23367]: cluster 2026-03-10T10:10:53.618957+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:54 vm07 bash[23367]: audit 2026-03-10T10:10:54.341276+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T10:10:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:54 vm07 bash[23367]: audit 2026-03-10T10:10:54.341713+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:10:55.135 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:55 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:10:55.135 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:55 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:10:55.135 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:10:55 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:10:55.135 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:10:55 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:10:55.135 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:10:55 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
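[editor's note] Every cephadm-managed unit on vm04 logs the same systemd deprecation warning because the generated template ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service sets KillMode=none, presumably so that stopping the daemon's container is left to the unit's own stop scripts rather than to systemd. The warning is cosmetic for this run; the remedy systemd asks for would look roughly like the drop-in below (illustrative only, not something the suite applies):

    # Line 23 of the unit template carries the deprecated directive:
    grep -n 'KillMode' /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service
    # Hypothetical drop-in switching to the KillMode systemd recommends:
    sudo mkdir -p /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee \
        /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d/override.conf
    sudo systemctl daemon-reload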
2026-03-10T10:10:56.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:55 vm07 bash[23367]: cephadm 2026-03-10T10:10:54.342128+0000 mgr.y (mgr.14150) 112 : cephadm [INF] Deploying daemon osd.2 on vm04
2026-03-10T10:10:56.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:55 vm07 bash[23367]: audit 2026-03-10T10:10:55.335283+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:10:56.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:55 vm07 bash[23367]: audit 2026-03-10T10:10:55.341629+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:56.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:55 vm07 bash[23367]: audit 2026-03-10T10:10:55.348220+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:55 vm04 bash[28289]: cephadm 2026-03-10T10:10:54.342128+0000 mgr.y (mgr.14150) 112 : cephadm [INF] Deploying daemon osd.2 on vm04
2026-03-10T10:10:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:55 vm04 bash[28289]: audit 2026-03-10T10:10:55.335283+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:10:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:55 vm04 bash[28289]: audit 2026-03-10T10:10:55.341629+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:55 vm04 bash[28289]: audit 2026-03-10T10:10:55.348220+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:55 vm04 bash[20742]: cephadm 2026-03-10T10:10:54.342128+0000 mgr.y (mgr.14150) 112 : cephadm [INF] Deploying daemon osd.2 on vm04
2026-03-10T10:10:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:55 vm04 bash[20742]: audit 2026-03-10T10:10:55.335283+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:10:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:55 vm04 bash[20742]: audit 2026-03-10T10:10:55.341629+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:55 vm04 bash[20742]: audit 2026-03-10T10:10:55.348220+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:10:57.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:56 vm07 bash[23367]: cluster 2026-03-10T10:10:55.619165+0000 mgr.y (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:56 vm04 bash[28289]: cluster 2026-03-10T10:10:55.619165+0000 mgr.y (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:56 vm04 bash[20742]: cluster 2026-03-10T10:10:55.619165+0000 mgr.y (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:58.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:58 vm04 bash[28289]: cluster 2026-03-10T10:10:57.619483+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:58.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:58 vm04 bash[28289]: audit 2026-03-10T10:10:58.612593+0000 mon.c (mon.2) 10 : audit [INF] from='osd.2 v2:192.168.123.104:6809/1668196037' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T10:10:58.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:58 vm04 bash[28289]: audit 2026-03-10T10:10:58.612858+0000 mon.a (mon.0) 359 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T10:10:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:58 vm04 bash[20742]: cluster 2026-03-10T10:10:57.619483+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:58 vm04 bash[20742]: audit 2026-03-10T10:10:58.612593+0000 mon.c (mon.2) 10 : audit [INF] from='osd.2 v2:192.168.123.104:6809/1668196037' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T10:10:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:58 vm04 bash[20742]: audit 2026-03-10T10:10:58.612858+0000 mon.a (mon.0) 359 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T10:10:59.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:58 vm07 bash[23367]: cluster 2026-03-10T10:10:57.619483+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:10:59.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:58 vm07 bash[23367]: audit 2026-03-10T10:10:58.612593+0000 mon.c (mon.2) 10 : audit [INF] from='osd.2 v2:192.168.123.104:6809/1668196037' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T10:10:59.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:58 vm07 bash[23367]: audit 2026-03-10T10:10:58.612858+0000 mon.a (mon.0) 359 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T10:11:00.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:59 vm04 bash[28289]: audit 2026-03-10T10:10:58.740774+0000 mon.a (mon.0) 360 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T10:11:00.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:59 vm04 bash[28289]: cluster 2026-03-10T10:10:58.743678+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T10:11:00.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:59 vm04 bash[28289]: audit 2026-03-10T10:10:58.744332+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:00.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:59 vm04 bash[28289]: audit 2026-03-10T10:10:58.744800+0000 mon.c (mon.2) 11 : audit [INF] from='osd.2 v2:192.168.123.104:6809/1668196037' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:00.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:10:59 vm04 bash[28289]: audit 2026-03-10T10:10:58.750262+0000 mon.a (mon.0) 363 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:59 vm04 bash[20742]: audit 2026-03-10T10:10:58.740774+0000 mon.a (mon.0) 360 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T10:11:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:59 vm04 bash[20742]: cluster 2026-03-10T10:10:58.743678+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T10:11:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:59 vm04 bash[20742]: audit 2026-03-10T10:10:58.744332+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:59 vm04 bash[20742]: audit 2026-03-10T10:10:58.744800+0000 mon.c (mon.2) 11 : audit [INF] from='osd.2 v2:192.168.123.104:6809/1668196037' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:10:59 vm04 bash[20742]: audit 2026-03-10T10:10:58.750262+0000 mon.a (mon.0) 363 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:00.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:59 vm07 bash[23367]: audit 2026-03-10T10:10:58.740774+0000 mon.a (mon.0) 360 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T10:11:00.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:59 vm07 bash[23367]: cluster 2026-03-10T10:10:58.743678+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T10:11:00.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:59 vm07 bash[23367]: audit 2026-03-10T10:10:58.744332+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:00.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:59 vm07 bash[23367]: audit 2026-03-10T10:10:58.744800+0000 mon.c (mon.2) 11 : audit [INF] from='osd.2 v2:192.168.123.104:6809/1668196037' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:00.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:10:59 vm07 bash[23367]: audit 2026-03-10T10:10:58.750262+0000 mon.a (mon.0) 363 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:01.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:00 vm04 bash[28289]: cluster 2026-03-10T10:10:59.619748+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:11:01.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:00 vm04 bash[28289]: audit 2026-03-10T10:10:59.749037+0000 mon.a (mon.0) 364 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-10T10:11:01.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:00 vm04 bash[28289]: cluster 2026-03-10T10:10:59.752310+0000 mon.a (mon.0) 365 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in
2026-03-10T10:11:01.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:00 vm04 bash[28289]: audit 2026-03-10T10:10:59.757115+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:01.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:00 vm04 bash[28289]: audit 2026-03-10T10:11:00.711607+0000 mon.a (mon.0) 367 : audit [INF] from='osd.2 ' entity='osd.2'
2026-03-10T10:11:01.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:00 vm04 bash[28289]: audit 2026-03-10T10:11:00.754767+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
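[editor's note] The audit entries above are the freshly deployed osd.2 registering itself in the CRUSH map at boot: it first tags itself with the hdd device class, then places itself under host=vm04 within root=default at a weight derived from its capacity, bumping the osdmap to e16 and e17. Expressed as explicit CLI calls the same two operations would be as below (ids, class, and weight taken from the audit entries; a sketch, not commands the suite itself runs):

    # What osd.2 performs against the monitors at startup:
    sudo ceph osd crush set-device-class hdd 2
    sudo ceph osd crush create-or-move 2 0.0195 host=vm04 root=default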
2026-03-10T10:11:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:00 vm04 bash[20742]: cluster 2026-03-10T10:10:59.619748+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:11:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:00 vm04 bash[20742]: audit 2026-03-10T10:10:59.749037+0000 mon.a (mon.0) 364 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-10T10:11:01.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:00 vm04 bash[20742]: cluster 2026-03-10T10:10:59.752310+0000 mon.a (mon.0) 365 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in
2026-03-10T10:11:01.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:00 vm04 bash[20742]: audit 2026-03-10T10:10:59.757115+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:01.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:00 vm04 bash[20742]: audit 2026-03-10T10:11:00.711607+0000 mon.a (mon.0) 367 : audit [INF] from='osd.2 ' entity='osd.2'
2026-03-10T10:11:01.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:00 vm04 bash[20742]: audit 2026-03-10T10:11:00.754767+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:01.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:00 vm07 bash[23367]: cluster 2026-03-10T10:10:59.619748+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T10:11:01.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:00 vm07 bash[23367]: audit 2026-03-10T10:10:59.749037+0000 mon.a (mon.0) 364 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-10T10:11:01.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:00 vm07 bash[23367]: cluster 2026-03-10T10:10:59.752310+0000 mon.a (mon.0) 365 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in
2026-03-10T10:11:01.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:00 vm07 bash[23367]: audit 2026-03-10T10:10:59.757115+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:01.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:00 vm07 bash[23367]: audit 2026-03-10T10:11:00.711607+0000 mon.a (mon.0) 367 : audit [INF] from='osd.2 ' entity='osd.2'
2026-03-10T10:11:01.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:00 vm07 bash[23367]: audit 2026-03-10T10:11:00.754767+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:01 vm04 bash[28289]: cluster 2026-03-10T10:10:59.644464+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:01 vm04 bash[28289]: cluster 2026-03-10T10:10:59.644518+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:01 vm04 bash[28289]: audit 2026-03-10T10:11:01.666279+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:01 vm04 bash[28289]: audit 2026-03-10T10:11:01.672340+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:01 vm04 bash[28289]: cluster 2026-03-10T10:11:01.717902+0000 mon.a (mon.0) 371 : cluster [INF] osd.2 v2:192.168.123.104:6809/1668196037 boot
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:01 vm04 bash[28289]: cluster 2026-03-10T10:11:01.717953+0000 mon.a (mon.0) 372 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:01 vm04 bash[28289]: audit 2026-03-10T10:11:01.718064+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:01 vm04 bash[20742]: cluster 2026-03-10T10:10:59.644464+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:01 vm04 bash[20742]: cluster 2026-03-10T10:10:59.644518+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:01 vm04 bash[20742]: audit 2026-03-10T10:11:01.666279+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:01 vm04 bash[20742]: audit 2026-03-10T10:11:01.672340+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:01 vm04 bash[20742]: cluster 2026-03-10T10:11:01.717902+0000 mon.a (mon.0) 371 : cluster [INF] osd.2 v2:192.168.123.104:6809/1668196037 boot
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:01 vm04 bash[20742]: cluster 2026-03-10T10:11:01.717953+0000 mon.a (mon.0) 372 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-10T10:11:02.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:01 vm04 bash[20742]: audit 2026-03-10T10:11:01.718064+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:02.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:01 vm07 bash[23367]: cluster 2026-03-10T10:10:59.644464+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:11:02.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:01 vm07 bash[23367]: cluster 2026-03-10T10:10:59.644518+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:11:02.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:01 vm07 bash[23367]: audit 2026-03-10T10:11:01.666279+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:02.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:01 vm07 bash[23367]: audit 2026-03-10T10:11:01.672340+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:02.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:01 vm07 bash[23367]: cluster 2026-03-10T10:11:01.717902+0000 mon.a (mon.0) 371 : cluster [INF] osd.2 v2:192.168.123.104:6809/1668196037 boot
2026-03-10T10:11:02.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:01 vm07 bash[23367]: cluster 2026-03-10T10:11:01.717953+0000 mon.a (mon.0) 372 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-10T10:11:02.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:01 vm07 bash[23367]: audit 2026-03-10T10:11:01.718064+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:11:02.734 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 2 on host 'vm04'
2026-03-10T10:11:02.734 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.724+0000 7f8dbaffd640 1 -- 192.168.123.104:0/3408237308 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f8d98002bf0 con 0x7f8d94077640
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 -- 192.168.123.104:0/3408237308 >> v2:192.168.123.104:6800/632047608 conn(0x7f8d94077640 msgr2=0x7f8d94079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 --2- 192.168.123.104:0/3408237308 >> v2:192.168.123.104:6800/632047608 conn(0x7f8d94077640 0x7f8d94079b00 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f8dcc19dbb0 tx=0x7f8dbc008040 comp rx=0 tx=0).stop
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 -- 192.168.123.104:0/3408237308 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8dcc104d70 msgr2=0x7f8dcc19c690 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8dcc104d70 0x7f8dcc19c690 secure :-1 s=READY pgs=104 cs=0 l=1 rev1=1 crypto rx=0x7f8dc0002ba0 tx=0x7f8dc0002f10 comp rx=0 tx=0).stop
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 -- 192.168.123.104:0/3408237308 shutdown_connections
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 --2- 192.168.123.104:0/3408237308 >> v2:192.168.123.104:6800/632047608 conn(0x7f8d94077640 0x7f8d94079b00 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8dcc106930 0x7f8dcc1a3c50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8dcc105f70 0x7f8dcc19cbd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 --2- 192.168.123.104:0/3408237308 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8dcc104d70 0x7f8dcc19c690 unknown :-1 s=CLOSED pgs=104 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 -- 192.168.123.104:0/3408237308 >> 192.168.123.104:0/3408237308 conn(0x7f8dcc100520 msgr2=0x7f8dcc101fe0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 -- 192.168.123.104:0/3408237308 shutdown_connections
2026-03-10T10:11:02.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:02.728+0000 7f8dd3f43640 1 -- 192.168.123.104:0/3408237308 wait complete.
2026-03-10T10:11:02.828 DEBUG:teuthology.orchestra.run.vm04:osd.2> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.2.service
2026-03-10T10:11:02.829 INFO:tasks.cephadm:Deploying osd.3 on vm04 with /dev/vdb...
2026-03-10T10:11:02.829 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- lvm zap /dev/vdb
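[editor's note] With osd.2 up ("Created osd(s) 2"), the cephadm task turns to osd.3 on the same host: the command above runs ceph-volume inside the CI container image to zap /dev/vdb before the new OSD is created on it. A rough equivalent from an admin shell is sketched below; the 'ceph orch' line is a hedged sketch of the follow-up step, not the exact call the task issues:

    # 1. Wipe LVM metadata and partition state from the target device.
    sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        ceph-volume --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- lvm zap /dev/vdb
    # 2. Hand the clean device back to the orchestrator as a new OSD.
    sudo ceph orch daemon add osd vm04:/dev/vdb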
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:02 vm04 bash[20742]: audit 2026-03-10T10:11:02.129603+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:02 vm04 bash[20742]: audit 2026-03-10T10:11:02.129603+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:02 vm04 bash[20742]: audit 2026-03-10T10:11:02.716326+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:02 vm04 bash[20742]: audit 2026-03-10T10:11:02.716326+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:02 vm04 bash[20742]: audit 2026-03-10T10:11:02.722955+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:02 vm04 bash[20742]: audit 2026-03-10T10:11:02.722955+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:02 vm04 bash[20742]: audit 2026-03-10T10:11:02.728306+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:02 vm04 bash[20742]: audit 2026-03-10T10:11:02.728306+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: cluster 2026-03-10T10:11:01.620021+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: cluster 2026-03-10T10:11:01.620021+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.085907+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.085907+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.086481+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.086481+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.129603+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.129603+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.716326+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.716326+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.722955+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.722955+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.728306+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:02.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:02 vm04 bash[28289]: audit 2026-03-10T10:11:02.728306+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: cluster 2026-03-10T10:11:01.620021+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: cluster 2026-03-10T10:11:01.620021+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.085907+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.085907+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.086481+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.086481+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.admin"}]: dispatch 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.129603+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.129603+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.716326+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.716326+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.722955+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.722955+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.728306+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:03.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:02 vm07 bash[23367]: audit 2026-03-10T10:11:02.728306+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:04 vm04 bash[28289]: cluster 2026-03-10T10:11:03.133132+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:04 vm04 bash[28289]: cluster 2026-03-10T10:11:03.133132+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:04 vm04 bash[28289]: cluster 2026-03-10T10:11:03.620334+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:04 vm04 bash[28289]: cluster 2026-03-10T10:11:03.620334+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:04 vm04 bash[28289]: audit 2026-03-10T10:11:03.645484+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:04 vm04 bash[28289]: audit 2026-03-10T10:11:03.645484+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": 
".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:04 vm04 bash[20742]: cluster 2026-03-10T10:11:03.133132+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:04 vm04 bash[20742]: cluster 2026-03-10T10:11:03.133132+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:04 vm04 bash[20742]: cluster 2026-03-10T10:11:03.620334+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:04 vm04 bash[20742]: cluster 2026-03-10T10:11:03.620334+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:04 vm04 bash[20742]: audit 2026-03-10T10:11:03.645484+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:04 vm04 bash[20742]: audit 2026-03-10T10:11:03.645484+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:04.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:04 vm07 bash[23367]: cluster 2026-03-10T10:11:03.133132+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T10:11:04.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:04 vm07 bash[23367]: cluster 2026-03-10T10:11:03.133132+0000 mon.a (mon.0) 380 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T10:11:04.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:04 vm07 bash[23367]: cluster 2026-03-10T10:11:03.620334+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:04.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:04 vm07 bash[23367]: cluster 2026-03-10T10:11:03.620334+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:04.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:04 vm07 bash[23367]: audit 2026-03-10T10:11:03.645484+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:04.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:04 vm07 bash[23367]: audit 2026-03-10T10:11:03.645484+0000 mon.a (mon.0) 381 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:05 vm04 bash[28289]: audit 2026-03-10T10:11:04.137815+0000 
mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:05 vm04 bash[28289]: audit 2026-03-10T10:11:04.137815+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:05 vm04 bash[28289]: cluster 2026-03-10T10:11:04.143552+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:05 vm04 bash[28289]: cluster 2026-03-10T10:11:04.143552+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:05 vm04 bash[28289]: audit 2026-03-10T10:11:04.144911+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:05 vm04 bash[28289]: audit 2026-03-10T10:11:04.144911+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:05 vm04 bash[20742]: audit 2026-03-10T10:11:04.137815+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:05 vm04 bash[20742]: audit 2026-03-10T10:11:04.137815+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:05 vm04 bash[20742]: cluster 2026-03-10T10:11:04.143552+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:05 vm04 bash[20742]: cluster 2026-03-10T10:11:04.143552+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:05 vm04 bash[20742]: audit 2026-03-10T10:11:04.144911+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:05 vm04 bash[20742]: audit 2026-03-10T10:11:04.144911+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 
cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:05.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:05 vm07 bash[23367]: audit 2026-03-10T10:11:04.137815+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:05.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:05 vm07 bash[23367]: audit 2026-03-10T10:11:04.137815+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:05.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:05 vm07 bash[23367]: cluster 2026-03-10T10:11:04.143552+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T10:11:05.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:05 vm07 bash[23367]: cluster 2026-03-10T10:11:04.143552+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T10:11:05.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:05 vm07 bash[23367]: audit 2026-03-10T10:11:04.144911+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:05.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:05 vm07 bash[23367]: audit 2026-03-10T10:11:04.144911+0000 mon.a (mon.0) 384 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.140553+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.140553+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: cluster 2026-03-10T10:11:05.143467+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: cluster 2026-03-10T10:11:05.143467+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.268688+0000 mon.a (mon.0) 387 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 
2026-03-10T10:11:05.268688+0000 mon.a (mon.0) 387 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.289606+0000 mon.a (mon.0) 388 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.289606+0000 mon.a (mon.0) 388 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.289986+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.289986+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.290053+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.290053+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.290094+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.290094+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.292111+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.292111+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.292294+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.292294+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 
2026-03-10T10:11:05.292375+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.140553+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.140553+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: cluster 2026-03-10T10:11:05.143467+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: cluster 2026-03-10T10:11:05.143467+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.268688+0000 mon.a (mon.0) 387 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.268688+0000 mon.a (mon.0) 387 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.289606+0000 mon.a (mon.0) 388 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.289606+0000 mon.a (mon.0) 388 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.289986+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.289986+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.290053+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.290053+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.290094+0000 mon.a 
(mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.290094+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.292111+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.292111+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.292294+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.292294+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.292375+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.292375+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.293603+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.293603+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.311513+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.311513+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.311576+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.311576+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: 
dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.311814+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.311814+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.311881+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.311881+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.311929+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.311929+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.328995+0000 mon.c (mon.2) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: audit 2026-03-10T10:11:05.328995+0000 mon.c (mon.2) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: cluster 2026-03-10T10:11:05.620577+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 creating+peering; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:06 vm04 bash[28289]: cluster 2026-03-10T10:11:05.620577+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 creating+peering; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.292375+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.293603+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.293603+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 
bash[20742]: audit 2026-03-10T10:11:05.311513+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.311513+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.311576+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.311576+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.311814+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.311814+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.311881+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.311881+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.311929+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.311929+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.328995+0000 mon.c (mon.2) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: audit 2026-03-10T10:11:05.328995+0000 mon.c (mon.2) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: cluster 2026-03-10T10:11:05.620577+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 creating+peering; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:06.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:06 vm04 bash[20742]: cluster 2026-03-10T10:11:05.620577+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 creating+peering; 0 B data, 479 MiB used, 60 GiB / 
60 GiB avail 2026-03-10T10:11:06.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.140553+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:06.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.140553+0000 mon.a (mon.0) 385 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T10:11:06.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: cluster 2026-03-10T10:11:05.143467+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-10T10:11:06.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: cluster 2026-03-10T10:11:05.143467+0000 mon.a (mon.0) 386 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-10T10:11:06.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.268688+0000 mon.a (mon.0) 387 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.268688+0000 mon.a (mon.0) 387 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.289606+0000 mon.a (mon.0) 388 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.289606+0000 mon.a (mon.0) 388 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.289986+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.289986+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.290053+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.290053+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.290094+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.514 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.290094+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.292111+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.292111+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.292294+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.292294+0000 mon.a (mon.0) 393 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.292375+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.292375+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.293603+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.293603+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.311513+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.311513+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.311576+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.311576+0000 mon.c (mon.2) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.311814+0000 mon.a (mon.0) 395 : audit 
[DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.311814+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.311881+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.311881+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.311929+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.311929+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.328995+0000 mon.c (mon.2) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: audit 2026-03-10T10:11:05.328995+0000 mon.c (mon.2) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: cluster 2026-03-10T10:11:05.620577+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 creating+peering; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:06.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:06 vm07 bash[23367]: cluster 2026-03-10T10:11:05.620577+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v89: 1 pgs: 1 creating+peering; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:07.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:07 vm04 bash[28289]: cluster 2026-03-10T10:11:06.174530+0000 mon.a (mon.0) 398 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-10T10:11:07.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:07 vm04 bash[28289]: cluster 2026-03-10T10:11:06.174530+0000 mon.a (mon.0) 398 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-10T10:11:07.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:07 vm04 bash[28289]: cluster 2026-03-10T10:11:06.174568+0000 mon.a (mon.0) 399 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T10:11:07.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:07 vm04 bash[28289]: cluster 2026-03-10T10:11:06.174568+0000 mon.a (mon.0) 399 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T10:11:07.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:07 vm04 bash[20742]: cluster 2026-03-10T10:11:06.174530+0000 mon.a (mon.0) 398 : cluster [DBG] mgrmap e15: 
y(active, since 2m), standbys: x 2026-03-10T10:11:07.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:07 vm04 bash[20742]: cluster 2026-03-10T10:11:06.174530+0000 mon.a (mon.0) 398 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-10T10:11:07.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:07 vm04 bash[20742]: cluster 2026-03-10T10:11:06.174568+0000 mon.a (mon.0) 399 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T10:11:07.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:07 vm04 bash[20742]: cluster 2026-03-10T10:11:06.174568+0000 mon.a (mon.0) 399 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T10:11:07.508 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:11:07.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:07 vm07 bash[23367]: cluster 2026-03-10T10:11:06.174530+0000 mon.a (mon.0) 398 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-10T10:11:07.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:07 vm07 bash[23367]: cluster 2026-03-10T10:11:06.174530+0000 mon.a (mon.0) 398 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x 2026-03-10T10:11:07.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:07 vm07 bash[23367]: cluster 2026-03-10T10:11:06.174568+0000 mon.a (mon.0) 399 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T10:11:07.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:07 vm07 bash[23367]: cluster 2026-03-10T10:11:06.174568+0000 mon.a (mon.0) 399 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T10:11:08.424 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:08 vm04 bash[28289]: cluster 2026-03-10T10:11:07.620824+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 creating+peering; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:08.425 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:08 vm04 bash[28289]: cluster 2026-03-10T10:11:07.620824+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 creating+peering; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:08.425 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:08 vm04 bash[20742]: cluster 2026-03-10T10:11:07.620824+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 creating+peering; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:08.425 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:08 vm04 bash[20742]: cluster 2026-03-10T10:11:07.620824+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 creating+peering; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:08.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:08 vm07 bash[23367]: cluster 2026-03-10T10:11:07.620824+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 creating+peering; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:08.513 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:08 vm07 bash[23367]: cluster 2026-03-10T10:11:07.620824+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v91: 1 pgs: 1 creating+peering; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:09.071 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:11:09.084 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch 
daemon add osd vm04:/dev/vdb
2026-03-10T10:11:09.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:09 vm04 bash[20742]: cephadm 2026-03-10T10:11:08.337217+0000 mgr.y (mgr.14150) 120 : cephadm [INF] Detected new or changed devices on vm04
2026-03-10T10:11:09.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:09 vm04 bash[20742]: audit 2026-03-10T10:11:08.370455+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:09.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:09 vm04 bash[20742]: audit 2026-03-10T10:11:08.377268+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:09.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:09 vm04 bash[20742]: audit 2026-03-10T10:11:08.378643+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:11:09.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:09 vm04 bash[20742]: audit 2026-03-10T10:11:08.379738+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:11:09.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:09 vm04 bash[20742]: audit 2026-03-10T10:11:08.380198+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:11:09.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:09 vm04 bash[20742]: audit 2026-03-10T10:11:08.386182+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:10 vm04 bash[20742]: cluster 2026-03-10T10:11:09.621086+0000 mgr.y (mgr.14150) 121 : cluster [DBG] pgmap v92: 1 pgs: 1 creating+peering; 0 B data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:12 vm04 bash[20742]: cluster 2026-03-10T10:11:11.621309+0000 mgr.y (mgr.14150) 122 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:13.715 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:11:13.884 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640
1 -- 192.168.123.104:0/1908874879 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f947410aac0 msgr2=0x7f947410cf80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:11:13.884 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 --2- 192.168.123.104:0/1908874879 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f947410aac0 0x7f947410cf80 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7f947000b3e0 tx=0x7f947002f5b0 comp rx=0 tx=0).stop 2026-03-10T10:11:13.884 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 -- 192.168.123.104:0/1908874879 shutdown_connections 2026-03-10T10:11:13.884 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 --2- 192.168.123.104:0/1908874879 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f947410aac0 0x7f947410cf80 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:11:13.884 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 --2- 192.168.123.104:0/1908874879 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9474103960 0x7f947410a400 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:11:13.885 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 --2- 192.168.123.104:0/1908874879 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9474102f90 0x7f9474103390 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:11:13.885 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 -- 192.168.123.104:0/1908874879 >> 192.168.123.104:0/1908874879 conn(0x7f94740fc8d0 msgr2=0x7f94740fecf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:11:13.885 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 -- 192.168.123.104:0/1908874879 shutdown_connections 2026-03-10T10:11:13.885 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 -- 192.168.123.104:0/1908874879 wait complete. 
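The audit sequence above (mon.a entries 402-404) is the mgr refreshing host state before provisioning: it clears the per-host osd_memory_target override, regenerates the minimal config, and fetches the admin credentials. Run by hand against the same cluster, the dispatched commands would look roughly like this (a sketch assuming an admin keyring on the host; the who/name arguments are taken verbatim from the audit payloads):

    ceph config rm osd/host:vm04 osd_memory_target   # clear the per-host memory target (audit 402)
    ceph config generate-minimal-conf                # minimal ceph.conf shipped to managed daemons (audit 403)
    ceph auth get client.admin                       # fetch the admin credentials (audit 404)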
2026-03-10T10:11:13.885 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 Processor -- start 2026-03-10T10:11:13.885 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 -- start start 2026-03-10T10:11:13.885 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9474102f90 0x7f947419c450 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:11:13.885 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9474103960 0x7f947419c990 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f947410aac0 0x7f94741a3a10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f947410ffb0 con 0x7f947410aac0 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f947410fe30 con 0x7f9474102f90 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f9474110130 con 0x7f9474103960 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f9479f2e640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f947410aac0 0x7f94741a3a10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f9479f2e640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f947410aac0 0x7f94741a3a10 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:47282/0 (socket says 192.168.123.104:47282) 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f9479f2e640 1 -- 192.168.123.104:0/999845675 learned_addr learned my addr 192.168.123.104:0/999845675 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f9478f2c640 1 --2- 192.168.123.104:0/999845675 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9474103960 0x7f947419c990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947972d640 1 --2- 192.168.123.104:0/999845675 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9474102f90 0x7f947419c450 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
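Each [v2:...:3300/0,v1:...:6789/0] pair in the messenger lines above is one monitor's msgr2 (port 3300) and legacy msgr1 (port 6789) address; the client dials all candidate mons in parallel and keeps the first connection to reach READY, marking the others down. As an illustration (not part of this run), the same address pairs can be read out of the monmap with a standard command:

    ceph mon dump    # prints each mon's v2/v1 address pair from the current monmap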
2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f9479f2e640 1 -- 192.168.123.104:0/999845675 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9474103960 msgr2=0x7f947419c990 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f9479f2e640 1 --2- 192.168.123.104:0/999845675 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9474103960 0x7f947419c990 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f9479f2e640 1 -- 192.168.123.104:0/999845675 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9474102f90 msgr2=0x7f947419c450 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f9479f2e640 1 --2- 192.168.123.104:0/999845675 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9474102f90 0x7f947419c450 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:11:13.886 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f9479f2e640 1 -- 192.168.123.104:0/999845675 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f94741a4110 con 0x7f947410aac0 2026-03-10T10:11:13.887 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f9479f2e640 1 --2- 192.168.123.104:0/999845675 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f947410aac0 0x7f94741a3a10 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f9470007c00 tx=0x7f9470007c30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:11:13.887 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f94627fc640 1 -- 192.168.123.104:0/999845675 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9470047070 con 0x7f947410aac0 2026-03-10T10:11:13.887 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f94627fc640 1 -- 192.168.123.104:0/999845675 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f9470004690 con 0x7f947410aac0 2026-03-10T10:11:13.887 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f94627fc640 1 -- 192.168.123.104:0/999845675 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9470037430 con 0x7f947410aac0 2026-03-10T10:11:13.887 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 -- 192.168.123.104:0/999845675 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f94741a43a0 con 0x7f947410aac0 2026-03-10T10:11:13.887 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.880+0000 7f947b9b8640 1 -- 192.168.123.104:0/999845675 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f94741040f0 con 0x7f947410aac0 2026-03-10T10:11:13.888 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.884+0000 7f94627fc640 1 -- 192.168.123.104:0/999845675 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f94700040a0 con 0x7f947410aac0 2026-03-10T10:11:13.889 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.884+0000 7f94627fc640 1 --2- 192.168.123.104:0/999845675 >> v2:192.168.123.104:6800/632047608 conn(0x7f94500776d0 0x7f9450079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:11:13.889 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.884+0000 7f94627fc640 1 -- 192.168.123.104:0/999845675 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(22..22 src has 1..22) ==== 2501+0+0 (secure 0 0 0) 0x7f94700bda90 con 0x7f947410aac0 2026-03-10T10:11:13.889 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.884+0000 7f947972d640 1 --2- 192.168.123.104:0/999845675 >> v2:192.168.123.104:6800/632047608 conn(0x7f94500776d0 0x7f9450079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:11:13.889 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.884+0000 7f947972d640 1 --2- 192.168.123.104:0/999845675 >> v2:192.168.123.104:6800/632047608 conn(0x7f94500776d0 0x7f9450079b90 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f94680097c0 tx=0x7f9468009340 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:11:13.892 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.884+0000 7f947b9b8640 1 -- 192.168.123.104:0/999845675 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f943c005180 con 0x7f947410aac0 2026-03-10T10:11:13.893 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.888+0000 7f94627fc640 1 -- 192.168.123.104:0/999845675 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f947008b750 con 0x7f947410aac0 2026-03-10T10:11:13.991 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:13.988+0000 7f947b9b8640 1 -- 192.168.123.104:0/999845675 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7f943c002bf0 con 0x7f94500776d0 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:14 vm04 bash[28289]: cluster 2026-03-10T10:11:13.621562+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:14 vm04 bash[28289]: cluster 2026-03-10T10:11:13.621562+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:14 vm04 bash[28289]: audit 2026-03-10T10:11:13.994348+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:14 vm04 bash[28289]: audit 2026-03-10T10:11:13.994348+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:14 vm04 bash[28289]: audit 2026-03-10T10:11:13.995668+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 
192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:14 vm04 bash[28289]: audit 2026-03-10T10:11:13.995668+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:14 vm04 bash[28289]: audit 2026-03-10T10:11:13.996113+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:14 vm04 bash[28289]: audit 2026-03-10T10:11:13.996113+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:14 vm04 bash[20742]: cluster 2026-03-10T10:11:13.621562+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:14 vm04 bash[20742]: cluster 2026-03-10T10:11:13.621562+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:14 vm04 bash[20742]: audit 2026-03-10T10:11:13.994348+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:14 vm04 bash[20742]: audit 2026-03-10T10:11:13.994348+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:14 vm04 bash[20742]: audit 2026-03-10T10:11:13.995668+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:14 vm04 bash[20742]: audit 2026-03-10T10:11:13.995668+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:14 vm04 bash[20742]: audit 2026-03-10T10:11:13.996113+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:11:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:14 vm04 bash[20742]: audit 2026-03-10T10:11:13.996113+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:11:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:14 vm07 bash[23367]: cluster 2026-03-10T10:11:13.621562+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 
GiB avail 2026-03-10T10:11:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:14 vm07 bash[23367]: cluster 2026-03-10T10:11:13.621562+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:14 vm07 bash[23367]: audit 2026-03-10T10:11:13.994348+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:11:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:14 vm07 bash[23367]: audit 2026-03-10T10:11:13.994348+0000 mon.a (mon.0) 406 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:11:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:14 vm07 bash[23367]: audit 2026-03-10T10:11:13.995668+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:11:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:14 vm07 bash[23367]: audit 2026-03-10T10:11:13.995668+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:11:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:14 vm07 bash[23367]: audit 2026-03-10T10:11:13.996113+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:11:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:14 vm07 bash[23367]: audit 2026-03-10T10:11:13.996113+0000 mon.a (mon.0) 408 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:11:15.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:15 vm04 bash[20742]: audit 2026-03-10T10:11:13.993026+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:11:15.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:15 vm04 bash[20742]: audit 2026-03-10T10:11:13.993026+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:11:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:15 vm04 bash[28289]: audit 2026-03-10T10:11:13.993026+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:11:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:15 vm04 bash[28289]: audit 2026-03-10T10:11:13.993026+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:11:16.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:15 vm07 bash[23367]: audit 2026-03-10T10:11:13.993026+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14274 -' 
entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:11:16.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:15 vm07 bash[23367]: audit 2026-03-10T10:11:13.993026+0000 mgr.y (mgr.14150) 124 : audit [DBG] from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:11:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:16 vm04 bash[28289]: cluster 2026-03-10T10:11:15.621784+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:16 vm04 bash[28289]: cluster 2026-03-10T10:11:15.621784+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:16 vm04 bash[20742]: cluster 2026-03-10T10:11:15.621784+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:16 vm04 bash[20742]: cluster 2026-03-10T10:11:15.621784+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:17.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:16 vm07 bash[23367]: cluster 2026-03-10T10:11:15.621784+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:17.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:16 vm07 bash[23367]: cluster 2026-03-10T10:11:15.621784+0000 mgr.y (mgr.14150) 125 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:19.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:18 vm04 bash[28289]: cluster 2026-03-10T10:11:17.621995+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:19.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:18 vm04 bash[28289]: cluster 2026-03-10T10:11:17.621995+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:18 vm04 bash[20742]: cluster 2026-03-10T10:11:17.621995+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:18 vm04 bash[20742]: cluster 2026-03-10T10:11:17.621995+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:19.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:18 vm07 bash[23367]: cluster 2026-03-10T10:11:17.621995+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T10:11:19.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:18 vm07 bash[23367]: cluster 2026-03-10T10:11:17.621995+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
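The mgr_command record above is the request driving this whole sequence: the test's orchestrator call, audited on the mons as entry 124 from client.admin and routed to the active mgr (mgr.y). The equivalent interactive form, with the host:device argument exactly as it appears in the log payload, would be:

    ceph orch daemon add osd vm04:/dev/vdb    # ask cephadm to create a standalone OSD on vm04's /dev/vdb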
2026-03-10T10:11:20.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:19 vm04 bash[28289]: audit 2026-03-10T10:11:19.397375+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.104:0/4272788580' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f9a0e546-c40a-4fcc-aaca-082199e602f3"}]: dispatch
2026-03-10T10:11:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:19 vm04 bash[20742]: audit 2026-03-10T10:11:19.397722+0000 mon.a (mon.0) 409 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f9a0e546-c40a-4fcc-aaca-082199e602f3"}]: dispatch
2026-03-10T10:11:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:19 vm04 bash[20742]: audit 2026-03-10T10:11:19.400890+0000 mon.a (mon.0) 410 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f9a0e546-c40a-4fcc-aaca-082199e602f3"}]': finished
2026-03-10T10:11:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:19 vm04 bash[20742]: cluster 2026-03-10T10:11:19.403369+0000 mon.a (mon.0) 411 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-10T10:11:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:19 vm04 bash[20742]: audit 2026-03-10T10:11:19.404193+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:19 vm04 bash[20742]: cluster 2026-03-10T10:11:19.622225+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:20 vm04 bash[20742]: audit 2026-03-10T10:11:20.043550+0000 mon.a (mon.0) 413 : audit [DBG] from='client.? 192.168.123.104:0/4174063099' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:11:22.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:21 vm04 bash[20742]: cluster 2026-03-10T10:11:21.622465+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:24.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:24 vm04 bash[20742]: cluster 2026-03-10T10:11:23.622800+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:26 vm04 bash[20742]: cluster 2026-03-10T10:11:25.623171+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:28.937 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:28 vm04 bash[20742]: cluster 2026-03-10T10:11:27.623499+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:28.937 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:28 vm04 bash[20742]: audit 2026-03-10T10:11:28.676312+0000 mon.a (mon.0) 414 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T10:11:28.937 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:28 vm04 bash[20742]: audit 2026-03-10T10:11:28.677568+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
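The client.bootstrap-osd entries above trace the OSD id being reserved: "osd new" with a fresh UUID is dispatched and finishes, the osdmap bumps to e23 (4 total, 3 up, 4 in), and the mgr then fetches osd.3's key and a minimal config for the new daemon. A rough hand-run equivalent, where the keyring path is the conventional bootstrap location (an assumption, not shown in this log) and the UUID is the one from the audit payload:

    ceph --name client.bootstrap-osd \
         --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
         osd new f9a0e546-c40a-4fcc-aaca-082199e602f3    # prints the allocated OSD id (3 in this run)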
2026-03-10T10:11:29.536 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:29 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:11:29.846 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:29 vm04 bash[20742]: cephadm 2026-03-10T10:11:28.678407+0000 mgr.y (mgr.14150) 132 : cephadm [INF] Deploying daemon osd.3 on vm04
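The systemd complaint repeats for every daemon on the host because it comes from line 23 of the shared unit template ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service, which sets KillMode=none so that cephadm, rather than systemd, tears down the daemon containers. If one wanted to follow systemd's hint, the change it suggests would be a drop-in along these lines (a hypothetical sketch; cephadm does not ship this, and overriding the mode may conflict with the orchestrator's own lifecycle handling):

    # /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d/override.conf (hypothetical)
    [Service]
    KillMode=mixed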
2026-03-10T10:11:30.012 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:29 vm07 bash[23367]: cephadm 2026-03-10T10:11:28.678407+0000 mgr.y (mgr.14150) 132 : cephadm [INF] Deploying daemon osd.3 on vm04
2026-03-10T10:11:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:30 vm04 bash[28289]: cluster 2026-03-10T10:11:29.623736+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:30 vm04 bash[28289]: audit 2026-03-10T10:11:29.777208+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:11:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:30 vm04 bash[28289]: audit 2026-03-10T10:11:29.787186+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:30 vm04 bash[28289]: audit 2026-03-10T10:11:29.795891+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:30 vm04 bash[20742]: cluster 2026-03-10T10:11:29.623736+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:30 vm04 bash[20742]: audit 2026-03-10T10:11:29.777208+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:11:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:30 vm04 bash[20742]: audit 2026-03-10T10:11:29.787186+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:30 vm04 bash[20742]: audit 2026-03-10T10:11:29.795891+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:31.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:30 vm07 bash[23367]: cluster 2026-03-10T10:11:29.623736+0000 mgr.y (mgr.14150) 133 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:31.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:30 vm07 bash[23367]: audit 2026-03-10T10:11:29.777208+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:11:31.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:30 vm07 bash[23367]: audit 2026-03-10T10:11:29.787186+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:31.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:30 vm07 bash[23367]: audit 2026-03-10T10:11:29.795891+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:33.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:32 vm07 bash[23367]: cluster 2026-03-10T10:11:31.624081+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:33.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:32 vm04 bash[20742]: cluster 2026-03-10T10:11:31.624081+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:32 vm04 bash[28289]: cluster 2026-03-10T10:11:31.624081+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:33.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:33 vm04 bash[28289]: audit 2026-03-10T10:11:33.501403+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 v2:192.168.123.104:6813/2182249853' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T10:11:33.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:33 vm04 bash[28289]: audit 2026-03-10T10:11:33.501713+0000 mon.a (mon.0) 419 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T10:11:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:33 vm04 bash[20742]: audit 2026-03-10T10:11:33.501403+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 v2:192.168.123.104:6813/2182249853' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T10:11:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:33 vm04 bash[20742]: audit 2026-03-10T10:11:33.501713+0000 mon.a (mon.0) 419 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T10:11:34.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:33 vm07 bash[23367]: audit 2026-03-10T10:11:33.501403+0000 mon.c (mon.2) 15 : audit [INF] from='osd.3 v2:192.168.123.104:6813/2182249853' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T10:11:34.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:33 vm07 bash[23367]: audit 2026-03-10T10:11:33.501713+0000 mon.a (mon.0) 419 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T10:11:35.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:34 vm07 bash[23367]: cluster 2026-03-10T10:11:33.624368+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:35.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:34 vm07 bash[23367]: audit 2026-03-10T10:11:33.714643+0000 mon.a (mon.0) 420 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T10:11:35.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:34 vm07 bash[23367]: cluster 2026-03-10T10:11:33.717623+0000 mon.a (mon.0) 421 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-10T10:11:35.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:34 vm07 bash[23367]: audit 2026-03-10T10:11:33.718445+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:35.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:34 vm07 bash[23367]: audit 2026-03-10T10:11:33.719970+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 v2:192.168.123.104:6813/2182249853' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:35.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:34 vm07 bash[23367]: audit 2026-03-10T10:11:33.726265+0000 mon.a (mon.0) 423 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:34 vm04 bash[28289]: cluster 2026-03-10T10:11:33.624368+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:34 vm04 bash[28289]: audit 2026-03-10T10:11:33.714643+0000 mon.a (mon.0) 420 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:34 vm04 bash[28289]: cluster 2026-03-10T10:11:33.717623+0000 mon.a (mon.0) 421 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:34 vm04 bash[28289]: audit 2026-03-10T10:11:33.718445+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:34 vm04 bash[28289]: audit 2026-03-10T10:11:33.719970+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 v2:192.168.123.104:6813/2182249853' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
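[editor's note] The audit entries above record osd.3 registering itself with the monitors at startup: it tags its device class, then places itself in the CRUSH hierarchy with the weight derived from its 20 GiB volume. An illustrative mapping of those MonCommands onto the equivalent CLI calls (a sketch; the daemon dispatches these internally, not via the shell):

    # approximate CLI equivalents of what osd.3 dispatches at boot
    ceph osd crush set-device-class hdd 3                          # tag osd.3 as hdd
    ceph osd crush create-or-move 3 0.0195 host=vm04 root=default  # place it in the CRUSH map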
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:34 vm04 bash[28289]: audit 2026-03-10T10:11:33.726265+0000 mon.a (mon.0) 423 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:34 vm04 bash[20742]: cluster 2026-03-10T10:11:33.624368+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:34 vm04 bash[20742]: audit 2026-03-10T10:11:33.714643+0000 mon.a (mon.0) 420 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:34 vm04 bash[20742]: cluster 2026-03-10T10:11:33.717623+0000 mon.a (mon.0) 421 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:34 vm04 bash[20742]: audit 2026-03-10T10:11:33.718445+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:34 vm04 bash[20742]: audit 2026-03-10T10:11:33.719970+0000 mon.c (mon.2) 16 : audit [INF] from='osd.3 v2:192.168.123.104:6813/2182249853' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:34 vm04 bash[20742]: audit 2026-03-10T10:11:33.726265+0000 mon.a (mon.0) 423 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch
2026-03-10T10:11:35.997 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:35 vm04 bash[28289]: audit 2026-03-10T10:11:34.723191+0000 mon.a (mon.0) 424 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-10T10:11:35.997 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:35 vm04 bash[28289]: cluster 2026-03-10T10:11:34.726322+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in
2026-03-10T10:11:35.997 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:35 vm04 bash[28289]: audit 2026-03-10T10:11:34.727984+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:35.997 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:35 vm04 bash[28289]: audit 2026-03-10T10:11:34.735421+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:35.997 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:35 vm04 bash[20742]: audit 2026-03-10T10:11:34.723191+0000 mon.a (mon.0) 424 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-10T10:11:35.997 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:35 vm04 bash[20742]: cluster 2026-03-10T10:11:34.726322+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in
2026-03-10T10:11:35.997 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:35 vm04 bash[20742]: audit 2026-03-10T10:11:34.727984+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:35.997 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:35 vm04 bash[20742]: audit 2026-03-10T10:11:34.735421+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:36.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:35 vm07 bash[23367]: audit 2026-03-10T10:11:34.723191+0000 mon.a (mon.0) 424 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished
2026-03-10T10:11:36.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:35 vm07 bash[23367]: cluster 2026-03-10T10:11:34.726322+0000 mon.a (mon.0) 425 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in
2026-03-10T10:11:36.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:35 vm07 bash[23367]: audit 2026-03-10T10:11:34.727984+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:36.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:35 vm07 bash[23367]: audit 2026-03-10T10:11:34.735421+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: cluster 2026-03-10T10:11:34.491396+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: cluster 2026-03-10T10:11:34.491446+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: cluster 2026-03-10T10:11:35.624614+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: audit 2026-03-10T10:11:35.731318+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: cluster 2026-03-10T10:11:35.739869+0000 mon.a (mon.0) 429 : cluster [INF] osd.3 v2:192.168.123.104:6813/2182249853 boot
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: cluster 2026-03-10T10:11:35.739904+0000 mon.a (mon.0) 430 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: audit 2026-03-10T10:11:35.741044+0000 mon.a (mon.0) 431 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: audit 2026-03-10T10:11:36.131111+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
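[editor's note] The repeated "osd metadata" dispatches above are the mgr polling the new daemon; once the "osd.3 ... boot" message lands, the osdmap moves from 3 up to 4 up. Hypothetical manual spot checks that would show the same state (not commands this run executes):

    # illustrative spot checks
    ceph osd metadata 3   # daemon facts (host, devices, class) once osd.3 registers
    ceph osd stat         # summary, e.g. "4 osds: 4 up, 4 in" after the boot event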
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: audit 2026-03-10T10:11:36.137398+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: audit 2026-03-10T10:11:36.138365+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: audit 2026-03-10T10:11:36.138894+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:11:37.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:36 vm07 bash[23367]: audit 2026-03-10T10:11:36.143386+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:37.046 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: cluster 2026-03-10T10:11:34.491396+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: cluster 2026-03-10T10:11:34.491446+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: cluster 2026-03-10T10:11:35.624614+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: audit 2026-03-10T10:11:35.731318+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: cluster 2026-03-10T10:11:35.739869+0000 mon.a (mon.0) 429 : cluster [INF] osd.3 v2:192.168.123.104:6813/2182249853 boot
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: cluster 2026-03-10T10:11:35.739904+0000 mon.a (mon.0) 430 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: audit 2026-03-10T10:11:35.741044+0000 mon.a (mon.0) 431 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: audit 2026-03-10T10:11:36.131111+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: audit 2026-03-10T10:11:36.137398+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: audit 2026-03-10T10:11:36.138365+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: audit 2026-03-10T10:11:36.138894+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:36 vm04 bash[28289]: audit 2026-03-10T10:11:36.143386+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: cluster 2026-03-10T10:11:34.491396+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: cluster 2026-03-10T10:11:34.491446+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: cluster 2026-03-10T10:11:35.624614+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: audit 2026-03-10T10:11:35.731318+0000 mon.a (mon.0) 428 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: cluster 2026-03-10T10:11:35.739869+0000 mon.a (mon.0) 429 : cluster [INF] osd.3 v2:192.168.123.104:6813/2182249853 boot
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: cluster 2026-03-10T10:11:35.739904+0000 mon.a (mon.0) 430 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: audit 2026-03-10T10:11:35.741044+0000 mon.a (mon.0) 431 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: audit 2026-03-10T10:11:36.131111+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: audit 2026-03-10T10:11:36.137398+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: audit 2026-03-10T10:11:36.138365+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: audit 2026-03-10T10:11:36.138894+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:11:37.098 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:36 vm04 bash[20742]: audit 2026-03-10T10:11:36.143386+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:37.324 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.320+0000 7f94627fc640 1 -- 192.168.123.104:0/999845675 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f943c002bf0 con 0x7f94500776d0
2026-03-10T10:11:37.326 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 3 on host 'vm04'
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.320+0000 7f947b9b8640 1 -- 192.168.123.104:0/999845675 >> v2:192.168.123.104:6800/632047608 conn(0x7f94500776d0 msgr2=0x7f9450079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.320+0000 7f947b9b8640 1 --2- 192.168.123.104:0/999845675 >> v2:192.168.123.104:6800/632047608 conn(0x7f94500776d0 0x7f9450079b90 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f94680097c0 tx=0x7f9468009340 comp rx=0 tx=0).stop
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.320+0000 7f947b9b8640 1 -- 192.168.123.104:0/999845675 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f947410aac0 msgr2=0x7f94741a3a10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.320+0000 7f947b9b8640 1 --2- 192.168.123.104:0/999845675 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f947410aac0 0x7f94741a3a10 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7f9470007c00 tx=0x7f9470007c30 comp rx=0 tx=0).stop
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.320+0000 7f947b9b8640 1 -- 192.168.123.104:0/999845675 shutdown_connections
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.320+0000 7f947b9b8640 1 --2- 192.168.123.104:0/999845675 >> v2:192.168.123.104:6800/632047608 conn(0x7f94500776d0 0x7f9450079b90 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.320+0000 7f947b9b8640 1 --2- 192.168.123.104:0/999845675 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f947410aac0 0x7f94741a3a10 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.320+0000 7f947b9b8640 1 --2- 192.168.123.104:0/999845675 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9474103960 0x7f947419c990 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.320+0000 7f947b9b8640 1 --2- 192.168.123.104:0/999845675 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9474102f90 0x7f947419c450 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.324+0000 7f947b9b8640 1 -- 192.168.123.104:0/999845675 >> 192.168.123.104:0/999845675 conn(0x7f94740fc8d0 msgr2=0x7f947410aea0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.324+0000 7f947b9b8640 1 -- 192.168.123.104:0/999845675 shutdown_connections
2026-03-10T10:11:37.327 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:11:37.324+0000 7f947b9b8640 1 -- 192.168.123.104:0/999845675 wait complete.
2026-03-10T10:11:37.419 DEBUG:teuthology.orchestra.run.vm04:osd.3> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.3.service
2026-03-10T10:11:37.420 INFO:tasks.cephadm:Deploying osd.4 on vm07 with /dev/vde...
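[editor's note] For each device, the cephadm task first zaps any leftover LVM state and then hands the clean device to the orchestrator, as the two vm07 commands recorded below show. Condensed into a shell sketch (image, fsid, host, and device are this job's values; the loop itself is illustrative, not part of the task):

    # illustrative per-device deployment loop, built from the commands in this log
    IMAGE=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
    FSID=e4c1c9d6-1c68-11f1-a9bd-116050875839
    for DEV in /dev/vde; do
      # wipe any previous LVM/partition state on the device
      sudo /home/ubuntu/cephtest/cephadm --image "$IMAGE" ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid "$FSID" -- lvm zap "$DEV"
      # register the device with the orchestrator as a new OSD
      sudo /home/ubuntu/cephtest/cephadm --image "$IMAGE" shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid "$FSID" -- ceph orch daemon add osd "vm07:$DEV"
    done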
2026-03-10T10:11:37.420 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- lvm zap /dev/vde 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:38 vm04 bash[20742]: cluster 2026-03-10T10:11:37.284966+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:38 vm04 bash[20742]: cluster 2026-03-10T10:11:37.284966+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:38 vm04 bash[20742]: audit 2026-03-10T10:11:37.307336+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:38 vm04 bash[20742]: audit 2026-03-10T10:11:37.307336+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:38 vm04 bash[20742]: audit 2026-03-10T10:11:37.312268+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:38 vm04 bash[20742]: audit 2026-03-10T10:11:37.312268+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:38 vm04 bash[20742]: audit 2026-03-10T10:11:37.323184+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:38 vm04 bash[20742]: audit 2026-03-10T10:11:37.323184+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:38 vm04 bash[20742]: cluster 2026-03-10T10:11:37.624913+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:38 vm04 bash[20742]: cluster 2026-03-10T10:11:37.624913+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:38 vm04 bash[28289]: cluster 2026-03-10T10:11:37.284966+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:38 vm04 bash[28289]: cluster 2026-03-10T10:11:37.284966+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:38 vm04 bash[28289]: audit 2026-03-10T10:11:37.307336+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:38 vm04 bash[28289]: audit 2026-03-10T10:11:37.307336+0000 
mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:38 vm04 bash[28289]: audit 2026-03-10T10:11:37.312268+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:38 vm04 bash[28289]: audit 2026-03-10T10:11:37.312268+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:38 vm04 bash[28289]: audit 2026-03-10T10:11:37.323184+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:38 vm04 bash[28289]: audit 2026-03-10T10:11:37.323184+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:38 vm04 bash[28289]: cluster 2026-03-10T10:11:37.624913+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:38.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:38 vm04 bash[28289]: cluster 2026-03-10T10:11:37.624913+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:38.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:38 vm07 bash[23367]: cluster 2026-03-10T10:11:37.284966+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-10T10:11:38.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:38 vm07 bash[23367]: cluster 2026-03-10T10:11:37.284966+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-10T10:11:38.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:38 vm07 bash[23367]: audit 2026-03-10T10:11:37.307336+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:38.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:38 vm07 bash[23367]: audit 2026-03-10T10:11:37.307336+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:11:38.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:38 vm07 bash[23367]: audit 2026-03-10T10:11:37.312268+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:38 vm07 bash[23367]: audit 2026-03-10T10:11:37.312268+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:38 vm07 bash[23367]: audit 2026-03-10T10:11:37.323184+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:38 vm07 bash[23367]: audit 2026-03-10T10:11:37.323184+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:11:38.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
10:11:38 vm07 bash[23367]: cluster 2026-03-10T10:11:37.624913+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:38.763 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:38 vm07 bash[23367]: cluster 2026-03-10T10:11:37.624913+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:40.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:40 vm04 bash[20742]: cluster 2026-03-10T10:11:39.625149+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:40.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:40 vm04 bash[20742]: cluster 2026-03-10T10:11:39.625149+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:40.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:40 vm04 bash[28289]: cluster 2026-03-10T10:11:39.625149+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:40.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:40 vm04 bash[28289]: cluster 2026-03-10T10:11:39.625149+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:41.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:40 vm07 bash[23367]: cluster 2026-03-10T10:11:39.625149+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:41.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:40 vm07 bash[23367]: cluster 2026-03-10T10:11:39.625149+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:42.036 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config 2026-03-10T10:11:42.886 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T10:11:42.901 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch daemon add osd vm07:/dev/vde 2026-03-10T10:11:42.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:42 vm04 bash[20742]: cluster 2026-03-10T10:11:41.625376+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:42.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:42 vm04 bash[20742]: cluster 2026-03-10T10:11:41.625376+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:42.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:42 vm04 bash[28289]: cluster 2026-03-10T10:11:41.625376+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-10T10:11:42.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:42 vm04 bash[28289]: cluster 2026-03-10T10:11:41.625376+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 
2026-03-10T10:11:42.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:42 vm04 bash[20742]: cluster 2026-03-10T10:11:41.625376+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:42.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:42 vm04 bash[28289]: cluster 2026-03-10T10:11:41.625376+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:43.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:42 vm07 bash[23367]: cluster 2026-03-10T10:11:41.625376+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:44.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:43 vm07 bash[23367]: cephadm 2026-03-10T10:11:42.948489+0000 mgr.y (mgr.14150) 140 : cephadm [INF] Detected new or changed devices on vm04
2026-03-10T10:11:44.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:43 vm07 bash[23367]: audit 2026-03-10T10:11:42.955496+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:44.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:43 vm07 bash[23367]: audit 2026-03-10T10:11:42.962209+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:44.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:43 vm07 bash[23367]: audit 2026-03-10T10:11:42.963615+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:11:44.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:43 vm07 bash[23367]: audit 2026-03-10T10:11:42.964502+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:11:44.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:43 vm07 bash[23367]: audit 2026-03-10T10:11:42.964923+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:11:44.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:43 vm07 bash[23367]: audit 2026-03-10T10:11:42.968894+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:44.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:43 vm07 bash[23367]: cluster 2026-03-10T10:11:43.625626+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:43 vm04 bash[28289]: cephadm 2026-03-10T10:11:42.948489+0000 mgr.y (mgr.14150) 140 : cephadm [INF] Detected new or changed devices on vm04
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:43 vm04 bash[28289]: audit 2026-03-10T10:11:42.955496+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:43 vm04 bash[28289]: audit 2026-03-10T10:11:42.962209+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:43 vm04 bash[28289]: audit 2026-03-10T10:11:42.963615+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:43 vm04 bash[28289]: audit 2026-03-10T10:11:42.964502+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:43 vm04 bash[28289]: audit 2026-03-10T10:11:42.964923+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:43 vm04 bash[28289]: audit 2026-03-10T10:11:42.968894+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:43 vm04 bash[28289]: cluster 2026-03-10T10:11:43.625626+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:43 vm04 bash[20742]: cephadm 2026-03-10T10:11:42.948489+0000 mgr.y (mgr.14150) 140 : cephadm [INF] Detected new or changed devices on vm04
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:43 vm04 bash[20742]: audit 2026-03-10T10:11:42.955496+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:43 vm04 bash[20742]: audit 2026-03-10T10:11:42.962209+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:43 vm04 bash[20742]: audit 2026-03-10T10:11:42.963615+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:43 vm04 bash[20742]: audit 2026-03-10T10:11:42.964502+0000 mon.a (mon.0) 444 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:43 vm04 bash[20742]: audit 2026-03-10T10:11:42.964923+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:43 vm04 bash[20742]: audit 2026-03-10T10:11:42.968894+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:11:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:43 vm04 bash[20742]: cluster 2026-03-10T10:11:43.625626+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:46.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:46 vm04 bash[28289]: cluster 2026-03-10T10:11:45.625885+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:46.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:46 vm04 bash[20742]: cluster 2026-03-10T10:11:45.625885+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:47.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:46 vm07 bash[23367]: cluster 2026-03-10T10:11:45.625885+0000 mgr.y (mgr.14150) 142 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:47.513 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config
2026-03-10T10:11:47.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 -- 192.168.123.107:0/3167961632 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f1b70104d80 msgr2=0x7f1b70105180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:11:47.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 --2- 192.168.123.107:0/3167961632 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f1b70104d80 0x7f1b70105180 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7f1b58009a30 tx=0x7f1b5802f220 comp rx=0 tx=0).stop
2026-03-10T10:11:47.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 -- 192.168.123.107:0/3167961632 shutdown_connections
2026-03-10T10:11:47.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 --2- 192.168.123.107:0/3167961632 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f1b70106940 0x7f1b7010d1d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:47.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 --2- 192.168.123.107:0/3167961632 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1b70105f80 0x7f1b70106400 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:47.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 --2- 192.168.123.107:0/3167961632 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f1b70104d80 0x7f1b70105180 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:47.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 -- 192.168.123.107:0/3167961632 >> 192.168.123.107:0/3167961632 conn(0x7f1b70100510 msgr2=0x7f1b70102950 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:11:47.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 -- 192.168.123.107:0/3167961632 shutdown_connections
2026-03-10T10:11:47.649 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 -- 192.168.123.107:0/3167961632 wait complete.
2026-03-10T10:11:47.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 Processor -- start
2026-03-10T10:11:47.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 -- start start
2026-03-10T10:11:47.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1b70104d80 0x7f1b7019c570 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:11:47.650 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f1b70105f80 0x7f1b7019cab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f1b70106940 0x7f1b701a3b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f1b7010fe60 con 0x7f1b70105f80
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f1b7010fce0 con 0x7f1b70104d80
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b77532640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f1b7010ffe0 con 0x7f1b70106940
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b752a7640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1b70104d80 0x7f1b7019c570 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b752a7640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1b70104d80 0x7f1b7019c570 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:46330/0 (socket says 192.168.123.107:46330)
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b752a7640 1 -- 192.168.123.107:0/1345273949 learned_addr learned my addr 192.168.123.107:0/1345273949 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b752a7640 1 -- 192.168.123.107:0/1345273949 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f1b70106940 msgr2=0x7f1b701a3b30 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b75aa8640 1 --2- 192.168.123.107:0/1345273949 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f1b70106940 0x7f1b701a3b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b752a7640 1 --2- 192.168.123.107:0/1345273949 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f1b70106940 0x7f1b701a3b30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:47.651 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b752a7640 1 -- 192.168.123.107:0/1345273949 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f1b70105f80 msgr2=0x7f1b7019cab0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down
2026-03-10T10:11:47.652 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.647+0000 7f1b752a7640 1 --2- 192.168.123.107:0/1345273949 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f1b70105f80 0x7f1b7019cab0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:11:47.652 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b752a7640 1 -- 192.168.123.107:0/1345273949 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f1b701a4230 con 0x7f1b70104d80
2026-03-10T10:11:47.652 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b752a7640 1 --2- 192.168.123.107:0/1345273949 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1b70104d80 0x7f1b7019c570 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f1b58002b30 tx=0x7f1b5802fc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:11:47.652 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b667fc640 1 -- 192.168.123.107:0/1345273949 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1b58004440 con 0x7f1b70104d80
2026-03-10T10:11:47.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b667fc640 1 -- 192.168.123.107:0/1345273949 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f1b5804c070 con 0x7f1b70104d80
2026-03-10T10:11:47.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b77532640 1 -- 192.168.123.107:0/1345273949 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f1b70077530 con 0x7f1b70104d80
2026-03-10T10:11:47.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b667fc640 1 -- 192.168.123.107:0/1345273949 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1b580465e0 con 0x7f1b70104d80
2026-03-10T10:11:47.653 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b77532640 1 -- 192.168.123.107:0/1345273949 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f1b70077a60 con 0x7f1b70104d80
2026-03-10T10:11:47.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b667fc640 1 -- 192.168.123.107:0/1345273949 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f1b58046780 con 0x7f1b70104d80
2026-03-10T10:11:47.654 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b77532640 1 -- 192.168.123.107:0/1345273949 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f1b38005180 con 0x7f1b70104d80
2026-03-10T10:11:47.657 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b667fc640 1 --2- 192.168.123.107:0/1345273949 >> v2:192.168.123.104:6800/632047608 conn(0x7f1b48077600 0x7f1b48079ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:11:47.657 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b667fc640 1 -- 192.168.123.107:0/1345273949 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(27..27 src has 1..27) ==== 2793+0+0 (secure 0 0 0) 0x7f1b580c6320 con 0x7f1b70104d80
2026-03-10T10:11:47.657 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b74aa6640 1 --2- 192.168.123.107:0/1345273949 >> v2:192.168.123.104:6800/632047608 conn(0x7f1b48077600 0x7f1b48079ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:11:47.657 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.651+0000 7f1b74aa6640 1 --2- 192.168.123.107:0/1345273949 >> v2:192.168.123.104:6800/632047608 conn(0x7f1b48077600 0x7f1b48079ac0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f1b60005e00 tx=0x7f1b60005d70 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:11:47.657 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.655+0000 7f1b667fc640 1 -- 192.168.123.107:0/1345273949 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f1b58091010 con 0x7f1b70104d80
2026-03-10T10:11:47.758 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:11:47.755+0000 7f1b77532640 1 -- 192.168.123.107:0/1345273949 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}) -- 0x7f1b38002bf0 con 0x7f1b48077600
2026-03-10T10:11:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:48 vm04 bash[28289]: cluster 2026-03-10T10:11:47.626170+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:48 vm04 bash[28289]: audit 2026-03-10T10:11:47.759856+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.24184 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:11:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:48 vm04 bash[28289]: audit 2026-03-10T10:11:47.761201+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
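The mgr_command line above shows how the CLI routes the call: after fetching get_command_descriptions from mon.b, it recognizes "orch daemon add osd" as a mgr command and dispatches it to the active mgr (the special "mon-mgr" target), here mgr.y at 192.168.123.104:6800. The mons then record every command the mgr issues on the client's behalf in the audit channel, which is what the journalctl lines below are echoing. A rough way to observe the same routing on a live cluster, assuming an admin keyring (a sketch, not taken from this run):

    # which mgr is active and will receive orchestrator commands
    sudo cephadm shell -- ceph mgr stat
    # recent cluster log entries, including the audit trail of dispatched commands
    sudo cephadm shell -- ceph log last 20

The audit entries that follow (osd tree, auth get client.bootstrap-osd, config generate-minimal-conf) are the mgr gathering what it needs before running ceph-volume on vm07.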
2026-03-10T10:11:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:48 vm04 bash[28289]: audit 2026-03-10T10:11:47.762315+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:11:48.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:48 vm04 bash[28289]: audit 2026-03-10T10:11:47.762673+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:11:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:48 vm04 bash[20742]: cluster 2026-03-10T10:11:47.626170+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:48 vm04 bash[20742]: audit 2026-03-10T10:11:47.759856+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.24184 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:11:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:48 vm04 bash[20742]: audit 2026-03-10T10:11:47.761201+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:11:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:48 vm04 bash[20742]: audit 2026-03-10T10:11:47.762315+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:11:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:48 vm04 bash[20742]: audit 2026-03-10T10:11:47.762673+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:11:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:48 vm07 bash[23367]: cluster 2026-03-10T10:11:47.626170+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:48 vm07 bash[23367]: audit 2026-03-10T10:11:47.759856+0000 mgr.y (mgr.14150) 144 : audit [DBG] from='client.24184 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:11:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:48 vm07 bash[23367]: audit 2026-03-10T10:11:47.761201+0000 mon.a (mon.0) 447 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:11:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:48 vm07 bash[23367]: audit 2026-03-10T10:11:47.762315+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:11:49.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:48 vm07 bash[23367]: audit 2026-03-10T10:11:47.762673+0000 mon.a (mon.0) 449 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:11:50.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:50 vm04 bash[28289]: cluster 2026-03-10T10:11:49.626468+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:50 vm04 bash[20742]: cluster 2026-03-10T10:11:49.626468+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:51.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:50 vm07 bash[23367]: cluster 2026-03-10T10:11:49.626468+0000 mgr.y (mgr.14150) 145 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:52.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:52 vm04 bash[28289]: cluster 2026-03-10T10:11:51.626717+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:52.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:52 vm04 bash[20742]: cluster 2026-03-10T10:11:51.626717+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:53.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:52 vm07 bash[23367]: cluster 2026-03-10T10:11:51.626717+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:53.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:53 vm04 bash[28289]: audit 2026-03-10T10:11:53.144815+0000 mon.a (mon.0) 450 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "456a615c-f863-4970-b4a1-90e964abfec7"}]: dispatch
2026-03-10T10:11:53.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:53 vm04 bash[28289]: audit 2026-03-10T10:11:53.145956+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.107:0/2280493305' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "456a615c-f863-4970-b4a1-90e964abfec7"}]: dispatch
2026-03-10T10:11:53.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:53 vm04 bash[28289]: audit 2026-03-10T10:11:53.148837+0000 mon.a (mon.0) 451 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "456a615c-f863-4970-b4a1-90e964abfec7"}]': finished
2026-03-10T10:11:53.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:53 vm04 bash[28289]: cluster 2026-03-10T10:11:53.152661+0000 mon.a (mon.0) 452 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-10T10:11:53.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:53 vm04 bash[28289]: audit 2026-03-10T10:11:53.152811+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:11:53.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:53 vm04 bash[20742]: audit 2026-03-10T10:11:53.144815+0000 mon.a (mon.0) 450 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "456a615c-f863-4970-b4a1-90e964abfec7"}]: dispatch
2026-03-10T10:11:53.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:53 vm04 bash[20742]: audit 2026-03-10T10:11:53.145956+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.107:0/2280493305' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "456a615c-f863-4970-b4a1-90e964abfec7"}]: dispatch
2026-03-10T10:11:53.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:53 vm04 bash[20742]: audit 2026-03-10T10:11:53.148837+0000 mon.a (mon.0) 451 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "456a615c-f863-4970-b4a1-90e964abfec7"}]': finished
2026-03-10T10:11:53.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:53 vm04 bash[20742]: cluster 2026-03-10T10:11:53.152661+0000 mon.a (mon.0) 452 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-10T10:11:53.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:53 vm04 bash[20742]: audit 2026-03-10T10:11:53.152811+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:11:54.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:53 vm07 bash[23367]: audit 2026-03-10T10:11:53.144815+0000 mon.a (mon.0) 450 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "456a615c-f863-4970-b4a1-90e964abfec7"}]: dispatch
2026-03-10T10:11:54.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:53 vm07 bash[23367]: audit 2026-03-10T10:11:53.145956+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.107:0/2280493305' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "456a615c-f863-4970-b4a1-90e964abfec7"}]: dispatch
2026-03-10T10:11:54.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:53 vm07 bash[23367]: audit 2026-03-10T10:11:53.148837+0000 mon.a (mon.0) 451 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "456a615c-f863-4970-b4a1-90e964abfec7"}]': finished
2026-03-10T10:11:54.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:53 vm07 bash[23367]: cluster 2026-03-10T10:11:53.152661+0000 mon.a (mon.0) 452 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-10T10:11:54.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:53 vm07 bash[23367]: audit 2026-03-10T10:11:53.152811+0000 mon.a (mon.0) 453 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:11:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:54 vm07 bash[23367]: cluster 2026-03-10T10:11:53.627033+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:55.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:54 vm07 bash[23367]: audit 2026-03-10T10:11:53.776063+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.107:0/2928116094' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:11:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:54 vm04 bash[28289]: cluster 2026-03-10T10:11:53.627033+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:54 vm04 bash[28289]: audit 2026-03-10T10:11:53.776063+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.107:0/2928116094' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
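The "osd new" / "mon getmap" pair above is ceph-volume at work on vm07: authenticating with the client.bootstrap-osd key fetched earlier by the mgr, it first registers the new OSD id with the mons, which is why the osdmap jumped to "5 total, 4 up, 5 in" (osd.4 now exists but is not up yet), then pulls the monmap for the new daemon's data directory. Roughly the flow those audit lines record, with the uuid taken from this log; the keyring path below is the conventional non-containerized one and is illustrative here, since cephadm keeps the bootstrap key inside the container:

    # reserve an OSD id for the new OSD's fsid, using the bootstrap-osd identity
    ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
        osd new 456a615c-f863-4970-b4a1-90e964abfec7
    # fetch the current monmap for the new daemon's store
    ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
        mon getmap -o monmap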
2026-03-10T10:11:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:54 vm04 bash[20742]: cluster 2026-03-10T10:11:53.627033+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:54 vm04 bash[20742]: audit 2026-03-10T10:11:53.776063+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.107:0/2928116094' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:11:57.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:56 vm07 bash[23367]: cluster 2026-03-10T10:11:55.627303+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:56 vm04 bash[28289]: cluster 2026-03-10T10:11:55.627303+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:56 vm04 bash[20742]: cluster 2026-03-10T10:11:55.627303+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:59.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:11:58 vm07 bash[23367]: cluster 2026-03-10T10:11:57.627529+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:11:58 vm04 bash[28289]: cluster 2026-03-10T10:11:57.627529+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:11:59.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:11:58 vm04 bash[20742]: cluster 2026-03-10T10:11:57.627529+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:01.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:00 vm07 bash[23367]: cluster 2026-03-10T10:11:59.627796+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:00 vm04 bash[28289]: cluster 2026-03-10T10:11:59.627796+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:00 vm04 bash[20742]: cluster 2026-03-10T10:11:59.627796+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:02.893 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:02 vm07 bash[23367]: cluster 2026-03-10T10:12:01.628084+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:02.893 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:02 vm07 bash[23367]: audit 2026-03-10T10:12:02.033238+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T10:12:02.893 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:02 vm07 bash[23367]: audit 2026-03-10T10:12:02.033680+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:02.893 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:02 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:12:02.893 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:12:02 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:12:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:02 vm04 bash[28289]: cluster 2026-03-10T10:12:01.628084+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:02 vm04 bash[28289]: audit 2026-03-10T10:12:02.033238+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T10:12:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:02 vm04 bash[28289]: audit 2026-03-10T10:12:02.033680+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:03.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:02 vm04 bash[20742]: cluster 2026-03-10T10:12:01.628084+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:03.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:02 vm04 bash[20742]: audit 2026-03-10T10:12:02.033238+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T10:12:03.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:02 vm04 bash[20742]: audit 2026-03-10T10:12:02.033238+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T10:12:03.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:02 vm04 bash[20742]: audit 2026-03-10T10:12:02.033680+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:12:03.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:02 vm04 bash[20742]: audit 2026-03-10T10:12:02.033680+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:12:03.218 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:12:03.218 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:12:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
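The KillMode=none warning above recurs for every daemon on the host because it comes from the shared unit template that cephadm writes (/etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service); it is harmless for this run, and the one-line change systemd asks for is the fix. A minimal sketch of applying it by hand with a drop-in, assuming you only want to quiet the warning on a lab host; cephadm can rewrite its unit files on redeploy, so this is illustrative rather than part of the test:

    # add a drop-in override for one instance of the cephadm unit template
    sudo systemctl edit ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.4.service
    # drop-in contents:
    #   [Service]
    #   KillMode=mixed
    sudo systemctl daemon-reload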
2026-03-10T10:12:04.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:03 vm07 bash[23367]: cephadm 2026-03-10T10:12:02.034035+0000 mgr.y (mgr.14150) 152 : cephadm [INF] Deploying daemon osd.4 on vm07
2026-03-10T10:12:04.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:03 vm07 bash[23367]: audit 2026-03-10T10:12:03.131257+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:12:04.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:03 vm07 bash[23367]: audit 2026-03-10T10:12:03.134838+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:04.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:03 vm07 bash[23367]: audit 2026-03-10T10:12:03.139349+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:03 vm04 bash[28289]: cephadm 2026-03-10T10:12:02.034035+0000 mgr.y (mgr.14150) 152 : cephadm [INF] Deploying daemon osd.4 on vm07
2026-03-10T10:12:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:03 vm04 bash[28289]: audit 2026-03-10T10:12:03.131257+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:12:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:03 vm04 bash[28289]: audit 2026-03-10T10:12:03.134838+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:03 vm04 bash[28289]: audit 2026-03-10T10:12:03.139349+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:04.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:03 vm04 bash[20742]: cephadm 2026-03-10T10:12:02.034035+0000 mgr.y (mgr.14150) 152 : cephadm [INF] Deploying daemon osd.4 on vm07
2026-03-10T10:12:04.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:03 vm04 bash[20742]: audit 2026-03-10T10:12:03.131257+0000 mon.a (mon.0) 456 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:12:04.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:03 vm04 bash[20742]: audit 2026-03-10T10:12:03.134838+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:04.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:03 vm04 bash[20742]: audit 2026-03-10T10:12:03.139349+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:05.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:04 vm07 bash[23367]: cluster 2026-03-10T10:12:03.628346+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:05.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:04 vm04 bash[28289]: cluster 2026-03-10T10:12:03.628346+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:05.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:04 vm04 bash[20742]: cluster 2026-03-10T10:12:03.628346+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:06 vm04 bash[28289]: cluster 2026-03-10T10:12:05.628552+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:06 vm04 bash[28289]: audit 2026-03-10T10:12:06.543934+0000 mon.a (mon.0) 459 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T10:12:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:06 vm04 bash[28289]: audit 2026-03-10T10:12:06.544393+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.107:6800/2162643433' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T10:12:07.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:06 vm04 bash[20742]: cluster 2026-03-10T10:12:05.628552+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:07.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:06 vm04 bash[20742]: audit 2026-03-10T10:12:06.543934+0000 mon.a (mon.0) 459 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T10:12:07.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:06 vm04 bash[20742]: audit 2026-03-10T10:12:06.544393+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.107:6800/2162643433' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
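The paired audit entries above show osd.4 registering its own device class as it starts: the entry logged by mon.b (mon.1) is from the monitor the OSD contacted, and the matching entry on the leader mon.a (mon.0) is the forwarded copy that actually executes. The same call can be made by hand through a cephadm shell; a reference sketch using the image and fsid from this run (the daemon already does this at boot, and an already-set class must first be cleared with ceph osd crush rm-device-class):

    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
      shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- \
      ceph osd crush set-device-class hdd 4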
2026-03-10T10:12:07.263 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:06 vm07 bash[23367]: cluster 2026-03-10T10:12:05.628552+0000 mgr.y (mgr.14150) 154 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:07.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:06 vm07 bash[23367]: audit 2026-03-10T10:12:06.543934+0000 mon.a (mon.0) 459 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T10:12:07.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:06 vm07 bash[23367]: audit 2026-03-10T10:12:06.544393+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.107:6800/2162643433' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T10:12:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:08 vm04 bash[28289]: audit 2026-03-10T10:12:06.855234+0000 mon.a (mon.0) 460 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:08 vm04 bash[28289]: cluster 2026-03-10T10:12:06.858590+0000 mon.a (mon.0) 461 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:08 vm04 bash[28289]: audit 2026-03-10T10:12:06.858857+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.107:6800/2162643433' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:08 vm04 bash[28289]: audit 2026-03-10T10:12:06.858858+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:08 vm04 bash[28289]: audit 2026-03-10T10:12:06.859182+0000 mon.a (mon.0) 463 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:08 vm04 bash[28289]: cluster 2026-03-10T10:12:07.628815+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:08 vm04 bash[20742]: audit 2026-03-10T10:12:06.855234+0000 mon.a (mon.0) 460 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:08 vm04 bash[20742]: cluster 2026-03-10T10:12:06.858590+0000 mon.a (mon.0) 461 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:08 vm04 bash[20742]: audit 2026-03-10T10:12:06.858857+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.107:6800/2162643433' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:08 vm04 bash[20742]: audit 2026-03-10T10:12:06.858858+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:08 vm04 bash[20742]: audit 2026-03-10T10:12:06.859182+0000 mon.a (mon.0) 463 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:08 vm04 bash[20742]: cluster 2026-03-10T10:12:07.628815+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:08.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:08 vm07 bash[23367]: audit 2026-03-10T10:12:06.855234+0000 mon.a (mon.0) 460 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T10:12:08.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:08 vm07 bash[23367]: cluster 2026-03-10T10:12:06.858590+0000 mon.a (mon.0) 461 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-10T10:12:08.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:08 vm07 bash[23367]: audit 2026-03-10T10:12:06.858857+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.107:6800/2162643433' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:08.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:08 vm07 bash[23367]: audit 2026-03-10T10:12:06.858858+0000 mon.a (mon.0) 462 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:08.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:08 vm07 bash[23367]: audit 2026-03-10T10:12:06.859182+0000 mon.a (mon.0) 463 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:08.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:08 vm07 bash[23367]: cluster 2026-03-10T10:12:07.628815+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:09.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:09 vm07 bash[23367]: audit 2026-03-10T10:12:07.964012+0000 mon.a (mon.0) 464 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T10:12:09.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:09 vm07 bash[23367]: cluster 2026-03-10T10:12:07.992808+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T10:12:09.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:09 vm07 bash[23367]: audit 2026-03-10T10:12:08.020284+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:09.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:09 vm07 bash[23367]: audit 2026-03-10T10:12:08.049451+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:09.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:09 vm07 bash[23367]: audit 2026-03-10T10:12:09.010686+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4'
2026-03-10T10:12:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:09 vm04 bash[28289]: audit 2026-03-10T10:12:07.964012+0000 mon.a (mon.0) 464 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T10:12:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:09 vm04 bash[28289]: cluster 2026-03-10T10:12:07.992808+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T10:12:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:09 vm04 bash[28289]: audit 2026-03-10T10:12:08.020284+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:09 vm04 bash[28289]: audit 2026-03-10T10:12:08.049451+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:09 vm04 bash[28289]: audit 2026-03-10T10:12:09.010686+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4'
2026-03-10T10:12:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:09 vm04 bash[20742]: audit 2026-03-10T10:12:07.964012+0000 mon.a (mon.0) 464 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T10:12:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:09 vm04 bash[20742]: cluster 2026-03-10T10:12:07.992808+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-10T10:12:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:09 vm04 bash[20742]: audit 2026-03-10T10:12:08.020284+0000 mon.a (mon.0) 466 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:09 vm04 bash[20742]: audit 2026-03-10T10:12:08.049451+0000 mon.a (mon.0) 467 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:09 vm04 bash[20742]: audit 2026-03-10T10:12:09.010686+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4'
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: cluster 2026-03-10T10:12:07.569246+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: cluster 2026-03-10T10:12:07.569308+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: audit 2026-03-10T10:12:09.044741+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: cluster 2026-03-10T10:12:09.051935+0000 mon.a (mon.0) 470 : cluster [INF] osd.4 v2:192.168.123.107:6800/2162643433 boot
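The boot record above completes the osd.4 bring-up: device class registered, CRUSH position established by create-or-move (weight 0.0195 under host=vm07, root=default), and the daemon marked up. A quick verification sketch from the test node, reusing the cephadm shell invocation this run uses elsewhere (not a step the task itself performs):

    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
      shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd tree
    # osd.4 should appear under host vm07 with CLASS hdd and the weight set above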
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: cluster 2026-03-10T10:12:09.052010+0000 mon.a (mon.0) 471 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: audit 2026-03-10T10:12:09.052834+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: audit 2026-03-10T10:12:09.314386+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: audit 2026-03-10T10:12:09.318504+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: cluster 2026-03-10T10:12:09.629089+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: audit 2026-03-10T10:12:09.744344+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: audit 2026-03-10T10:12:09.744885+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: audit 2026-03-10T10:12:09.761900+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:10.354 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:10 vm07 bash[23367]: cluster 2026-03-10T10:12:10.055687+0000 mon.a (mon.0) 478 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 4 on host 'vm07'
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.403+0000 7f1b667fc640 1 -- 192.168.123.107:0/1345273949 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f1b38002bf0 con 0x7f1b48077600
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 -- 192.168.123.107:0/1345273949 >> v2:192.168.123.104:6800/632047608 conn(0x7f1b48077600 msgr2=0x7f1b48079ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 --2- 192.168.123.107:0/1345273949 >> v2:192.168.123.104:6800/632047608 conn(0x7f1b48077600 0x7f1b48079ac0 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f1b60005e00 tx=0x7f1b60005d70 comp rx=0 tx=0).stop
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 -- 192.168.123.107:0/1345273949 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1b70104d80 msgr2=0x7f1b7019c570 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 --2- 192.168.123.107:0/1345273949 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1b70104d80 0x7f1b7019c570 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f1b58002b30 tx=0x7f1b5802fc90 comp rx=0 tx=0).stop
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 -- 192.168.123.107:0/1345273949 shutdown_connections
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 --2- 192.168.123.107:0/1345273949 >> v2:192.168.123.104:6800/632047608 conn(0x7f1b48077600 0x7f1b48079ac0 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 --2- 192.168.123.107:0/1345273949 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f1b70106940 0x7f1b701a3b30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 --2- 192.168.123.107:0/1345273949 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f1b70105f80 0x7f1b7019cab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 --2- 192.168.123.107:0/1345273949 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f1b70104d80 0x7f1b7019c570 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:10.410 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 -- 192.168.123.107:0/1345273949 >> 192.168.123.107:0/1345273949 conn(0x7f1b70100510 msgr2=0x7f1b70101fa0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:12:10.412 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.407+0000 7f1b77532640 1 -- 192.168.123.107:0/1345273949 shutdown_connections
2026-03-10T10:12:10.412 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:10.411+0000 7f1b77532640 1 -- 192.168.123.107:0/1345273949 wait complete.
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: cluster 2026-03-10T10:12:07.569246+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: cluster 2026-03-10T10:12:07.569308+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: audit 2026-03-10T10:12:09.044741+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: cluster 2026-03-10T10:12:09.051935+0000 mon.a (mon.0) 470 : cluster [INF] osd.4 v2:192.168.123.107:6800/2162643433 boot
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: cluster 2026-03-10T10:12:09.052010+0000 mon.a (mon.0) 471 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: audit 2026-03-10T10:12:09.052834+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: audit 2026-03-10T10:12:09.314386+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: audit 2026-03-10T10:12:09.318504+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: cluster 2026-03-10T10:12:09.629089+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: audit 2026-03-10T10:12:09.744344+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: audit 2026-03-10T10:12:09.744885+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: audit 2026-03-10T10:12:09.761900+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:10 vm04 bash[28289]: cluster 2026-03-10T10:12:10.055687+0000 mon.a (mon.0) 478 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: cluster 2026-03-10T10:12:07.569246+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: cluster 2026-03-10T10:12:07.569308+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: audit 2026-03-10T10:12:09.044741+0000 mon.a (mon.0) 469 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: cluster 2026-03-10T10:12:09.051935+0000 mon.a (mon.0) 470 : cluster [INF] osd.4 v2:192.168.123.107:6800/2162643433 boot
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: cluster 2026-03-10T10:12:09.052010+0000 mon.a (mon.0) 471 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: audit 2026-03-10T10:12:09.052834+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: audit 2026-03-10T10:12:09.314386+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: audit 2026-03-10T10:12:09.318504+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:10.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: cluster 2026-03-10T10:12:09.629089+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-10T10:12:10.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: audit 2026-03-10T10:12:09.744344+0000 mon.a (mon.0) 475 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:10.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: audit 2026-03-10T10:12:09.744885+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:10.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: audit 2026-03-10T10:12:09.761900+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:10.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:10 vm04 bash[20742]: cluster 2026-03-10T10:12:10.055687+0000 mon.a (mon.0) 478 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-10T10:12:10.488 DEBUG:teuthology.orchestra.run.vm07:osd.4> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.4.service
2026-03-10T10:12:10.489 INFO:tasks.cephadm:Deploying osd.5 on vm07 with /dev/vdd...
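Every OSD in this task is deployed with the same two-step pattern, shown next for /dev/vdd: cephadm's ceph-volume wrapper first wipes the device, then a cephadm shell asks the orchestrator to create and start the daemon. Condensed from the two commands that follow, with host, device, fsid and image exactly as in this run:

    # step 1: clear any previous LVM state on the device
    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
      ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- lvm zap /dev/vdd
    # step 2: hand the clean device to the orchestrator
    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
      shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch daemon add osd vm07:/dev/vdd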
2026-03-10T10:12:10.489 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- lvm zap /dev/vdd 2026-03-10T10:12:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:11 vm04 bash[28289]: audit 2026-03-10T10:12:10.396485+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:12:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:11 vm04 bash[28289]: audit 2026-03-10T10:12:10.396485+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:12:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:11 vm04 bash[28289]: audit 2026-03-10T10:12:10.401485+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:11 vm04 bash[28289]: audit 2026-03-10T10:12:10.401485+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:11 vm04 bash[28289]: audit 2026-03-10T10:12:10.404966+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:11 vm04 bash[28289]: audit 2026-03-10T10:12:10.404966+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:11 vm04 bash[28289]: cluster 2026-03-10T10:12:11.056321+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:11 vm04 bash[28289]: cluster 2026-03-10T10:12:11.056321+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:11 vm04 bash[20742]: audit 2026-03-10T10:12:10.396485+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:11 vm04 bash[20742]: audit 2026-03-10T10:12:10.396485+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:11 vm04 bash[20742]: audit 2026-03-10T10:12:10.401485+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:11 vm04 bash[20742]: audit 2026-03-10T10:12:10.401485+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:11 vm04 bash[20742]: audit 2026-03-10T10:12:10.404966+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:11 vm04 
bash[20742]: audit 2026-03-10T10:12:10.404966+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:11 vm04 bash[20742]: cluster 2026-03-10T10:12:11.056321+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T10:12:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:11 vm04 bash[20742]: cluster 2026-03-10T10:12:11.056321+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T10:12:11.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:11 vm07 bash[23367]: audit 2026-03-10T10:12:10.396485+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:12:11.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:11 vm07 bash[23367]: audit 2026-03-10T10:12:10.396485+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:12:11.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:11 vm07 bash[23367]: audit 2026-03-10T10:12:10.401485+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:11 vm07 bash[23367]: audit 2026-03-10T10:12:10.401485+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:11 vm07 bash[23367]: audit 2026-03-10T10:12:10.404966+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:11 vm07 bash[23367]: audit 2026-03-10T10:12:10.404966+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:11.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:11 vm07 bash[23367]: cluster 2026-03-10T10:12:11.056321+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T10:12:11.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:11 vm07 bash[23367]: cluster 2026-03-10T10:12:11.056321+0000 mon.a (mon.0) 482 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in 2026-03-10T10:12:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:12 vm04 bash[28289]: cluster 2026-03-10T10:12:11.629377+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v134: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:12 vm04 bash[28289]: cluster 2026-03-10T10:12:11.629377+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v134: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:12.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:12 vm04 bash[20742]: cluster 2026-03-10T10:12:11.629377+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v134: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:12 vm04 bash[20742]: cluster 2026-03-10T10:12:11.629377+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v134: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:12.764 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:12 vm07 bash[23367]: cluster 2026-03-10T10:12:11.629377+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v134: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:12.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:12 vm07 bash[23367]: cluster 2026-03-10T10:12:11.629377+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v134: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:14.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:14 vm04 bash[28289]: cluster 2026-03-10T10:12:13.629700+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 remapped+peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:14.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:14 vm04 bash[28289]: cluster 2026-03-10T10:12:13.629700+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 remapped+peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:14 vm04 bash[20742]: cluster 2026-03-10T10:12:13.629700+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 remapped+peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:14.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:14 vm04 bash[20742]: cluster 2026-03-10T10:12:13.629700+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 remapped+peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:15.013 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:14 vm07 bash[23367]: cluster 2026-03-10T10:12:13.629700+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 remapped+peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:15.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:14 vm07 bash[23367]: cluster 2026-03-10T10:12:13.629700+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v135: 1 pgs: 1 remapped+peering; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-10T10:12:15.161 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config 2026-03-10T10:12:16.660 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T10:12:16.680 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch daemon add osd vm07:/dev/vdd 2026-03-10T10:12:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:16 vm04 bash[28289]: cluster 2026-03-10T10:12:15.630044+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-10T10:12:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:16 vm04 bash[28289]: cluster 2026-03-10T10:12:15.630044+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 68 KiB/s, 0 objects/s recovering 2026-03-10T10:12:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:16 vm04 bash[28289]: audit 2026-03-10T10:12:15.947396+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:12:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:16 vm04 
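The two cephadm invocations above — `ceph-volume ... lvm zap` on the target device, then `ceph orch daemon add osd` from a cephadm shell — are the whole per-device deploy step that tasks.cephadm repeats for each OSD. A minimal sketch of the same sequence, with the image, fsid, and keyring paths taken from the log; the `deploy_osd` helper is illustrative only, not teuthology's actual code, which runs these commands on the remote host over SSH:

```python
# Sketch of the per-device OSD deploy step seen above (illustrative;
# the real task executes these on the remote via teuthology.orchestra).
import subprocess

CEPHADM = "/home/ubuntu/cephtest/cephadm"
IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "e4c1c9d6-1c68-11f1-a9bd-116050875839"
CONF, KEYRING = "/etc/ceph/ceph.conf", "/etc/ceph/ceph.client.admin.keyring"


def deploy_osd(host: str, dev: str) -> None:
    base = ["sudo", CEPHADM, "--image", IMAGE]
    # 1) wipe any leftover LVM/partition state on the device
    subprocess.run(base + ["ceph-volume", "-c", CONF, "-k", KEYRING,
                           "--fsid", FSID, "--", "lvm", "zap", dev],
                   check=True)
    # 2) ask the cephadm orchestrator (mgr) to create an OSD on it
    subprocess.run(base + ["shell", "-c", CONF, "-k", KEYRING,
                           "--fsid", FSID, "--",
                           "ceph", "orch", "daemon", "add", "osd",
                           f"{host}:{dev}"],
                   check=True)


deploy_osd("vm07", "/dev/vdd")
```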
2026-03-10T10:12:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:16 vm04 bash[28289]: audit 2026-03-10T10:12:15.955274+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:16 vm04 bash[28289]: audit 2026-03-10T10:12:15.956654+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:16 vm04 bash[28289]: audit 2026-03-10T10:12:15.957780+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:16 vm04 bash[28289]: audit 2026-03-10T10:12:15.958204+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:16 vm04 bash[28289]: audit 2026-03-10T10:12:15.965665+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:16 vm04 bash[20742]: cluster 2026-03-10T10:12:15.630044+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 68 KiB/s, 0 objects/s recovering
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:16 vm04 bash[20742]: audit 2026-03-10T10:12:15.947396+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:16 vm04 bash[20742]: audit 2026-03-10T10:12:15.955274+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:16 vm04 bash[20742]: audit 2026-03-10T10:12:15.956654+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:16 vm04 bash[20742]: audit 2026-03-10T10:12:15.957780+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:16 vm04 bash[20742]: audit 2026-03-10T10:12:15.958204+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:16 vm04 bash[20742]: audit 2026-03-10T10:12:15.965665+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:17.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:16 vm07 bash[23367]: cluster 2026-03-10T10:12:15.630044+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 68 KiB/s, 0 objects/s recovering
2026-03-10T10:12:17.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:16 vm07 bash[23367]: audit 2026-03-10T10:12:15.947396+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:17.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:16 vm07 bash[23367]: audit 2026-03-10T10:12:15.955274+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:17.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:16 vm07 bash[23367]: audit 2026-03-10T10:12:15.956654+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:12:17.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:16 vm07 bash[23367]: audit 2026-03-10T10:12:15.957780+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:17.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:16 vm07 bash[23367]: audit 2026-03-10T10:12:15.958204+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:17.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:16 vm07 bash[23367]: audit 2026-03-10T10:12:15.965665+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:17 vm04 bash[28289]: cephadm 2026-03-10T10:12:15.939872+0000 mgr.y (mgr.14150) 160 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T10:12:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:17 vm04 bash[28289]: cephadm 2026-03-10T10:12:15.957021+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M
2026-03-10T10:12:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:17 vm04 bash[28289]: cephadm 2026-03-10T10:12:15.957426+0000 mgr.y (mgr.14150) 162 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-10T10:12:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:17 vm04 bash[20742]: cephadm 2026-03-10T10:12:15.939872+0000 mgr.y (mgr.14150) 160 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T10:12:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:17 vm04 bash[20742]: cephadm 2026-03-10T10:12:15.957021+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M
2026-03-10T10:12:17.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:17 vm04 bash[20742]: cephadm 2026-03-10T10:12:15.957426+0000 mgr.y (mgr.14150) 162 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-10T10:12:18.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:17 vm07 bash[23367]: cephadm 2026-03-10T10:12:15.939872+0000 mgr.y (mgr.14150) 160 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T10:12:18.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:17 vm07 bash[23367]: cephadm 2026-03-10T10:12:15.957021+0000 mgr.y (mgr.14150) 161 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M
2026-03-10T10:12:18.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:17 vm07 bash[23367]: cephadm 2026-03-10T10:12:15.957426+0000 mgr.y (mgr.14150) 162 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-10T10:12:19.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:18 vm07 bash[23367]: cluster 2026-03-10T10:12:17.630320+0000 mgr.y (mgr.14150) 163 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-10T10:12:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:18 vm04 bash[28289]: cluster 2026-03-10T10:12:17.630320+0000 mgr.y (mgr.14150) 163 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-10T10:12:19.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:18 vm04 bash[20742]: cluster 2026-03-10T10:12:17.630320+0000 mgr.y (mgr.14150) 163 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-10T10:12:21.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:20 vm07 bash[23367]: cluster 2026-03-10T10:12:19.630606+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering
2026-03-10T10:12:21.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:20 vm04 bash[28289]: cluster 2026-03-10T10:12:19.630606+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering
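The osd_memory_target warning a few entries above is plain arithmetic: cephadm's memory autotuner divided the small VPS host's RAM among its OSDs and arrived at a per-OSD value below the hard floor Ceph accepts for osd_memory_target. A quick check of the two numbers from the [WRN] entry (the autotune inputs themselves are not in this log):

```python
# The two values reported in the cephadm [WRN] entry above.
proposed = 477_921_689   # bytes the autotuner tried to set per OSD
minimum = 939_524_096    # Ceph's floor for osd_memory_target

MiB = 1024 ** 2
print(f"proposed: {proposed / MiB:.1f} MiB")  # 455.8 MiB ("455.7M" in the log, truncated)
print(f"minimum:  {minimum / MiB:.0f} MiB")   # exactly 896 MiB
assert proposed < minimum  # so the mon rejects the config set,
# which is consistent with the "config rm ... osd_memory_target"
# audit entries for osd.4 above.
```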
2026-03-10T10:12:21.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:20 vm04 bash[20742]: cluster 2026-03-10T10:12:19.630606+0000 mgr.y (mgr.14150) 164 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 47 KiB/s, 0 objects/s recovering
2026-03-10T10:12:21.294 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config
2026-03-10T10:12:21.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 -- 192.168.123.107:0/690854642 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb9bc104d70 msgr2=0x7fb9bc105170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:12:21.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 --2- 192.168.123.107:0/690854642 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb9bc104d70 0x7fb9bc105170 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7fb9a8009a80 tx=0x7fb9a802f2b0 comp rx=0 tx=0).stop
2026-03-10T10:12:21.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 -- 192.168.123.107:0/690854642 shutdown_connections
2026-03-10T10:12:21.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 --2- 192.168.123.107:0/690854642 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb9bc106930 0x7fb9bc10d1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:21.442 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 --2- 192.168.123.107:0/690854642 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb9bc105f70 0x7fb9bc1063f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:21.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 --2- 192.168.123.107:0/690854642 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb9bc104d70 0x7fb9bc105170 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:21.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 -- 192.168.123.107:0/690854642 >> 192.168.123.107:0/690854642 conn(0x7fb9bc100520 msgr2=0x7fb9bc102940 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:12:21.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 -- 192.168.123.107:0/690854642 shutdown_connections
2026-03-10T10:12:21.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 -- 192.168.123.107:0/690854642 wait complete.
2026-03-10T10:12:21.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 Processor -- start
2026-03-10T10:12:21.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 -- start start
2026-03-10T10:12:21.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb9bc104d70 0x7fb9bc19c770 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:12:21.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb9bc105f70 0x7fb9bc19ccb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:12:21.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb9bc106930 0x7fb9bc1a3cc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:12:21.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fb9bc10ff50 con 0x7fb9bc105f70
2026-03-10T10:12:21.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fb9bc10fdd0 con 0x7fb9bc106930
2026-03-10T10:12:21.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c2c72640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fb9bc1100d0 con 0x7fb9bc104d70
2026-03-10T10:12:21.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c09e7640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb9bc104d70 0x7fb9bc19c770 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:12:21.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c09e7640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb9bc104d70 0x7fb9bc19c770 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.107:52412/0 (socket says 192.168.123.107:52412)
2026-03-10T10:12:21.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9c09e7640 1 -- 192.168.123.107:0/3297407780 learned_addr learned my addr 192.168.123.107:0/3297407780 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-10T10:12:21.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.438+0000 7fb9bbfff640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb9bc105f70 0x7fb9bc19ccb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:12:21.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c11e8640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb9bc106930 0x7fb9bc1a3cc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:12:21.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c09e7640 1 -- 192.168.123.107:0/3297407780 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb9bc106930 msgr2=0x7fb9bc1a3cc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:12:21.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c09e7640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb9bc106930 0x7fb9bc1a3cc0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:21.444 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c09e7640 1 -- 192.168.123.107:0/3297407780 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb9bc105f70 msgr2=0x7fb9bc19ccb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:12:21.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c09e7640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb9bc105f70 0x7fb9bc19ccb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:21.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c09e7640 1 -- 192.168.123.107:0/3297407780 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb9bc1a43c0 con 0x7fb9bc104d70
2026-03-10T10:12:21.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c11e8640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb9bc106930 0x7fb9bc1a3cc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T10:12:21.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9bbfff640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb9bc105f70 0x7fb9bc19ccb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:12:21.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c09e7640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb9bc104d70 0x7fb9bc19c770 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7fb9a80099a0 tx=0x7fb9a80389a0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:12:21.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9b9ffb640 1 -- 192.168.123.107:0/3297407780 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb9a8004110 con 0x7fb9bc104d70
2026-03-10T10:12:21.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9b9ffb640 1 -- 192.168.123.107:0/3297407780 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fb9a803e070 con 0x7fb9bc104d70
2026-03-10T10:12:21.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c2c72640 1 -- 192.168.123.107:0/3297407780 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb9bc1a4650 con 0x7fb9bc104d70
2026-03-10T10:12:21.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c2c72640 1 -- 192.168.123.107:0/3297407780 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fb9bc1a4b60 con 0x7fb9bc104d70
2026-03-10T10:12:21.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9b9ffb640 1 -- 192.168.123.107:0/3297407780 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb9a8005290 con 0x7fb9bc104d70
2026-03-10T10:12:21.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9b9ffb640 1 -- 192.168.123.107:0/3297407780 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7fb9a80054f0 con 0x7fb9bc104d70
2026-03-10T10:12:21.447 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9c2c72640 1 -- 192.168.123.107:0/3297407780 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb984005180 con 0x7fb9bc104d70
2026-03-10T10:12:21.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.442+0000 7fb9b9ffb640 1 --2- 192.168.123.107:0/3297407780 >> v2:192.168.123.104:6800/632047608 conn(0x7fb99c077540 0x7fb99c079a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:12:21.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.446+0000 7fb9b9ffb640 1 -- 192.168.123.107:0/3297407780 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(33..33 src has 1..33) ==== 3185+0+0 (secure 0 0 0) 0x7fb9a80be1a0 con 0x7fb9bc104d70
2026-03-10T10:12:21.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.446+0000 7fb9bbfff640 1 --2- 192.168.123.107:0/3297407780 >> v2:192.168.123.104:6800/632047608 conn(0x7fb99c077540 0x7fb99c079a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:12:21.449 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.446+0000 7fb9bbfff640 1 --2- 192.168.123.107:0/3297407780 >> v2:192.168.123.104:6800/632047608 conn(0x7fb99c077540 0x7fb99c079a00 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fb9ac006fd0 tx=0x7fb9ac008040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:12:21.451 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.446+0000 7fb9b9ffb640 1 -- 192.168.123.107:0/3297407780 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fb9a8048050 con 0x7fb9bc104d70
2026-03-10T10:12:21.546 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:21.542+0000 7fb9c2c72640 1 -- 192.168.123.107:0/3297407780 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}) -- 0x7fb984002bf0 con 0x7fb99c077540
2026-03-10T10:12:21.722 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:21 vm07 bash[23367]: audit 2026-03-10T10:12:21.549033+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:12:21.722 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:21 vm07 bash[23367]: audit 2026-03-10T10:12:21.550584+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:12:21.722 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:21 vm07 bash[23367]: audit 2026-03-10T10:12:21.551004+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:21 vm04 bash[28289]: audit 2026-03-10T10:12:21.549033+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:12:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:21 vm04 bash[28289]: audit 2026-03-10T10:12:21.550584+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
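The stderr trace above is the ceph CLI inside the cephadm shell doing its normal startup — connecting to a mon, fetching monmap/config/mgrmap, pulling command descriptions — and then handing "orch daemon add osd" to the active mgr as an mgr_command. The same round trip, reduced to python-rados calls (binding names assumed from the python3-rados package; error handling omitted; this is a sketch, not what the CLI literally executes):

```python
import json

import rados  # python3-rados binding (assumed available)

# connect() performs the mon handshake and map subscriptions traced above
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# the orchestrator call is routed to the active mgr, like mgr_command(tid 0) above
cmd = json.dumps({"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd"})
ret, outbuf, outs = cluster.mgr_command(cmd, b"")
print(ret, outbuf.decode() or outs)
cluster.shutdown()
```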
2026-03-10T10:12:21.550584+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:12:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:21 vm04 bash[28289]: audit 2026-03-10T10:12:21.551004+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:12:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:21 vm04 bash[28289]: audit 2026-03-10T10:12:21.551004+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:12:22.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:21 vm04 bash[20742]: audit 2026-03-10T10:12:21.549033+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:12:22.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:21 vm04 bash[20742]: audit 2026-03-10T10:12:21.549033+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:12:22.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:21 vm04 bash[20742]: audit 2026-03-10T10:12:21.550584+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:12:22.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:21 vm04 bash[20742]: audit 2026-03-10T10:12:21.550584+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:12:22.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:21 vm04 bash[20742]: audit 2026-03-10T10:12:21.551004+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:12:22.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:21 vm04 bash[20742]: audit 2026-03-10T10:12:21.551004+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:12:23.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:22 vm07 bash[23367]: audit 2026-03-10T10:12:21.547559+0000 mgr.y (mgr.14150) 165 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:12:23.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:22 vm07 bash[23367]: audit 2026-03-10T10:12:21.547559+0000 mgr.y (mgr.14150) 165 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:12:23.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:22 vm07 bash[23367]: cluster 2026-03-10T10:12:21.630979+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T10:12:23.014 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:22 vm07 bash[23367]: cluster 2026-03-10T10:12:21.630979+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T10:12:23.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:22 vm04 bash[28289]: audit 2026-03-10T10:12:21.547559+0000 mgr.y (mgr.14150) 165 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:12:23.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:22 vm04 bash[28289]: audit 2026-03-10T10:12:21.547559+0000 mgr.y (mgr.14150) 165 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:12:23.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:22 vm04 bash[28289]: cluster 2026-03-10T10:12:21.630979+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T10:12:23.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:22 vm04 bash[28289]: cluster 2026-03-10T10:12:21.630979+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T10:12:23.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:22 vm04 bash[20742]: audit 2026-03-10T10:12:21.547559+0000 mgr.y (mgr.14150) 165 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:12:23.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:22 vm04 bash[20742]: audit 2026-03-10T10:12:21.547559+0000 mgr.y (mgr.14150) 165 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:12:23.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:22 vm04 bash[20742]: cluster 2026-03-10T10:12:21.630979+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T10:12:23.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:22 vm04 bash[20742]: cluster 2026-03-10T10:12:21.630979+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 42 KiB/s, 0 objects/s recovering 2026-03-10T10:12:25.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:24 vm07 bash[23367]: cluster 2026-03-10T10:12:23.631304+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:25.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:24 vm07 bash[23367]: cluster 2026-03-10T10:12:23.631304+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:24 vm04 bash[28289]: cluster 2026-03-10T10:12:23.631304+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 
134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:24 vm04 bash[28289]: cluster 2026-03-10T10:12:23.631304+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:25.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:24 vm04 bash[20742]: cluster 2026-03-10T10:12:23.631304+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:25.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:24 vm04 bash[20742]: cluster 2026-03-10T10:12:23.631304+0000 mgr.y (mgr.14150) 167 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:27.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:26 vm07 bash[23367]: cluster 2026-03-10T10:12:25.631596+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:27.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:26 vm07 bash[23367]: cluster 2026-03-10T10:12:25.631596+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:26 vm04 bash[28289]: cluster 2026-03-10T10:12:25.631596+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:26 vm04 bash[28289]: cluster 2026-03-10T10:12:25.631596+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:26 vm04 bash[20742]: cluster 2026-03-10T10:12:25.631596+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:26 vm04 bash[20742]: cluster 2026-03-10T10:12:25.631596+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:12:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:27 vm04 bash[28289]: audit 2026-03-10T10:12:26.909198+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.107:0/3766923670' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c651c78e-882b-47c6-84ff-5a4b54b94531"}]: dispatch 2026-03-10T10:12:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:27 vm04 bash[28289]: audit 2026-03-10T10:12:26.909198+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 
192.168.123.107:0/3766923670' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c651c78e-882b-47c6-84ff-5a4b54b94531"}]: dispatch
2026-03-10T10:12:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:27 vm04 bash[28289]: audit 2026-03-10T10:12:26.909432+0000 mon.a (mon.0) 492 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c651c78e-882b-47c6-84ff-5a4b54b94531"}]: dispatch
2026-03-10T10:12:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:27 vm04 bash[28289]: audit 2026-03-10T10:12:26.912623+0000 mon.a (mon.0) 493 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c651c78e-882b-47c6-84ff-5a4b54b94531"}]': finished
2026-03-10T10:12:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:27 vm04 bash[28289]: cluster 2026-03-10T10:12:26.914996+0000 mon.a (mon.0) 494 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in
2026-03-10T10:12:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:27 vm04 bash[28289]: audit 2026-03-10T10:12:26.915108+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:27 vm04 bash[28289]: audit 2026-03-10T10:12:27.487696+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.107:0/297505163' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:12:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:27 vm04 bash[20742]: audit 2026-03-10T10:12:26.909198+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.107:0/3766923670' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c651c78e-882b-47c6-84ff-5a4b54b94531"}]: dispatch
2026-03-10T10:12:28.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:27 vm04 bash[20742]: audit 2026-03-10T10:12:26.909432+0000 mon.a (mon.0) 492 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c651c78e-882b-47c6-84ff-5a4b54b94531"}]: dispatch
2026-03-10T10:12:28.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:27 vm04 bash[20742]: audit 2026-03-10T10:12:26.912623+0000 mon.a (mon.0) 493 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c651c78e-882b-47c6-84ff-5a4b54b94531"}]': finished
2026-03-10T10:12:28.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:27 vm04 bash[20742]: cluster 2026-03-10T10:12:26.914996+0000 mon.a (mon.0) 494 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in
2026-03-10T10:12:28.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:27 vm04 bash[20742]: audit 2026-03-10T10:12:26.915108+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:28.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:27 vm04 bash[20742]: audit 2026-03-10T10:12:27.487696+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.107:0/297505163' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T10:12:28.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:27 vm07 bash[23367]: audit 2026-03-10T10:12:26.909198+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.107:0/3766923670' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c651c78e-882b-47c6-84ff-5a4b54b94531"}]: dispatch
2026-03-10T10:12:28.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:27 vm07 bash[23367]: audit 2026-03-10T10:12:26.909432+0000 mon.a (mon.0) 492 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c651c78e-882b-47c6-84ff-5a4b54b94531"}]: dispatch
2026-03-10T10:12:28.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:27 vm07 bash[23367]: audit 2026-03-10T10:12:26.912623+0000 mon.a (mon.0) 493 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c651c78e-882b-47c6-84ff-5a4b54b94531"}]': finished
2026-03-10T10:12:28.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:27 vm07 bash[23367]: cluster 2026-03-10T10:12:26.914996+0000 mon.a (mon.0) 494 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in
2026-03-10T10:12:28.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:27 vm07 bash[23367]: audit 2026-03-10T10:12:26.915108+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:28.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:27 vm07 bash[23367]: audit 2026-03-10T10:12:27.487696+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.107:0/297505163' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
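
The bootstrap-osd traffic above is the OSD-creation handshake: ceph-volume, authenticated as client.bootstrap-osd, fetches the monmap (`mon getmap`) and registers the new OSD's fsid with `osd new`; once the command reaches `finished`, the monitor cuts a new osdmap epoch (e34 here). A minimal sketch of the same registration done by hand, assuming a node with a bootstrap-osd or admin keyring (the uuid is simply the fsid taken from this log):

    # register a new OSD id for the given fsid; the command prints the id
    # the cluster allocated
    uuid=c651c78e-882b-47c6-84ff-5a4b54b94531
    sudo ceph osd new "$uuid"
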
2026-03-10T10:12:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:28 vm04 bash[28289]: cluster 2026-03-10T10:12:27.631846+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:28 vm04 bash[20742]: cluster 2026-03-10T10:12:27.631846+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:29.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:28 vm07 bash[23367]: cluster 2026-03-10T10:12:27.631846+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:30 vm04 bash[28289]: cluster 2026-03-10T10:12:29.632096+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:30 vm04 bash[20742]: cluster 2026-03-10T10:12:29.632096+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:31.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:30 vm07 bash[23367]: cluster 2026-03-10T10:12:29.632096+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:32 vm04 bash[28289]: cluster 2026-03-10T10:12:31.632329+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:32 vm04 bash[20742]: cluster 2026-03-10T10:12:31.632329+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:33.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:32 vm07 bash[23367]: cluster 2026-03-10T10:12:31.632329+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:34 vm04 bash[28289]: cluster 2026-03-10T10:12:33.632590+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:34 vm04 bash[20742]: cluster 2026-03-10T10:12:33.632590+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:35.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:34 vm07 bash[23367]: cluster 2026-03-10T10:12:33.632590+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:36.133 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:35 vm07 bash[23367]: cluster 2026-03-10T10:12:35.632838+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:35 vm04 bash[28289]: cluster 2026-03-10T10:12:35.632838+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
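
Each pgmap line is the mgr's periodic cluster digest and reads left to right as: map version (v143, v144, ...), PG count with a state breakdown (1 pgs: 1 active+clean), logical data stored (449 KiB), raw space consumed (134 MiB), and free/total raw capacity (100 GiB / 100 GiB); the steady one-version-per-tick cadence above is just idle reporting. The same digest can be printed on demand, e.g.:

    # one-shot version of the summary the mgr logs periodically
    sudo ceph pg stat
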
2026-03-10T10:12:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:35 vm04 bash[20742]: cluster 2026-03-10T10:12:35.632838+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:36.989 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:36 vm07 bash[23367]: audit 2026-03-10T10:12:36.184339+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T10:12:36.989 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:36 vm07 bash[23367]: audit 2026-03-10T10:12:36.184816+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:36.989 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:36 vm07 bash[23367]: cephadm 2026-03-10T10:12:36.185192+0000 mgr.y (mgr.14150) 174 : cephadm [INF] Deploying daemon osd.5 on vm07
2026-03-10T10:12:36.989 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:36 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:12:36.989 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:12:36 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:12:36.989 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:12:36 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:12:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:36 vm04 bash[28289]: audit 2026-03-10T10:12:36.184339+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T10:12:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:36 vm04 bash[28289]: audit 2026-03-10T10:12:36.184816+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:36 vm04 bash[28289]: cephadm 2026-03-10T10:12:36.185192+0000 mgr.y (mgr.14150) 174 : cephadm [INF] Deploying daemon osd.5 on vm07
2026-03-10T10:12:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:36 vm04 bash[20742]: audit 2026-03-10T10:12:36.184339+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T10:12:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:36 vm04 bash[20742]: audit 2026-03-10T10:12:36.184816+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:36 vm04 bash[20742]: cephadm 2026-03-10T10:12:36.185192+0000 mgr.y (mgr.14150) 174 : cephadm [INF] Deploying daemon osd.5 on vm07
2026-03-10T10:12:37.242 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:12:37.242 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:12:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:12:37.242 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:12:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:12:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:37 vm04 bash[28289]: audit 2026-03-10T10:12:37.265774+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:12:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:37 vm04 bash[28289]: audit 2026-03-10T10:12:37.271239+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:37 vm04 bash[28289]: audit 2026-03-10T10:12:37.275424+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:37 vm04 bash[28289]: cluster 2026-03-10T10:12:37.633049+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
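
The repeated KillMode warning is expected noise rather than a failure: line 23 of the unit template cephadm writes for this cluster's daemons sets KillMode=none deliberately, since the container runtime, not systemd, is responsible for tearing the daemon's processes down, and newer systemd releases flag that setting as deprecated. Assuming shell access on vm07, the offending line could be confirmed with something like:

    # show the KillMode setting systemd is complaining about (unit line 23)
    sed -n '20,25p' /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service
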
2026-03-10T10:12:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:37 vm04 bash[20742]: audit 2026-03-10T10:12:37.265774+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:12:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:37 vm04 bash[20742]: audit 2026-03-10T10:12:37.271239+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:37 vm04 bash[20742]: audit 2026-03-10T10:12:37.275424+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:37 vm04 bash[20742]: cluster 2026-03-10T10:12:37.633049+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:38.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:37 vm07 bash[23367]: audit 2026-03-10T10:12:37.265774+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:12:38.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:37 vm07 bash[23367]: audit 2026-03-10T10:12:37.271239+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:38.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:37 vm07 bash[23367]: audit 2026-03-10T10:12:37.275424+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:38.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:37 vm07 bash[23367]: cluster 2026-03-10T10:12:37.633049+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:40 vm04 bash[28289]: cluster 2026-03-10T10:12:39.633294+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:40 vm04 bash[28289]: audit 2026-03-10T10:12:40.536791+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.107:6804/1022745989' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T10:12:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:40 vm04 bash[28289]: audit 2026-03-10T10:12:40.537221+0000 mon.a (mon.0) 501 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T10:12:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:40 vm04 bash[20742]: cluster 2026-03-10T10:12:39.633294+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:40 vm04 bash[20742]: audit 2026-03-10T10:12:40.536791+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.107:6804/1022745989' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T10:12:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:40 vm04 bash[20742]: audit 2026-03-10T10:12:40.537221+0000 mon.a (mon.0) 501 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T10:12:41.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:40 vm07 bash[23367]: cluster 2026-03-10T10:12:39.633294+0000 mgr.y (mgr.14150) 176 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:41.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:40 vm07 bash[23367]: audit 2026-03-10T10:12:40.536791+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.107:6804/1022745989' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T10:12:41.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:40 vm07 bash[23367]: audit 2026-03-10T10:12:40.537221+0000 mon.a (mon.0) 501 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T10:12:42.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:41 vm07 bash[23367]: audit 2026-03-10T10:12:40.698400+0000 mon.a (mon.0) 502 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T10:12:42.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:41 vm07 bash[23367]: cluster 2026-03-10T10:12:40.702185+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-10T10:12:42.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:41 vm07 bash[23367]: audit 2026-03-10T10:12:40.702411+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:42.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:41 vm07 bash[23367]: audit 2026-03-10T10:12:40.702925+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.107:6804/1022745989' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:42.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:41 vm07 bash[23367]: audit 2026-03-10T10:12:40.703293+0000 mon.a (mon.0) 505 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:41 vm04 bash[28289]: audit 2026-03-10T10:12:40.698400+0000 mon.a (mon.0) 502 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T10:12:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:41 vm04 bash[28289]: cluster 2026-03-10T10:12:40.702185+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-10T10:12:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:41 vm04 bash[28289]: audit 2026-03-10T10:12:40.702411+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:41 vm04 bash[28289]: audit 2026-03-10T10:12:40.702925+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.107:6804/1022745989' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
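
The weight in the create-or-move call is the CRUSH weight, conventionally the device size expressed in TiB: these OSDs are backed by roughly 20 GiB volumes, and 20 / 1024 ≈ 0.0195, which also matches the pgmap totals growing from 100 GiB raw with five OSDs up to 120 GiB once osd.5 joins. A one-liner to check the arithmetic:

    # CRUSH weight for a 20 GiB device, in TiB
    echo 'scale=4; 20/1024' | bc    # -> .0195
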
2026-03-10T10:12:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:41 vm04 bash[28289]: audit 2026-03-10T10:12:40.703293+0000 mon.a (mon.0) 505 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:41 vm04 bash[20742]: audit 2026-03-10T10:12:40.698400+0000 mon.a (mon.0) 502 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T10:12:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:41 vm04 bash[20742]: cluster 2026-03-10T10:12:40.702185+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in
2026-03-10T10:12:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:41 vm04 bash[20742]: audit 2026-03-10T10:12:40.702411+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:41 vm04 bash[20742]: audit 2026-03-10T10:12:40.702925+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.107:6804/1022745989' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:41 vm04 bash[20742]: audit 2026-03-10T10:12:40.703293+0000 mon.a (mon.0) 505 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:12:43.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:42 vm07 bash[23367]: cluster 2026-03-10T10:12:41.633608+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:43.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:42 vm07 bash[23367]: audit 2026-03-10T10:12:41.701613+0000 mon.a (mon.0) 506 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T10:12:43.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:42 vm07 bash[23367]: cluster 2026-03-10T10:12:41.706635+0000 mon.a (mon.0) 507 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in
2026-03-10T10:12:43.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:42 vm07 bash[23367]: audit 2026-03-10T10:12:41.707920+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:43.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:42 vm07 bash[23367]: audit 2026-03-10T10:12:41.720099+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:43.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:42 vm07 bash[23367]: audit 2026-03-10T10:12:42.642684+0000 mon.a (mon.0) 510 : audit [INF] from='osd.5 ' entity='osd.5'
2026-03-10T10:12:43.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:42 vm07 bash[23367]: cluster 2026-03-10T10:12:42.705558+0000 mon.a (mon.0) 511 : cluster [INF] osd.5 v2:192.168.123.107:6804/1022745989 boot
2026-03-10T10:12:43.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:42 vm07 bash[23367]: cluster 2026-03-10T10:12:42.705604+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-10T10:12:43.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:42 vm07 bash[23367]: audit 2026-03-10T10:12:42.705681+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:42 vm04 bash[28289]: cluster 2026-03-10T10:12:41.633608+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:42 vm04 bash[28289]: audit 2026-03-10T10:12:41.701613+0000 mon.a (mon.0) 506 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:42 vm04 bash[28289]: cluster 2026-03-10T10:12:41.706635+0000 mon.a (mon.0) 507 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:42 vm04 bash[28289]: audit 2026-03-10T10:12:41.707920+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:42 vm04 bash[28289]: audit 2026-03-10T10:12:41.720099+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:42 vm04 bash[28289]: audit 2026-03-10T10:12:42.642684+0000 mon.a (mon.0) 510 : audit [INF] from='osd.5 ' entity='osd.5'
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:42 vm04 bash[28289]: cluster 2026-03-10T10:12:42.705558+0000 mon.a (mon.0) 511 : cluster [INF] osd.5 v2:192.168.123.107:6804/1022745989 boot
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:42 vm04 bash[28289]: cluster 2026-03-10T10:12:42.705604+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:42 vm04 bash[28289]: audit 2026-03-10T10:12:42.705681+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:42 vm04 bash[20742]: cluster 2026-03-10T10:12:41.633608+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail
2026-03-10T10:12:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:42 vm04 bash[20742]: audit 2026-03-10T10:12:41.701613+0000 mon.a (mon.0) 506 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T10:12:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:42 vm04 bash[20742]: cluster 2026-03-10T10:12:41.706635+0000 mon.a (mon.0) 507 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in
2026-03-10T10:12:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:42 vm04 bash[20742]: audit 2026-03-10T10:12:41.707920+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:42 vm04 bash[20742]: audit 2026-03-10T10:12:41.720099+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:42 vm04 bash[20742]: audit 2026-03-10T10:12:42.642684+0000 mon.a (mon.0) 510 : audit [INF] from='osd.5 ' entity='osd.5'
2026-03-10T10:12:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:42 vm04 bash[20742]: cluster 2026-03-10T10:12:42.705558+0000 mon.a (mon.0) 511 : cluster [INF] osd.5 v2:192.168.123.107:6804/1022745989 boot
2026-03-10T10:12:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:42 vm04 bash[20742]: cluster 2026-03-10T10:12:42.705604+0000 mon.a (mon.0) 512 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-10T10:12:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:42 vm04 bash[20742]: audit 2026-03-10T10:12:42.705681+0000 mon.a (mon.0) 513 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 5 on host 'vm07'
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.334+0000 7fb9b9ffb640 1 -- 192.168.123.107:0/3297407780 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7fb984002bf0 con 0x7fb99c077540
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 -- 192.168.123.107:0/3297407780 >> v2:192.168.123.104:6800/632047608 conn(0x7fb99c077540 msgr2=0x7fb99c079a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 --2- 192.168.123.107:0/3297407780 >> v2:192.168.123.104:6800/632047608 conn(0x7fb99c077540 0x7fb99c079a00 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7fb9ac006fd0 tx=0x7fb9ac008040 comp rx=0 tx=0).stop
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 -- 192.168.123.107:0/3297407780 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb9bc104d70 msgr2=0x7fb9bc19c770 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb9bc104d70 0x7fb9bc19c770 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7fb9a80099a0 tx=0x7fb9a80389a0 comp rx=0 tx=0).stop
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 -- 192.168.123.107:0/3297407780 shutdown_connections
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 --2- 192.168.123.107:0/3297407780 >> v2:192.168.123.104:6800/632047608 conn(0x7fb99c077540 0x7fb99c079a00 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb9bc106930 0x7fb9bc1a3cc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb9bc105f70 0x7fb9bc19ccb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 --2- 192.168.123.107:0/3297407780 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb9bc104d70 0x7fb9bc19c770 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 -- 192.168.123.107:0/3297407780 >> 192.168.123.107:0/3297407780 conn(0x7fb9bc100520 msgr2=0x7fb9bc101d80 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 -- 192.168.123.107:0/3297407780 shutdown_connections
2026-03-10T10:12:44.343 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:44.338+0000 7fb9c2c72640 1 -- 192.168.123.107:0/3297407780 wait complete.
2026-03-10T10:12:44.411 DEBUG:teuthology.orchestra.run.vm07:osd.5> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.5.service
2026-03-10T10:12:44.412 INFO:tasks.cephadm:Deploying osd.6 on vm07 with /dev/vdc...
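
Before deploying the next OSD the task wipes the target device, as the next command shows: `ceph-volume lvm zap` clears LVM metadata, the partition table, and leftover signatures so the disk presents as unused. Outside teuthology, a rough standalone equivalent (the device name is just this run's example; --destroy is an extra option this run did not use):

    # wipe a device for reuse; --destroy also removes any VGs/LVs that
    # ceph-volume previously created on it
    sudo cephadm ceph-volume -- lvm zap --destroy /dev/vdc
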
2026-03-10T10:12:44.412 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- lvm zap /dev/vdc
2026-03-10T10:12:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: cluster 2026-03-10T10:12:41.508269+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:12:44.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: cluster 2026-03-10T10:12:41.508313+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:12:44.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: audit 2026-03-10T10:12:43.424742+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: audit 2026-03-10T10:12:43.430331+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: audit 2026-03-10T10:12:43.430986+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:44.713 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: audit 2026-03-10T10:12:43.431462+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: audit 2026-03-10T10:12:43.435770+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: cluster 2026-03-10T10:12:43.633926+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: cluster 2026-03-10T10:12:43.711271+0000 mon.a (mon.0) 519 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: audit 2026-03-10T10:12:44.329447+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: audit 2026-03-10T10:12:44.334449+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:44 vm04 bash[28289]: audit 2026-03-10T10:12:44.339053+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: cluster 2026-03-10T10:12:41.508269+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: cluster 2026-03-10T10:12:41.508313+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: audit 2026-03-10T10:12:43.424742+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: audit 2026-03-10T10:12:43.430331+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: audit 2026-03-10T10:12:43.430986+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: audit 2026-03-10T10:12:43.431462+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: audit 2026-03-10T10:12:43.435770+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: cluster 2026-03-10T10:12:43.633926+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: cluster 2026-03-10T10:12:43.711271+0000 mon.a (mon.0) 519 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: audit 2026-03-10T10:12:44.329447+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: audit 2026-03-10T10:12:44.334449+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.714 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:44 vm04 bash[20742]: audit 2026-03-10T10:12:44.339053+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: cluster 2026-03-10T10:12:41.508269+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:12:44.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: cluster 2026-03-10T10:12:41.508313+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:12:44.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: audit 2026-03-10T10:12:43.424742+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: audit 2026-03-10T10:12:43.430331+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: audit 2026-03-10T10:12:43.430986+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: audit 2026-03-10T10:12:43.431462+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: audit 2026-03-10T10:12:43.435770+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: cluster 2026-03-10T10:12:43.633926+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: cluster 2026-03-10T10:12:43.711271+0000 mon.a (mon.0) 519 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in
2026-03-10T10:12:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: audit 2026-03-10T10:12:44.329447+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:12:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: audit 2026-03-10T10:12:44.334449+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:44 vm07 bash[23367]: audit 2026-03-10T10:12:44.339053+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:45 vm04 bash[28289]: cluster 2026-03-10T10:12:44.867608+0000 mon.a (mon.0) 523 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-10T10:12:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:45 vm04 bash[28289]: cluster 2026-03-10T10:12:45.634410+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:45 vm04 bash[20742]: cluster 2026-03-10T10:12:44.867608+0000 mon.a (mon.0) 523 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-10T10:12:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:45 vm04 bash[20742]: cluster 2026-03-10T10:12:45.634410+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:46.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:45 vm07 bash[23367]: cluster 2026-03-10T10:12:44.867608+0000 mon.a (mon.0) 523 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-10T10:12:46.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:45 vm07 bash[23367]: cluster 2026-03-10T10:12:45.634410+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:48 vm04 bash[28289]: cluster 2026-03-10T10:12:47.634876+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:48 vm04 bash[20742]: cluster 2026-03-10T10:12:47.634876+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:49.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:48 vm07 bash[23367]: cluster 2026-03-10T10:12:47.634876+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:49.074 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config
2026-03-10T10:12:49.887 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T10:12:49.902 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch daemon add osd vm07:/dev/vdc
2026-03-10T10:12:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:50 vm04 bash[28289]: cluster 2026-03-10T10:12:49.635139+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:50 vm04 bash[28289]: audit 2026-03-10T10:12:50.574568+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:50 vm04 bash[28289]: audit 2026-03-10T10:12:50.578506+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:50 vm04 bash[28289]: audit 2026-03-10T10:12:50.579110+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:12:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:50 vm04 bash[28289]: audit 2026-03-10T10:12:50.579530+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:12:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:50 vm04 bash[28289]: audit 2026-03-10T10:12:50.580789+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:50 vm04 bash[28289]: audit 2026-03-10T10:12:50.581250+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:50 vm04 bash[28289]: audit 2026-03-10T10:12:50.584743+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:50 vm04 bash[20742]: cluster 2026-03-10T10:12:49.635139+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:50 vm04 bash[20742]: audit 2026-03-10T10:12:50.574568+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:50 vm04 bash[20742]: audit 2026-03-10T10:12:50.578506+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:50 vm04 bash[20742]: audit 2026-03-10T10:12:50.579110+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:12:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:50 vm04 bash[20742]: audit 2026-03-10T10:12:50.579530+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:12:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:50 vm04 bash[20742]: audit 2026-03-10T10:12:50.580789+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:50 vm04 bash[20742]: audit 2026-03-10T10:12:50.581250+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:50 vm04 bash[20742]: audit 2026-03-10T10:12:50.584743+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:51.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:50 vm07 bash[23367]: cluster 2026-03-10T10:12:49.635139+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:51.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:50 vm07 bash[23367]: audit 2026-03-10T10:12:50.574568+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:51.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:50 vm07 bash[23367]: audit 2026-03-10T10:12:50.578506+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:51.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:50 vm07 bash[23367]: audit 2026-03-10T10:12:50.579110+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:12:51.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:50 vm07 bash[23367]: audit 2026-03-10T10:12:50.579530+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:12:51.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:50 vm07 bash[23367]: audit 2026-03-10T10:12:50.580789+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:50 vm07 bash[23367]: audit 2026-03-10T10:12:50.581250+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:12:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:50 vm07 bash[23367]: audit 2026-03-10T10:12:50.584743+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:12:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:51 vm04 bash[28289]: cephadm 2026-03-10T10:12:50.568681+0000 mgr.y (mgr.14150) 182 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T10:12:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:51 vm04 bash[28289]: cephadm 2026-03-10T10:12:50.579980+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Adjusting osd_memory_target on vm07 to 227.8M
2026-03-10T10:12:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:51 vm04 bash[28289]: cephadm 2026-03-10T10:12:50.580396+0000 mgr.y (mgr.14150) 184 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-10T10:12:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:51 vm04 bash[20742]: cephadm 2026-03-10T10:12:50.568681+0000 mgr.y (mgr.14150) 182 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T10:12:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:51 vm04 bash[20742]: cephadm 2026-03-10T10:12:50.579980+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Adjusting osd_memory_target on vm07 to 227.8M
2026-03-10T10:12:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:51 vm04 bash[20742]: cephadm 2026-03-10T10:12:50.580396+0000 mgr.y (mgr.14150) 184 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-10T10:12:52.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:51 vm07 bash[23367]: cephadm 2026-03-10T10:12:50.568681+0000 mgr.y (mgr.14150) 182 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T10:12:52.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:51 vm07 bash[23367]: cephadm 2026-03-10T10:12:50.579980+0000 mgr.y (mgr.14150) 183 : cephadm [INF] Adjusting osd_memory_target on vm07 to 227.8M
2026-03-10T10:12:52.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:51 vm07 bash[23367]: cephadm 2026-03-10T10:12:50.580396+0000 mgr.y (mgr.14150) 184 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-10T10:12:53.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:52 vm07 bash[23367]: cluster 2026-03-10T10:12:51.635366+0000 mgr.y (mgr.14150) 185 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:52 vm04 bash[28289]: cluster 2026-03-10T10:12:51.635366+0000 mgr.y (mgr.14150) 185 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:52 vm04 bash[20742]: cluster 2026-03-10T10:12:51.635366+0000 mgr.y (mgr.14150) 185 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:54.554 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 -- 192.168.123.107:0/1749578118 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd0104d70 msgr2=0x7f0dd0105170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 --2- 192.168.123.107:0/1749578118 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd0104d70 0x7f0dd0105170 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f0dc4009a80 tx=0x7f0dc402f270 comp rx=0 tx=0).stop
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 -- 192.168.123.107:0/1749578118 shutdown_connections
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 --2- 192.168.123.107:0/1749578118 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0dd0106930 0x7f0dd010d1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 --2- 192.168.123.107:0/1749578118 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0dd0105f70 0x7f0dd01063f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 --2- 192.168.123.107:0/1749578118 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd0104d70 0x7f0dd0105170 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 -- 192.168.123.107:0/1749578118 >> 192.168.123.107:0/1749578118 conn(0x7f0dd0100520 msgr2=0x7f0dd0102940 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 -- 192.168.123.107:0/1749578118 shutdown_connections
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 -- 192.168.123.107:0/1749578118 wait complete.
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 Processor -- start
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 -- start start
2026-03-10T10:12:54.686 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0dd0104d70 0x7f0dd019c560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd0105f70 0x7f0dd019caa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0dd0106930 0x7f0dd01a3b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f0dd010fe70 con 0x7f0dd0106930
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f0dd010fcf0 con 0x7f0dd0105f70
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dd53d6640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f0dd010fff0 con 0x7f0dd0104d70
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dce7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd0105f70 0x7f0dd019caa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dce7fc640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd0105f70 0x7f0dd019caa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:35928/0 (socket says 192.168.123.107:35928)
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dce7fc640 1 -- 192.168.123.107:0/2245321330 learned_addr learned my addr 192.168.123.107:0/2245321330 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dcf7fe640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0dd0106930 0x7f0dd01a3b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dce7fc640 1 -- 192.168.123.107:0/2245321330 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0dd0104d70 msgr2=0x7f0dd019c560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dceffd640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0dd0104d70 0x7f0dd019c560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dce7fc640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0dd0104d70 0x7f0dd019c560 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:54.687 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dce7fc640 1 -- 192.168.123.107:0/2245321330 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0dd0106930 msgr2=0x7f0dd01a3b20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:12:54.688 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dce7fc640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0dd0106930 0x7f0dd01a3b20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:12:54.688 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dce7fc640 1 -- 192.168.123.107:0/2245321330 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0dd01a4220 con 0x7f0dd0105f70
2026-03-10T10:12:54.688 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dcf7fe640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0dd0106930 0x7f0dd01a3b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T10:12:54.688 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dceffd640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0dd0104d70 0x7f0dd019c560 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:12:54.688 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dce7fc640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd0105f70 0x7f0dd019caa0 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f0dbc00ea10 tx=0x7f0dbc00eee0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:12:54.688 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dabfff640 1 -- 192.168.123.107:0/2245321330 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0dbc00ce50 con 0x7f0dd0105f70
2026-03-10T10:12:54.688 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.682+0000 7f0dabfff640 1 -- 192.168.123.107:0/2245321330 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0dbc004540 con 0x7f0dd0105f70
2026-03-10T10:12:54.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.686+0000 7f0dabfff640 1 -- 192.168.123.107:0/2245321330 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0dbc010690 con 0x7f0dd0105f70
2026-03-10T10:12:54.689 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.686+0000 7f0dd53d6640 1 -- 192.168.123.107:0/2245321330 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0dd01a4510 con 0x7f0dd0105f70
2026-03-10T10:12:54.690 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.686+0000 7f0dd53d6640 1 -- 192.168.123.107:0/2245321330 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0dd01a49d0 con 0x7f0dd0105f70
2026-03-10T10:12:54.692 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.686+0000 7f0dd53d6640 1 -- 192.168.123.107:0/2245321330 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0d94005180 con 0x7f0dd0105f70
2026-03-10T10:12:54.693 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.690+0000 7f0dabfff640 1 -- 192.168.123.107:0/2245321330 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f0dbc0040d0 con 0x7f0dd0105f70
2026-03-10T10:12:54.693 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.690+0000 7f0dabfff640 1 --2- 192.168.123.107:0/2245321330 >> v2:192.168.123.104:6800/632047608 conn(0x7f0da4077730 0x7f0da4079bf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:12:54.693 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.690+0000 7f0dceffd640 1 --2- 192.168.123.107:0/2245321330 >> v2:192.168.123.104:6800/632047608 conn(0x7f0da4077730 0x7f0da4079bf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:12:54.693 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.690+0000 7f0dabfff640 1 -- 192.168.123.107:0/2245321330 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(39..39 src has 1..39) ==== 3477+0+0 (secure 0 0 0) 0x7f0dbc0a1340 con 0x7f0dd0105f70
2026-03-10T10:12:54.694 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.690+0000 7f0dabfff640 1 -- 192.168.123.107:0/2245321330 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0dbc062070 con 0x7f0dd0105f70
2026-03-10T10:12:54.694 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.690+0000 7f0dceffd640 1 --2- 192.168.123.107:0/2245321330 >> v2:192.168.123.104:6800/632047608 conn(0x7f0da4077730 0x7f0da4079bf0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f0dc40099a0 tx=0x7f0dc40023d0 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:12:54.797 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:12:54.794+0000 7f0dd53d6640 1 -- 192.168.123.107:0/2245321330 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}) -- 0x7f0d94002bf0 con 0x7f0da4077730
2026-03-10T10:12:55.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:54 vm07 bash[23367]: cluster 2026-03-10T10:12:53.635693+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:54 vm04 bash[28289]: cluster 2026-03-10T10:12:53.635693+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:54 vm04 bash[20742]: cluster 2026-03-10T10:12:53.635693+0000 mgr.y (mgr.14150) 186 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:56.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:55 vm07 bash[23367]: audit 2026-03-10T10:12:54.800463+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:12:56.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:55 vm07 bash[23367]: audit 2026-03-10T10:12:54.801739+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:12:56.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:55 vm07 bash[23367]: audit 2026-03-10T10:12:54.802159+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:55 vm04 bash[28289]: audit 2026-03-10T10:12:54.800463+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:12:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:55 vm04 bash[28289]: audit 2026-03-10T10:12:54.801739+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:12:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:55 vm04 bash[28289]: audit 2026-03-10T10:12:54.802159+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:55 vm04 bash[20742]: audit 2026-03-10T10:12:54.800463+0000 mon.a (mon.0) 531 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T10:12:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:55 vm04 bash[20742]: audit 2026-03-10T10:12:54.801739+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T10:12:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:55 vm04 bash[20742]: audit 2026-03-10T10:12:54.802159+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:12:57.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:56 vm07 bash[23367]: audit 2026-03-10T10:12:54.799289+0000 mgr.y (mgr.14150) 187 : audit [DBG] from='client.24238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:12:57.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:56 vm07 bash[23367]: cluster 2026-03-10T10:12:55.636186+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:56 vm04 bash[28289]: audit 2026-03-10T10:12:54.799289+0000 mgr.y (mgr.14150) 187 : audit [DBG] from='client.24238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:12:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:56 vm04 bash[28289]: cluster 2026-03-10T10:12:55.636186+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:56 vm04 bash[20742]: audit 2026-03-10T10:12:54.799289+0000 mgr.y (mgr.14150) 187 : audit [DBG] from='client.24238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:12:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:56 vm04 bash[20742]: cluster 2026-03-10T10:12:55.636186+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:59.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:58 vm07 bash[23367]: cluster 2026-03-10T10:12:57.636511+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:58 vm04 bash[28289]: cluster 2026-03-10T10:12:57.636511+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:12:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:58 vm04 bash[20742]: cluster 2026-03-10T10:12:57.636511+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:13:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:59 vm07 bash[23367]: audit 2026-03-10T10:12:59.164203+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.107:0/2562414377' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]: dispatch
2026-03-10T10:13:00.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:59 vm07 bash[23367]: audit 2026-03-10T10:12:59.165226+0000 mon.a (mon.0) 534 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]: dispatch
2026-03-10T10:13:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:59 vm07 bash[23367]: audit 2026-03-10T10:12:59.168785+0000 mon.a (mon.0) 535 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]': finished
2026-03-10T10:13:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:59 vm07 bash[23367]: cluster 2026-03-10T10:12:59.171471+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-10T10:13:00.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:12:59 vm07 bash[23367]: audit 2026-03-10T10:12:59.171632+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:59 vm04 bash[28289]: audit 2026-03-10T10:12:59.164203+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.107:0/2562414377' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]: dispatch
2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:59 vm04 bash[28289]: audit 2026-03-10T10:12:59.165226+0000 mon.a (mon.0) 534 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]: dispatch
2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:59 vm04 bash[28289]: audit 2026-03-10T10:12:59.168785+0000 mon.a (mon.0) 535 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]': finished
2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:59 vm04 bash[28289]: audit 2026-03-10T10:12:59.168785+0000 mon.a (mon.0) 535 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]': finished 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:59 vm04 bash[28289]: cluster 2026-03-10T10:12:59.171471+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:59 vm04 bash[28289]: cluster 2026-03-10T10:12:59.171471+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:59 vm04 bash[28289]: audit 2026-03-10T10:12:59.171632+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:12:59 vm04 bash[28289]: audit 2026-03-10T10:12:59.171632+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:59 vm04 bash[20742]: audit 2026-03-10T10:12:59.164203+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.107:0/2562414377' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]: dispatch 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:59 vm04 bash[20742]: audit 2026-03-10T10:12:59.164203+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.107:0/2562414377' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]: dispatch 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:59 vm04 bash[20742]: audit 2026-03-10T10:12:59.165226+0000 mon.a (mon.0) 534 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]: dispatch 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:59 vm04 bash[20742]: audit 2026-03-10T10:12:59.165226+0000 mon.a (mon.0) 534 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]: dispatch 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:59 vm04 bash[20742]: audit 2026-03-10T10:12:59.168785+0000 mon.a (mon.0) 535 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]': finished 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:59 vm04 bash[20742]: audit 2026-03-10T10:12:59.168785+0000 mon.a (mon.0) 535 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "69498577-1b7a-40bf-acac-5912f8ff7cfc"}]': finished 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:59 vm04 bash[20742]: cluster 2026-03-10T10:12:59.171471+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:59 vm04 bash[20742]: cluster 2026-03-10T10:12:59.171471+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in 2026-03-10T10:13:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:59 vm04 bash[20742]: audit 2026-03-10T10:12:59.171632+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:12:59 vm04 bash[20742]: audit 2026-03-10T10:12:59.171632+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:01.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:00 vm07 bash[23367]: cluster 2026-03-10T10:12:59.636769+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:01.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:00 vm07 bash[23367]: cluster 2026-03-10T10:12:59.636769+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:01.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:00 vm07 bash[23367]: audit 2026-03-10T10:12:59.757700+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 192.168.123.107:0/3164651973' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:01.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:00 vm07 bash[23367]: audit 2026-03-10T10:12:59.757700+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 192.168.123.107:0/3164651973' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:00 vm04 bash[28289]: cluster 2026-03-10T10:12:59.636769+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:00 vm04 bash[28289]: cluster 2026-03-10T10:12:59.636769+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:00 vm04 bash[28289]: audit 2026-03-10T10:12:59.757700+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 192.168.123.107:0/3164651973' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:00 vm04 bash[28289]: audit 2026-03-10T10:12:59.757700+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 
192.168.123.107:0/3164651973' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:00 vm04 bash[20742]: cluster 2026-03-10T10:12:59.636769+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:00 vm04 bash[20742]: cluster 2026-03-10T10:12:59.636769+0000 mgr.y (mgr.14150) 190 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:00 vm04 bash[20742]: audit 2026-03-10T10:12:59.757700+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 192.168.123.107:0/3164651973' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:00 vm04 bash[20742]: audit 2026-03-10T10:12:59.757700+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 192.168.123.107:0/3164651973' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:03.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:02 vm07 bash[23367]: cluster 2026-03-10T10:13:01.637317+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:03.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:02 vm07 bash[23367]: cluster 2026-03-10T10:13:01.637317+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:02 vm04 bash[28289]: cluster 2026-03-10T10:13:01.637317+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:02 vm04 bash[28289]: cluster 2026-03-10T10:13:01.637317+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:03.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:02 vm04 bash[20742]: cluster 2026-03-10T10:13:01.637317+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:03.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:02 vm04 bash[20742]: cluster 2026-03-10T10:13:01.637317+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 560 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:05.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:04 vm07 bash[23367]: cluster 2026-03-10T10:13:03.638008+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:05.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:04 vm07 bash[23367]: cluster 2026-03-10T10:13:03.638008+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:05.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:04 vm04 bash[28289]: cluster 2026-03-10T10:13:03.638008+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 
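The audit trail from 10:12:54 through the "mon getmap" dispatch above is a single operation: the test asks cephadm to turn /dev/vdc on vm07 into a new OSD. Reconstructed from the audit JSON, the triggering command and the bootstrap steps ceph-volume performs against the monitors would look roughly like this (the uuid is the one in the log; the output path and the omitted keyring arguments are illustrative):

    ceph orch daemon add osd vm07:/dev/vdc
    # ceph-volume, authenticating as client.bootstrap-osd, then runs the equivalent of:
    ceph --name client.bootstrap-osd osd new 69498577-1b7a-40bf-acac-5912f8ff7cfc   # allocates osd.6
    ceph --name client.bootstrap-osd mon getmap -o /tmp/monmap                      # monmap for mkfs

Once osd.6 exists, the mgr gathers what the container needs before deploying it — the "auth get osd.6" and "config generate-minimal-conf" dispatches visible a few entries below:

    ceph auth get osd.6                 # keyring injected into the daemon's data dir
    ceph config generate-minimal-conf   # minimal ceph.conf shipped alongside it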
2026-03-10T10:13:05.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:04 vm04 bash[28289]: cluster 2026-03-10T10:13:03.638008+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:05.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:04 vm04 bash[20742]: cluster 2026-03-10T10:13:03.638008+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:05.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:04 vm04 bash[20742]: cluster 2026-03-10T10:13:03.638008+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:07.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:06 vm07 bash[23367]: cluster 2026-03-10T10:13:05.638483+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:07.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:06 vm07 bash[23367]: cluster 2026-03-10T10:13:05.638483+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:06 vm04 bash[28289]: cluster 2026-03-10T10:13:05.638483+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:06 vm04 bash[28289]: cluster 2026-03-10T10:13:05.638483+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:07.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:06 vm04 bash[20742]: cluster 2026-03-10T10:13:05.638483+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:07.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:06 vm04 bash[20742]: cluster 2026-03-10T10:13:05.638483+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:08.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:08 vm07 bash[23367]: cluster 2026-03-10T10:13:07.638805+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:08.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:08 vm07 bash[23367]: cluster 2026-03-10T10:13:07.638805+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:08.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:08 vm07 bash[23367]: audit 2026-03-10T10:13:08.675398+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T10:13:08.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:08 vm07 bash[23367]: audit 2026-03-10T10:13:08.675398+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T10:13:08.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:08 vm07 bash[23367]: 
audit 2026-03-10T10:13:08.675901+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:08.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:08 vm07 bash[23367]: audit 2026-03-10T10:13:08.675901+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:08 vm04 bash[28289]: cluster 2026-03-10T10:13:07.638805+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:08 vm04 bash[28289]: cluster 2026-03-10T10:13:07.638805+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:08 vm04 bash[28289]: audit 2026-03-10T10:13:08.675398+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:08 vm04 bash[28289]: audit 2026-03-10T10:13:08.675398+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:08 vm04 bash[28289]: audit 2026-03-10T10:13:08.675901+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:08 vm04 bash[28289]: audit 2026-03-10T10:13:08.675901+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:08 vm04 bash[20742]: cluster 2026-03-10T10:13:07.638805+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:08 vm04 bash[20742]: cluster 2026-03-10T10:13:07.638805+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:08 vm04 bash[20742]: audit 2026-03-10T10:13:08.675398+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:08 vm04 bash[20742]: audit 2026-03-10T10:13:08.675398+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T10:13:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:08 vm04 bash[20742]: audit 2026-03-10T10:13:08.675901+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:09.203 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:08 vm04 bash[20742]: audit 2026-03-10T10:13:08.675901+0000 mon.a (mon.0) 540 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:09.505 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:13:09 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:09.505 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:09 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:09.505 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:13:09 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:09.505 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:13:09 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:09.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:09 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:09.764 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:13:09 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:09.764 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:13:09 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
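The systemd warnings above come from the cephadm unit template, which ships with KillMode=none so that stopping the unit does not kill the container processes directly; teardown is handled by the unit's stop command instead. systemd now deprecates that setting, and the override it suggests would be a drop-in along these lines (unit name taken from the log; a sketch of what the warning asks for, not a recommended change — cephadm chooses KillMode=none deliberately):

    # /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d/override.conf
    [Service]
    KillMode=mixed

    # then: systemctl daemon-reload

In this run the warning is noise: it repeats once per captured journal stream each time a unit for this fsid starts, and does not affect the OSD deployment.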
2026-03-10T10:13:09.764 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:13:09 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:09 vm04 bash[28289]: cephadm 2026-03-10T10:13:08.676279+0000 mgr.y (mgr.14150) 195 : cephadm [INF] Deploying daemon osd.6 on vm07 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:09 vm04 bash[28289]: cephadm 2026-03-10T10:13:08.676279+0000 mgr.y (mgr.14150) 195 : cephadm [INF] Deploying daemon osd.6 on vm07 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:09 vm04 bash[28289]: audit 2026-03-10T10:13:09.733262+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:09 vm04 bash[28289]: audit 2026-03-10T10:13:09.733262+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:09 vm04 bash[28289]: audit 2026-03-10T10:13:09.740611+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:09 vm04 bash[28289]: audit 2026-03-10T10:13:09.740611+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:09 vm04 bash[28289]: audit 2026-03-10T10:13:09.747764+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:09 vm04 bash[28289]: audit 2026-03-10T10:13:09.747764+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:09 vm04 bash[20742]: cephadm 2026-03-10T10:13:08.676279+0000 mgr.y (mgr.14150) 195 : cephadm [INF] Deploying daemon osd.6 on vm07 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:09 vm04 bash[20742]: cephadm 2026-03-10T10:13:08.676279+0000 mgr.y (mgr.14150) 195 : cephadm [INF] Deploying daemon osd.6 on vm07 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:09 vm04 bash[20742]: audit 2026-03-10T10:13:09.733262+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:09 vm04 bash[20742]: audit 2026-03-10T10:13:09.733262+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:09 vm04 bash[20742]: audit 2026-03-10T10:13:09.740611+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 
192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:09 vm04 bash[20742]: audit 2026-03-10T10:13:09.740611+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:09 vm04 bash[20742]: audit 2026-03-10T10:13:09.747764+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:09 vm04 bash[20742]: audit 2026-03-10T10:13:09.747764+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:09 vm07 bash[23367]: cephadm 2026-03-10T10:13:08.676279+0000 mgr.y (mgr.14150) 195 : cephadm [INF] Deploying daemon osd.6 on vm07 2026-03-10T10:13:10.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:09 vm07 bash[23367]: cephadm 2026-03-10T10:13:08.676279+0000 mgr.y (mgr.14150) 195 : cephadm [INF] Deploying daemon osd.6 on vm07 2026-03-10T10:13:10.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:09 vm07 bash[23367]: audit 2026-03-10T10:13:09.733262+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:13:10.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:09 vm07 bash[23367]: audit 2026-03-10T10:13:09.733262+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:13:10.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:09 vm07 bash[23367]: audit 2026-03-10T10:13:09.740611+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:09 vm07 bash[23367]: audit 2026-03-10T10:13:09.740611+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:09 vm07 bash[23367]: audit 2026-03-10T10:13:09.747764+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:10.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:09 vm07 bash[23367]: audit 2026-03-10T10:13:09.747764+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:10 vm04 bash[28289]: cluster 2026-03-10T10:13:09.639120+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:10 vm04 bash[28289]: cluster 2026-03-10T10:13:09.639120+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:10 vm04 bash[20742]: cluster 2026-03-10T10:13:09.639120+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:10 vm04 bash[20742]: cluster 2026-03-10T10:13:09.639120+0000 
mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:11.256 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:10 vm07 bash[23367]: cluster 2026-03-10T10:13:09.639120+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:11.256 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:10 vm07 bash[23367]: cluster 2026-03-10T10:13:09.639120+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:12 vm04 bash[28289]: cluster 2026-03-10T10:13:11.639415+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:12 vm04 bash[28289]: cluster 2026-03-10T10:13:11.639415+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:13.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:12 vm04 bash[20742]: cluster 2026-03-10T10:13:11.639415+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:13.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:12 vm04 bash[20742]: cluster 2026-03-10T10:13:11.639415+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:13.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:12 vm07 bash[23367]: cluster 2026-03-10T10:13:11.639415+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:13.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:12 vm07 bash[23367]: cluster 2026-03-10T10:13:11.639415+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:13 vm04 bash[28289]: audit 2026-03-10T10:13:13.307687+0000 mon.c (mon.2) 17 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:13 vm04 bash[28289]: audit 2026-03-10T10:13:13.307687+0000 mon.c (mon.2) 17 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:13 vm04 bash[28289]: audit 2026-03-10T10:13:13.307954+0000 mon.a (mon.0) 544 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:13 vm04 bash[28289]: audit 2026-03-10T10:13:13.307954+0000 mon.a (mon.0) 544 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:13 vm04 bash[20742]: audit 2026-03-10T10:13:13.307687+0000 
mon.c (mon.2) 17 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:13 vm04 bash[20742]: audit 2026-03-10T10:13:13.307687+0000 mon.c (mon.2) 17 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:13 vm04 bash[20742]: audit 2026-03-10T10:13:13.307954+0000 mon.a (mon.0) 544 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:13 vm04 bash[20742]: audit 2026-03-10T10:13:13.307954+0000 mon.a (mon.0) 544 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:13 vm07 bash[23367]: audit 2026-03-10T10:13:13.307687+0000 mon.c (mon.2) 17 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:13 vm07 bash[23367]: audit 2026-03-10T10:13:13.307687+0000 mon.c (mon.2) 17 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:13 vm07 bash[23367]: audit 2026-03-10T10:13:13.307954+0000 mon.a (mon.0) 544 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:14.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:13 vm07 bash[23367]: audit 2026-03-10T10:13:13.307954+0000 mon.a (mon.0) 544 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T10:13:15.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: cluster 2026-03-10T10:13:13.639869+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:15.264 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: cluster 2026-03-10T10:13:13.639869+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: audit 2026-03-10T10:13:13.791734+0000 mon.a (mon.0) 545 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T10:13:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: audit 2026-03-10T10:13:13.791734+0000 mon.a (mon.0) 545 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T10:13:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: cluster 2026-03-10T10:13:13.794678+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 
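The set-device-class round trip above is standard first-boot behavior: before taking a CRUSH position, osd.6 registers its device class (hdd here). The equivalent manual commands, using the ids from the log:

    ceph osd crush set-device-class hdd osd.6   # what osd.6 just did via the mons
    ceph osd crush class ls                     # lists the classes now in use

Note the dispatch is logged twice at the monitor level: once by mon.c (mon.2), which received it from the daemon, and once by mon.a (mon.0), the leader it was forwarded to; the single "finished" comes from the leader.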
2026-03-10T10:13:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: cluster 2026-03-10T10:13:13.794678+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T10:13:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: audit 2026-03-10T10:13:13.794828+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: audit 2026-03-10T10:13:13.794828+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: audit 2026-03-10T10:13:13.794985+0000 mon.c (mon.2) 18 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: audit 2026-03-10T10:13:13.794985+0000 mon.c (mon.2) 18 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: audit 2026-03-10T10:13:13.795238+0000 mon.a (mon.0) 548 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:15 vm07 bash[23367]: audit 2026-03-10T10:13:13.795238+0000 mon.a (mon.0) 548 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: cluster 2026-03-10T10:13:13.639869+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: cluster 2026-03-10T10:13:13.639869+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: audit 2026-03-10T10:13:13.791734+0000 mon.a (mon.0) 545 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T10:13:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: audit 2026-03-10T10:13:13.791734+0000 mon.a (mon.0) 545 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T10:13:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: cluster 2026-03-10T10:13:13.794678+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T10:13:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: cluster 2026-03-10T10:13:13.794678+0000 mon.a (mon.0) 546 : cluster 
[DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T10:13:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: audit 2026-03-10T10:13:13.794828+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: audit 2026-03-10T10:13:13.794828+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: audit 2026-03-10T10:13:13.794985+0000 mon.c (mon.2) 18 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: audit 2026-03-10T10:13:13.794985+0000 mon.c (mon.2) 18 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: audit 2026-03-10T10:13:13.795238+0000 mon.a (mon.0) 548 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:15 vm04 bash[28289]: audit 2026-03-10T10:13:13.795238+0000 mon.a (mon.0) 548 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: cluster 2026-03-10T10:13:13.639869+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: cluster 2026-03-10T10:13:13.639869+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: audit 2026-03-10T10:13:13.791734+0000 mon.a (mon.0) 545 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: audit 2026-03-10T10:13:13.791734+0000 mon.a (mon.0) 545 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: cluster 2026-03-10T10:13:13.794678+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: cluster 2026-03-10T10:13:13.794678+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: audit 
2026-03-10T10:13:13.794828+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: audit 2026-03-10T10:13:13.794828+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: audit 2026-03-10T10:13:13.794985+0000 mon.c (mon.2) 18 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: audit 2026-03-10T10:13:13.794985+0000 mon.c (mon.2) 18 : audit [INF] from='osd.6 v2:192.168.123.107:6808/719340092' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: audit 2026-03-10T10:13:13.795238+0000 mon.a (mon.0) 548 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:15.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:14 vm04 bash[20742]: audit 2026-03-10T10:13:13.795238+0000 mon.a (mon.0) 548 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.217812+0000 mon.a (mon.0) 549 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.217812+0000 mon.a (mon.0) 549 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: cluster 2026-03-10T10:13:15.222915+0000 mon.a (mon.0) 550 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: cluster 2026-03-10T10:13:15.222915+0000 mon.a (mon.0) 550 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.223498+0000 mon.a (mon.0) 551 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.223498+0000 mon.a (mon.0) 551 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.228324+0000 mon.a (mon.0) 552 : audit [DBG] 
from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.228324+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: cluster 2026-03-10T10:13:15.640371+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: cluster 2026-03-10T10:13:15.640371+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: cluster 2026-03-10T10:13:15.745413+0000 mon.a (mon.0) 553 : cluster [INF] osd.6 v2:192.168.123.107:6808/719340092 boot 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: cluster 2026-03-10T10:13:15.745413+0000 mon.a (mon.0) 553 : cluster [INF] osd.6 v2:192.168.123.107:6808/719340092 boot 2026-03-10T10:13:16.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: cluster 2026-03-10T10:13:15.745429+0000 mon.a (mon.0) 554 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-10T10:13:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: cluster 2026-03-10T10:13:15.745429+0000 mon.a (mon.0) 554 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in 2026-03-10T10:13:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.747911+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.747911+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T10:13:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.985845+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.985845+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.990236+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:16 vm07 bash[23367]: audit 2026-03-10T10:13:15.990236+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:16 vm04 bash[28289]: audit 2026-03-10T10:13:15.217812+0000 mon.a (mon.0) 549 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 
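The create-or-move call that just finished is what places the new OSD in the CRUSH map: under host=vm07, root=default, with weight 0.0195. CRUSH weights are conventionally the device capacity in TiB, so 0.0195 corresponds to a roughly 20 GiB disk — consistent with the pgmap's 120 GiB across the six OSDs that were up before this one. By hand it would be:

    ceph osd crush create-or-move osd.6 0.0195 host=vm07 root=default

Each CRUSH edit bumps the osdmap epoch, which is why e41 (set-device-class) and e42 (create-or-move) appear in quick succession before the boot bumps it again to e43.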
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:16 vm04 bash[28289]: cluster 2026-03-10T10:13:15.222915+0000 mon.a (mon.0) 550 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:16 vm04 bash[28289]: audit 2026-03-10T10:13:15.223498+0000 mon.a (mon.0) 551 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:16 vm04 bash[28289]: audit 2026-03-10T10:13:15.228324+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:16 vm04 bash[28289]: cluster 2026-03-10T10:13:15.640371+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:16 vm04 bash[28289]: cluster 2026-03-10T10:13:15.745413+0000 mon.a (mon.0) 553 : cluster [INF] osd.6 v2:192.168.123.107:6808/719340092 boot
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:16 vm04 bash[28289]: cluster 2026-03-10T10:13:15.745429+0000 mon.a (mon.0) 554 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:16 vm04 bash[28289]: audit 2026-03-10T10:13:15.747911+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:16 vm04 bash[28289]: audit 2026-03-10T10:13:15.985845+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:16 vm04 bash[28289]: audit 2026-03-10T10:13:15.990236+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:16 vm04 bash[20742]: audit 2026-03-10T10:13:15.217812+0000 mon.a (mon.0) 549 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:16 vm04 bash[20742]: cluster 2026-03-10T10:13:15.222915+0000 mon.a (mon.0) 550 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in
2026-03-10T10:13:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:16 vm04 bash[20742]: audit 2026-03-10T10:13:15.223498+0000 mon.a (mon.0) 551 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T10:13:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:16 vm04 bash[20742]: audit 2026-03-10T10:13:15.228324+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T10:13:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:16 vm04 bash[20742]: cluster 2026-03-10T10:13:15.640371+0000 mgr.y (mgr.14150) 199 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 561 MiB used, 119 GiB / 120 GiB avail
2026-03-10T10:13:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:16 vm04 bash[20742]: cluster 2026-03-10T10:13:15.745413+0000 mon.a (mon.0) 553 : cluster [INF] osd.6 v2:192.168.123.107:6808/719340092 boot
2026-03-10T10:13:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:16 vm04 bash[20742]: cluster 2026-03-10T10:13:15.745429+0000 mon.a (mon.0) 554 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
2026-03-10T10:13:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:16 vm04 bash[20742]: audit 2026-03-10T10:13:15.747911+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T10:13:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:16 vm04 bash[20742]: audit 2026-03-10T10:13:15.985845+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:16.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:16 vm04 bash[20742]: audit 2026-03-10T10:13:15.990236+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:17.044 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.038+0000 7f0dabfff640 1 -- 192.168.123.107:0/2245321330 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7f0d94002bf0 con 0x7f0da4077730
2026-03-10T10:13:17.046 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 6 on host 'vm07'
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 -- 192.168.123.107:0/2245321330 >> v2:192.168.123.104:6800/632047608 conn(0x7f0da4077730 msgr2=0x7f0da4079bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 --2- 192.168.123.107:0/2245321330 >> v2:192.168.123.104:6800/632047608 conn(0x7f0da4077730 0x7f0da4079bf0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f0dc40099a0 tx=0x7f0dc40023d0 comp rx=0 tx=0).stop
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 -- 192.168.123.107:0/2245321330 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd0105f70 msgr2=0x7f0dd019caa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd0105f70 0x7f0dd019caa0 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f0dbc00ea10 tx=0x7f0dbc00eee0 comp rx=0 tx=0).stop
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 -- 192.168.123.107:0/2245321330 shutdown_connections
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 --2- 192.168.123.107:0/2245321330 >> v2:192.168.123.104:6800/632047608 conn(0x7f0da4077730 0x7f0da4079bf0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0dd0106930 0x7f0dd01a3b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0dd0105f70 0x7f0dd019caa0 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 --2- 192.168.123.107:0/2245321330 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0dd0104d70 0x7f0dd019c560 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 -- 192.168.123.107:0/2245321330 >> 192.168.123.107:0/2245321330 conn(0x7f0dd0100520 msgr2=0x7f0dd0101fb0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 -- 192.168.123.107:0/2245321330 shutdown_connections
2026-03-10T10:13:17.047 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:17.042+0000 7f0dd53d6640 1 -- 192.168.123.107:0/2245321330 wait complete.
2026-03-10T10:13:17.117 DEBUG:teuthology.orchestra.run.vm07:osd.6> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.6.service
2026-03-10T10:13:17.118 INFO:tasks.cephadm:Deploying osd.7 on vm07 with /dev/vdb...
2026-03-10T10:13:17.118 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- lvm zap /dev/vdb
2026-03-10T10:13:17.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:17 vm07 bash[23367]: cluster 2026-03-10T10:13:14.258522+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:13:17.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:17 vm07 bash[23367]: cluster 2026-03-10T10:13:14.258575+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:13:17.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:17 vm07 bash[23367]: audit 2026-03-10T10:13:16.405904+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:13:17.514 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:17 vm07 bash[23367]: audit 2026-03-10T10:13:16.406611+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:13:17.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:17 vm07 bash[23367]: audit 2026-03-10T10:13:16.412228+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:17.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:17 vm07 bash[23367]: cluster 2026-03-10T10:13:16.746899+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in
2026-03-10T10:13:17.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:17 vm07 bash[23367]: audit 2026-03-10T10:13:17.034076+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:13:17.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:17 vm07 bash[23367]: audit 2026-03-10T10:13:17.039092+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:17.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:17 vm07 bash[23367]: audit 2026-03-10T10:13:17.043367+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:17 vm04 bash[28289]: cluster 2026-03-10T10:13:14.258522+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:17 vm04 bash[28289]: cluster 2026-03-10T10:13:14.258575+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:17 vm04 bash[28289]: audit 2026-03-10T10:13:16.405904+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:17 vm04 bash[28289]: audit 2026-03-10T10:13:16.406611+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:17 vm04 bash[28289]: audit 2026-03-10T10:13:16.412228+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:17 vm04 bash[28289]: cluster 2026-03-10T10:13:16.746899+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:17 vm04 bash[28289]: audit 2026-03-10T10:13:17.034076+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:17 vm04 bash[28289]: audit 2026-03-10T10:13:17.039092+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:17 vm04 bash[28289]: audit 2026-03-10T10:13:17.043367+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:17 vm04 bash[20742]: cluster 2026-03-10T10:13:14.258522+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:17 vm04 bash[20742]: cluster 2026-03-10T10:13:14.258575+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:17 vm04 bash[20742]: audit 2026-03-10T10:13:16.405904+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:17 vm04 bash[20742]: audit 2026-03-10T10:13:16.406611+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:13:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:17 vm04 bash[20742]: audit 2026-03-10T10:13:16.412228+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:17 vm04 bash[20742]: cluster 2026-03-10T10:13:16.746899+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in
2026-03-10T10:13:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:17 vm04 bash[20742]: audit 2026-03-10T10:13:17.034076+0000 mon.a (mon.0) 562 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:13:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:17 vm04 bash[20742]: audit 2026-03-10T10:13:17.039092+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:17 vm04 bash[20742]: audit 2026-03-10T10:13:17.043367+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:18 vm04 bash[28289]: cluster 2026-03-10T10:13:17.640631+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v178: 1 pgs: 1 remapped+peering; 449 KiB data, 587 MiB used, 139 GiB / 140 GiB avail
2026-03-10T10:13:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:18 vm04 bash[20742]: cluster 2026-03-10T10:13:17.640631+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v178: 1 pgs: 1 remapped+peering; 449 KiB data, 587 MiB used, 139 GiB / 140 GiB avail
2026-03-10T10:13:18.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:18 vm07 bash[23367]: cluster 2026-03-10T10:13:17.640631+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v178: 1 pgs: 1 remapped+peering; 449 KiB data, 587 MiB used, 139 GiB / 140 GiB avail
2026-03-10T10:13:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:19 vm04 bash[28289]: cluster 2026-03-10T10:13:18.305029+0000 mon.a (mon.0) 565 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T10:13:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:19 vm04 bash[28289]: cluster 2026-03-10T10:13:18.322069+0000 mon.a (mon.0) 566 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in
2026-03-10T10:13:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:19 vm04 bash[20742]: cluster 2026-03-10T10:13:18.305029+0000 mon.a (mon.0) 565 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T10:13:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:19 vm04 bash[20742]: cluster 2026-03-10T10:13:18.322069+0000 mon.a (mon.0) 566 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in
2026-03-10T10:13:19.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:19 vm07 bash[23367]: cluster 2026-03-10T10:13:18.305029+0000 mon.a (mon.0) 565 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-10T10:13:19.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:19 vm07 bash[23367]: cluster 2026-03-10T10:13:18.322069+0000 mon.a (mon.0) 566 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in
2026-03-10T10:13:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:20 vm04 bash[28289]: cluster 2026-03-10T10:13:19.640877+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v180: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:20 vm04 bash[20742]: cluster 2026-03-10T10:13:19.640877+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v180: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:20.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:20 vm07 bash[23367]: cluster 2026-03-10T10:13:19.640877+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v180: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:21.795 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config
2026-03-10T10:13:22.176 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:21 vm07 bash[23367]: cluster 2026-03-10T10:13:21.641143+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:22.176 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:21 vm07 bash[23367]: cluster 2026-03-10T10:13:21.691560+0000 mon.a (mon.0) 567 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-10T10:13:22.176 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:21 vm07 bash[23367]: cluster 2026-03-10T10:13:21.691573+0000 mon.a (mon.0) 568 : cluster [INF] Cluster is now healthy
2026-03-10T10:13:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:21 vm04 bash[28289]: cluster 2026-03-10T10:13:21.641143+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:21 vm04 bash[28289]: cluster 2026-03-10T10:13:21.691560+0000 mon.a (mon.0) 567 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-10T10:13:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:21 vm04 bash[28289]: cluster 2026-03-10T10:13:21.691573+0000 mon.a (mon.0) 568 : cluster [INF] Cluster is now healthy
2026-03-10T10:13:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:21 vm04 bash[20742]: cluster 2026-03-10T10:13:21.641143+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 188 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:21 vm04 bash[20742]: cluster 2026-03-10T10:13:21.691560+0000 mon.a (mon.0) 567 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-10T10:13:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:21 vm04 bash[20742]: cluster 2026-03-10T10:13:21.691573+0000 mon.a (mon.0) 568 : cluster [INF] Cluster is now healthy
2026-03-10T10:13:23.551 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T10:13:23.562 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch daemon add osd vm07:/dev/vdb
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: cephadm 2026-03-10T10:13:22.854152+0000 mgr.y (mgr.14150) 203 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: audit 2026-03-10T10:13:22.860429+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: audit 2026-03-10T10:13:22.864813+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: audit 2026-03-10T10:13:22.866370+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: audit 2026-03-10T10:13:22.866912+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: audit 2026-03-10T10:13:22.867289+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: cephadm 2026-03-10T10:13:22.867622+0000 mgr.y (mgr.14150) 204 : cephadm [INF] Adjusting osd_memory_target on vm07 to 151.9M
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: cephadm 2026-03-10T10:13:22.868021+0000 mgr.y (mgr.14150) 205 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 159307229: error parsing value: Value '159307229' is below minimum 939524096
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: audit 2026-03-10T10:13:22.868371+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: audit 2026-03-10T10:13:22.868815+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: audit 2026-03-10T10:13:22.872663+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:23.863 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:23 vm07 bash[23367]: cluster 2026-03-10T10:13:23.641407+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: cephadm 2026-03-10T10:13:22.854152+0000 mgr.y (mgr.14150) 203 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: audit 2026-03-10T10:13:22.860429+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: audit 2026-03-10T10:13:22.864813+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: audit 2026-03-10T10:13:22.866370+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: audit 2026-03-10T10:13:22.866912+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: audit 2026-03-10T10:13:22.867289+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: cephadm 2026-03-10T10:13:22.867622+0000 mgr.y (mgr.14150) 204 : cephadm [INF] Adjusting osd_memory_target on vm07 to 151.9M
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: cephadm 2026-03-10T10:13:22.868021+0000 mgr.y (mgr.14150) 205 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 159307229: error parsing value: Value '159307229' is below minimum 939524096
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: audit 2026-03-10T10:13:22.868371+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: audit 2026-03-10T10:13:22.868815+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:13:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: audit 2026-03-10T10:13:22.872663+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:23 vm04 bash[28289]: cluster 2026-03-10T10:13:23.641407+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: cephadm 2026-03-10T10:13:22.854152+0000 mgr.y (mgr.14150) 203 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: audit 2026-03-10T10:13:22.860429+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: audit 2026-03-10T10:13:22.864813+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: audit 2026-03-10T10:13:22.866370+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: audit 2026-03-10T10:13:22.866912+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: audit 2026-03-10T10:13:22.867289+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: cephadm 2026-03-10T10:13:22.867622+0000 mgr.y (mgr.14150) 204 : cephadm [INF] Adjusting osd_memory_target on vm07 to 151.9M
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: cephadm 2026-03-10T10:13:22.868021+0000 mgr.y (mgr.14150) 205 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 159307229: error parsing value: Value '159307229' is below minimum 939524096
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: audit 2026-03-10T10:13:22.868371+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: audit 2026-03-10T10:13:22.868815+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: audit 2026-03-10T10:13:22.872663+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:24.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:23 vm04 bash[20742]: cluster 2026-03-10T10:13:23.641407+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:26 vm04 bash[28289]: cluster 2026-03-10T10:13:25.641680+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 50 KiB/s, 0 objects/s recovering
2026-03-10T10:13:26.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:26 vm04 bash[20742]: cluster 2026-03-10T10:13:25.641680+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 50 KiB/s, 0 objects/s recovering
2026-03-10T10:13:27.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:26 vm07 bash[23367]: cluster 2026-03-10T10:13:25.641680+0000 mgr.y (mgr.14150) 207 : cluster [DBG] pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 50 KiB/s, 0 objects/s recovering
2026-03-10T10:13:28.182 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config
2026-03-10T10:13:28.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 -- 192.168.123.107:0/3739215740 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd6fc0770a0 msgr2=0x7fd6fc075500 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:13:28.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 --2- 192.168.123.107:0/3739215740 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd6fc0770a0 0x7fd6fc075500 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7fd6e8009a30 tx=0x7fd6e802f220 comp rx=0 tx=0).stop
2026-03-10T10:13:28.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 -- 192.168.123.107:0/3739215740 shutdown_connections
2026-03-10T10:13:28.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 --2- 192.168.123.107:0/3739215740 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd6fc1064c0 0x7fd6fc1113d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:28.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 --2- 192.168.123.107:0/3739215740 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd6fc075a40 0x7fd6fc075ea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:28.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 --2- 192.168.123.107:0/3739215740 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd6fc0770a0 0x7fd6fc075500 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:28.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 -- 192.168.123.107:0/3739215740 >> 192.168.123.107:0/3739215740 conn(0x7fd6fc0fe290 msgr2=0x7fd6fc1006b0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:13:28.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 -- 192.168.123.107:0/3739215740 shutdown_connections
2026-03-10T10:13:28.325 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 -- 192.168.123.107:0/3739215740 wait complete.
2026-03-10T10:13:28.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 Processor -- start
2026-03-10T10:13:28.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 -- start start
2026-03-10T10:13:28.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd6fc075a40 0x7fd6fc1a0a10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:13:28.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd6fc0770a0 0x7fd6fc1a0f50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:13:28.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd6fc1064c0 0x7fd6fc1a7fd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:13:28.326 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fd6fc114400 con 0x7fd6fc0770a0
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6faffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd6fc0770a0 0x7fd6fc1a0f50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fbfff640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd6fc1064c0 0x7fd6fc1a7fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fbfff640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd6fc1064c0 0x7fd6fc1a7fd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.107:35076/0 (socket says 192.168.123.107:35076)
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fbfff640 1 -- 192.168.123.107:0/4058216767 learned_addr learned my addr 192.168.123.107:0/4058216767 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fd6fc114280 con 0x7fd6fc075a40
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fd6fc114580 con 0x7fd6fc1064c0
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fb7fe640 1 --2- 192.168.123.107:0/4058216767 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd6fc075a40 0x7fd6fc1a0a10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fb7fe640 1 -- 192.168.123.107:0/4058216767 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd6fc1064c0 msgr2=0x7fd6fc1a7fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fb7fe640 1 --2- 192.168.123.107:0/4058216767 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd6fc1064c0 0x7fd6fc1a7fd0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fb7fe640 1 -- 192.168.123.107:0/4058216767 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd6fc0770a0 msgr2=0x7fd6fc1a0f50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fb7fe640 1 --2- 192.168.123.107:0/4058216767 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd6fc0770a0 0x7fd6fc1a0f50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fb7fe640 1 -- 192.168.123.107:0/4058216767 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd6fc1a8640 con 0x7fd6fc075a40
2026-03-10T10:13:28.327 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fbfff640 1 --2- 192.168.123.107:0/4058216767 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd6fc1064c0 0x7fd6fc1a7fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T10:13:28.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6faffd640 1 --2- 192.168.123.107:0/4058216767 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd6fc0770a0 0x7fd6fc1a0f50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T10:13:28.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6fb7fe640 1 --2- 192.168.123.107:0/4058216767 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd6fc075a40 0x7fd6fc1a0a10 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7fd6e8009a00 tx=0x7fd6e802fdf0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:13:28.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6f8ff9640 1 -- 192.168.123.107:0/4058216767 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd6e8004280 con 0x7fd6fc075a40 2026-03-10T10:13:28.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd6f8ff9640 1 -- 192.168.123.107:0/4058216767 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fd6e8004420 con 0x7fd6fc075a40 2026-03-10T10:13:28.328 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.322+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd6fc1a88d0 con 0x7fd6fc075a40 2026-03-10T10:13:28.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.326+0000 7fd6f8ff9640 1 -- 192.168.123.107:0/4058216767 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd6e8005590 con 0x7fd6fc075a40 2026-03-10T10:13:28.329 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.326+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd6fc1a8d60 con 0x7fd6fc075a40 2026-03-10T10:13:28.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.326+0000 7fd6f8ff9640 1 -- 192.168.123.107:0/4058216767 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7fd6e8038770 con 0x7fd6fc075a40 2026-03-10T10:13:28.330 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.326+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd6fc075ea0 con 0x7fd6fc075a40 2026-03-10T10:13:28.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.330+0000 7fd6f8ff9640 1 --2- 192.168.123.107:0/4058216767 >> v2:192.168.123.104:6800/632047608 conn(0x7fd6d0077610 0x7fd6d0079ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:13:28.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.330+0000 7fd6f8ff9640 1 -- 192.168.123.107:0/4058216767 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(45..45 src has 1..45) ==== 3769+0+0 (secure 0 0 0) 0x7fd6e80c1170 con 0x7fd6fc075a40 2026-03-10T10:13:28.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.330+0000 7fd6f8ff9640 1 -- 192.168.123.107:0/4058216767 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd6e808ed00 con 0x7fd6fc075a40 2026-03-10T10:13:28.333 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.330+0000 7fd6faffd640 1 --2- 192.168.123.107:0/4058216767 >> v2:192.168.123.104:6800/632047608 conn(0x7fd6d0077610 0x7fd6d0079ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-10T10:13:28.337 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.334+0000 7fd6faffd640 1 --2- 192.168.123.107:0/4058216767 >> v2:192.168.123.104:6800/632047608 conn(0x7fd6d0077610 0x7fd6d0079ad0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fd6fc1a1f30 tx=0x7fd6ec00a400 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:13:28.433 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:28.426+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}) -- 0x7fd6fc0008d0 con 0x7fd6d0077610 2026-03-10T10:13:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:28 vm04 bash[28289]: cluster 2026-03-10T10:13:27.641921+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T10:13:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:28 vm04 bash[28289]: cluster 2026-03-10T10:13:27.641921+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T10:13:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:28 vm04 bash[28289]: audit 2026-03-10T10:13:28.436386+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:13:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:28 vm04 bash[28289]: audit 2026-03-10T10:13:28.436386+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:13:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:28 vm04 bash[28289]: audit 2026-03-10T10:13:28.437857+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:13:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:28 vm04 bash[28289]: audit 2026-03-10T10:13:28.437857+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:13:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:28 vm04 bash[28289]: audit 2026-03-10T10:13:28.438308+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:28 vm04 bash[28289]: audit 2026-03-10T10:13:28.438308+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:28 vm04 bash[20742]: cluster 2026-03-10T10:13:27.641921+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T10:13:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:28 vm04 bash[20742]: cluster 2026-03-10T10:13:27.641921+0000 mgr.y 
(mgr.14150) 208 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T10:13:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:28 vm04 bash[20742]: audit 2026-03-10T10:13:28.436386+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:13:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:28 vm04 bash[20742]: audit 2026-03-10T10:13:28.436386+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:13:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:28 vm04 bash[20742]: audit 2026-03-10T10:13:28.437857+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:13:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:28 vm04 bash[20742]: audit 2026-03-10T10:13:28.437857+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:13:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:28 vm04 bash[20742]: audit 2026-03-10T10:13:28.438308+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:28 vm04 bash[20742]: audit 2026-03-10T10:13:28.438308+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:29.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:28 vm07 bash[23367]: cluster 2026-03-10T10:13:27.641921+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T10:13:29.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:28 vm07 bash[23367]: cluster 2026-03-10T10:13:27.641921+0000 mgr.y (mgr.14150) 208 : cluster [DBG] pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T10:13:29.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:28 vm07 bash[23367]: audit 2026-03-10T10:13:28.436386+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:13:29.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:28 vm07 bash[23367]: audit 2026-03-10T10:13:28.436386+0000 mon.a (mon.0) 577 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T10:13:29.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:28 vm07 bash[23367]: audit 2026-03-10T10:13:28.437857+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:13:29.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:28 vm07 
bash[23367]: audit 2026-03-10T10:13:28.437857+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T10:13:29.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:28 vm07 bash[23367]: audit 2026-03-10T10:13:28.438308+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:29.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:28 vm07 bash[23367]: audit 2026-03-10T10:13:28.438308+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:30.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:29 vm07 bash[23367]: audit 2026-03-10T10:13:28.434946+0000 mgr.y (mgr.14150) 209 : audit [DBG] from='client.24265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:13:30.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:29 vm07 bash[23367]: audit 2026-03-10T10:13:28.434946+0000 mgr.y (mgr.14150) 209 : audit [DBG] from='client.24265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:13:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:29 vm04 bash[28289]: audit 2026-03-10T10:13:28.434946+0000 mgr.y (mgr.14150) 209 : audit [DBG] from='client.24265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:13:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:29 vm04 bash[28289]: audit 2026-03-10T10:13:28.434946+0000 mgr.y (mgr.14150) 209 : audit [DBG] from='client.24265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:13:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:29 vm04 bash[20742]: audit 2026-03-10T10:13:28.434946+0000 mgr.y (mgr.14150) 209 : audit [DBG] from='client.24265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:13:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:29 vm04 bash[20742]: audit 2026-03-10T10:13:28.434946+0000 mgr.y (mgr.14150) 209 : audit [DBG] from='client.24265 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:13:31.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:30 vm07 bash[23367]: cluster 2026-03-10T10:13:29.642138+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-10T10:13:31.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:30 vm07 bash[23367]: cluster 2026-03-10T10:13:29.642138+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-10T10:13:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:30 vm04 bash[28289]: cluster 2026-03-10T10:13:29.642138+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 
GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-10T10:13:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:30 vm04 bash[28289]: cluster 2026-03-10T10:13:29.642138+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-10T10:13:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:30 vm04 bash[20742]: cluster 2026-03-10T10:13:29.642138+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-10T10:13:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:30 vm04 bash[20742]: cluster 2026-03-10T10:13:29.642138+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 40 KiB/s, 0 objects/s recovering 2026-03-10T10:13:32.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:31 vm07 bash[23367]: cluster 2026-03-10T10:13:31.642410+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:13:32.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:31 vm07 bash[23367]: cluster 2026-03-10T10:13:31.642410+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:13:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:31 vm04 bash[28289]: cluster 2026-03-10T10:13:31.642410+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:13:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:31 vm04 bash[28289]: cluster 2026-03-10T10:13:31.642410+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:13:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:31 vm04 bash[20742]: cluster 2026-03-10T10:13:31.642410+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:13:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:31 vm04 bash[20742]: cluster 2026-03-10T10:13:31.642410+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: cluster 2026-03-10T10:13:33.642681+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: cluster 2026-03-10T10:13:33.642681+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: audit 2026-03-10T10:13:34.198526+0000 mon.b (mon.1) 19 : audit [INF] from='client.? 
192.168.123.107:0/2040843709' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: audit 2026-03-10T10:13:34.198526+0000 mon.b (mon.1) 19 : audit [INF] from='client.? 192.168.123.107:0/2040843709' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: audit 2026-03-10T10:13:34.199552+0000 mon.a (mon.0) 580 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: audit 2026-03-10T10:13:34.199552+0000 mon.a (mon.0) 580 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: audit 2026-03-10T10:13:34.202807+0000 mon.a (mon.0) 581 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]': finished 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: audit 2026-03-10T10:13:34.202807+0000 mon.a (mon.0) 581 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]': finished 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: cluster 2026-03-10T10:13:34.205511+0000 mon.a (mon.0) 582 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: cluster 2026-03-10T10:13:34.205511+0000 mon.a (mon.0) 582 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: audit 2026-03-10T10:13:34.205611+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T10:13:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:34 vm04 bash[28289]: audit 2026-03-10T10:13:34.205611+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: cluster 2026-03-10T10:13:33.642681+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: cluster 2026-03-10T10:13:33.642681+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: audit 2026-03-10T10:13:34.198526+0000 mon.b (mon.1) 19 : audit [INF] from='client.? 
192.168.123.107:0/2040843709' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: audit 2026-03-10T10:13:34.198526+0000 mon.b (mon.1) 19 : audit [INF] from='client.? 192.168.123.107:0/2040843709' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: audit 2026-03-10T10:13:34.199552+0000 mon.a (mon.0) 580 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: audit 2026-03-10T10:13:34.199552+0000 mon.a (mon.0) 580 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: audit 2026-03-10T10:13:34.202807+0000 mon.a (mon.0) 581 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]': finished 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: audit 2026-03-10T10:13:34.202807+0000 mon.a (mon.0) 581 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]': finished 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: cluster 2026-03-10T10:13:34.205511+0000 mon.a (mon.0) 582 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: cluster 2026-03-10T10:13:34.205511+0000 mon.a (mon.0) 582 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: audit 2026-03-10T10:13:34.205611+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T10:13:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:34 vm04 bash[20742]: audit 2026-03-10T10:13:34.205611+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: cluster 2026-03-10T10:13:33.642681+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: cluster 2026-03-10T10:13:33.642681+0000 mgr.y (mgr.14150) 212 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: audit 2026-03-10T10:13:34.198526+0000 mon.b (mon.1) 19 : audit [INF] from='client.? 
192.168.123.107:0/2040843709' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: audit 2026-03-10T10:13:34.198526+0000 mon.b (mon.1) 19 : audit [INF] from='client.? 192.168.123.107:0/2040843709' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: audit 2026-03-10T10:13:34.199552+0000 mon.a (mon.0) 580 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: audit 2026-03-10T10:13:34.199552+0000 mon.a (mon.0) 580 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]: dispatch 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: audit 2026-03-10T10:13:34.202807+0000 mon.a (mon.0) 581 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]': finished 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: audit 2026-03-10T10:13:34.202807+0000 mon.a (mon.0) 581 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a27eb8fa-556b-467c-bdba-9d899e37064a"}]': finished 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: cluster 2026-03-10T10:13:34.205511+0000 mon.a (mon.0) 582 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: cluster 2026-03-10T10:13:34.205511+0000 mon.a (mon.0) 582 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: audit 2026-03-10T10:13:34.205611+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T10:13:35.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:34 vm07 bash[23367]: audit 2026-03-10T10:13:34.205611+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T10:13:36.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:35 vm07 bash[23367]: audit 2026-03-10T10:13:34.791211+0000 mon.b (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2933230298' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:36.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:35 vm07 bash[23367]: audit 2026-03-10T10:13:34.791211+0000 mon.b (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2933230298' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:35 vm04 bash[28289]: audit 2026-03-10T10:13:34.791211+0000 mon.b (mon.1) 20 : audit [DBG] from='client.? 
192.168.123.107:0/2933230298' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:35 vm04 bash[28289]: audit 2026-03-10T10:13:34.791211+0000 mon.b (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2933230298' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:35 vm04 bash[20742]: audit 2026-03-10T10:13:34.791211+0000 mon.b (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2933230298' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:35 vm04 bash[20742]: audit 2026-03-10T10:13:34.791211+0000 mon.b (mon.1) 20 : audit [DBG] from='client.? 192.168.123.107:0/2933230298' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T10:13:37.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:36 vm07 bash[23367]: cluster 2026-03-10T10:13:35.642955+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:37.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:36 vm07 bash[23367]: cluster 2026-03-10T10:13:35.642955+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:36 vm04 bash[28289]: cluster 2026-03-10T10:13:35.642955+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:36 vm04 bash[28289]: cluster 2026-03-10T10:13:35.642955+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:36 vm04 bash[20742]: cluster 2026-03-10T10:13:35.642955+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:36 vm04 bash[20742]: cluster 2026-03-10T10:13:35.642955+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:39.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:38 vm07 bash[23367]: cluster 2026-03-10T10:13:37.643210+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:39.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:38 vm07 bash[23367]: cluster 2026-03-10T10:13:37.643210+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:38 vm04 bash[28289]: cluster 2026-03-10T10:13:37.643210+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:38 vm04 bash[28289]: cluster 2026-03-10T10:13:37.643210+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 
140 GiB avail 2026-03-10T10:13:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:38 vm04 bash[20742]: cluster 2026-03-10T10:13:37.643210+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:38 vm04 bash[20742]: cluster 2026-03-10T10:13:37.643210+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:41.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:40 vm04 bash[28289]: cluster 2026-03-10T10:13:39.643469+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:41.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:40 vm04 bash[28289]: cluster 2026-03-10T10:13:39.643469+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:40 vm04 bash[20742]: cluster 2026-03-10T10:13:39.643469+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:40 vm04 bash[20742]: cluster 2026-03-10T10:13:39.643469+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:41.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:40 vm07 bash[23367]: cluster 2026-03-10T10:13:39.643469+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:41.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:40 vm07 bash[23367]: cluster 2026-03-10T10:13:39.643469+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:42.530 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:42 vm07 bash[23367]: cluster 2026-03-10T10:13:41.643762+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:42.530 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:42 vm07 bash[23367]: cluster 2026-03-10T10:13:41.643762+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:42 vm04 bash[28289]: cluster 2026-03-10T10:13:41.643762+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:42 vm04 bash[28289]: cluster 2026-03-10T10:13:41.643762+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:42 vm04 bash[20742]: cluster 2026-03-10T10:13:41.643762+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:42 vm04 bash[20742]: cluster 
2026-03-10T10:13:41.643762+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:43.435 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:43 vm07 bash[23367]: audit 2026-03-10T10:13:42.921041+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T10:13:43.436 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:43 vm07 bash[23367]: audit 2026-03-10T10:13:42.921041+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T10:13:43.436 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:43 vm07 bash[23367]: audit 2026-03-10T10:13:42.921480+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:43.436 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:43 vm07 bash[23367]: audit 2026-03-10T10:13:42.921480+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:43 vm04 bash[28289]: audit 2026-03-10T10:13:42.921041+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T10:13:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:43 vm04 bash[28289]: audit 2026-03-10T10:13:42.921041+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T10:13:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:43 vm04 bash[28289]: audit 2026-03-10T10:13:42.921480+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:43 vm04 bash[28289]: audit 2026-03-10T10:13:42.921480+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:43.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:43 vm04 bash[20742]: audit 2026-03-10T10:13:42.921041+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T10:13:43.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:43 vm04 bash[20742]: audit 2026-03-10T10:13:42.921041+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T10:13:43.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:43 vm04 bash[20742]: audit 2026-03-10T10:13:42.921480+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:43.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:43 vm04 bash[20742]: audit 2026-03-10T10:13:42.921480+0000 mon.a (mon.0) 585 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:43.735 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:43 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:43.735 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:13:43 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:43.735 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:13:43 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:43.735 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:13:43 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:43.735 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:13:43 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:44.011 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:13:43 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:44.011 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:13:43 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:44.011 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:13:43 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:44.011 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:13:43 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:44.011 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:43 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:44 vm04 bash[28289]: cephadm 2026-03-10T10:13:42.921829+0000 mgr.y (mgr.14150) 217 : cephadm [INF] Deploying daemon osd.7 on vm07 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:44 vm04 bash[28289]: cephadm 2026-03-10T10:13:42.921829+0000 mgr.y (mgr.14150) 217 : cephadm [INF] Deploying daemon osd.7 on vm07 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:44 vm04 bash[28289]: cluster 2026-03-10T10:13:43.644020+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:44 vm04 bash[28289]: cluster 2026-03-10T10:13:43.644020+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:44 vm04 bash[28289]: audit 2026-03-10T10:13:43.979437+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:44 vm04 bash[28289]: audit 2026-03-10T10:13:43.979437+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:44 vm04 bash[28289]: audit 2026-03-10T10:13:43.984113+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:44 vm04 bash[28289]: audit 2026-03-10T10:13:43.984113+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:44 vm04 bash[28289]: audit 2026-03-10T10:13:43.988081+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:44 vm04 bash[28289]: audit 2026-03-10T10:13:43.988081+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:44.703 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:44 vm04 bash[20742]: cephadm 2026-03-10T10:13:42.921829+0000 mgr.y (mgr.14150) 217 : cephadm [INF] Deploying daemon osd.7 on vm07 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:44 vm04 bash[20742]: cephadm 2026-03-10T10:13:42.921829+0000 mgr.y (mgr.14150) 217 : cephadm [INF] Deploying daemon osd.7 on vm07 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:44 vm04 bash[20742]: cluster 2026-03-10T10:13:43.644020+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:44 vm04 bash[20742]: cluster 2026-03-10T10:13:43.644020+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:44 vm04 bash[20742]: audit 2026-03-10T10:13:43.979437+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:13:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:44 vm04 bash[20742]: audit 2026-03-10T10:13:43.979437+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:13:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:44 vm04 bash[20742]: audit 2026-03-10T10:13:43.984113+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:44 vm04 bash[20742]: audit 2026-03-10T10:13:43.984113+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:44 vm04 bash[20742]: audit 2026-03-10T10:13:43.988081+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:44 vm04 bash[20742]: audit 2026-03-10T10:13:43.988081+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:44.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:44 vm07 bash[23367]: cephadm 2026-03-10T10:13:42.921829+0000 mgr.y (mgr.14150) 217 : cephadm [INF] Deploying daemon osd.7 on vm07 2026-03-10T10:13:44.764 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:44 vm07 bash[23367]: cephadm 2026-03-10T10:13:42.921829+0000 mgr.y (mgr.14150) 217 : cephadm [INF] Deploying daemon osd.7 on vm07 2026-03-10T10:13:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:44 vm07 bash[23367]: cluster 2026-03-10T10:13:43.644020+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:44 vm07 bash[23367]: cluster 2026-03-10T10:13:43.644020+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T10:13:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:44 vm07 bash[23367]: audit 2026-03-10T10:13:43.979437+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' 
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:13:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:44 vm07 bash[23367]: audit 2026-03-10T10:13:43.984113+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:44 vm07 bash[23367]: audit 2026-03-10T10:13:43.988081+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:47.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:46 vm07 bash[23367]: cluster 2026-03-10T10:13:45.644269+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:46 vm04 bash[28289]: cluster 2026-03-10T10:13:45.644269+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:46 vm04 bash[20742]: cluster 2026-03-10T10:13:45.644269+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:48.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:47 vm07 bash[23367]: audit 2026-03-10T10:13:47.057389+0000 mon.b (mon.1) 21 : audit [INF] from='osd.7 v2:192.168.123.107:6812/4141831103' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T10:13:48.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:47 vm07 bash[23367]: audit 2026-03-10T10:13:47.058412+0000 mon.a (mon.0) 589 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T10:13:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:47 vm04 bash[28289]: audit 2026-03-10T10:13:47.057389+0000 mon.b (mon.1) 21 : audit [INF] from='osd.7 v2:192.168.123.107:6812/4141831103' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T10:13:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:47 vm04 bash[28289]: audit 2026-03-10T10:13:47.058412+0000 mon.a (mon.0) 589 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T10:13:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:47 vm04 bash[20742]: audit 2026-03-10T10:13:47.057389+0000 mon.b (mon.1) 21 : audit [INF] from='osd.7 v2:192.168.123.107:6812/4141831103' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T10:13:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:47 vm04 bash[20742]: audit 2026-03-10T10:13:47.058412+0000 mon.a (mon.0) 589 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T10:13:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:48 vm07 bash[23367]: cluster 2026-03-10T10:13:47.644525+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:48 vm07 bash[23367]: audit 2026-03-10T10:13:47.724046+0000 mon.a (mon.0) 590 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T10:13:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:48 vm07 bash[23367]: audit 2026-03-10T10:13:47.726172+0000 mon.b (mon.1) 22 : audit [INF] from='osd.7 v2:192.168.123.107:6812/4141831103' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:13:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:48 vm07 bash[23367]: cluster 2026-03-10T10:13:47.726662+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in
2026-03-10T10:13:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:48 vm07 bash[23367]: audit 2026-03-10T10:13:47.727661+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:48 vm07 bash[23367]: audit 2026-03-10T10:13:47.728114+0000 mon.a (mon.0) 593 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:13:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:48 vm07 bash[23367]: audit 2026-03-10T10:13:48.726792+0000 mon.a (mon.0) 594 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T10:13:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:48 vm07 bash[23367]: cluster 2026-03-10T10:13:48.729070+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-10T10:13:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:48 vm04 bash[28289]: cluster 2026-03-10T10:13:47.644525+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:48 vm04 bash[28289]: audit 2026-03-10T10:13:47.724046+0000 mon.a (mon.0) 590 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T10:13:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:48 vm04 bash[28289]: audit 2026-03-10T10:13:47.726172+0000 mon.b (mon.1) 22 : audit [INF] from='osd.7 v2:192.168.123.107:6812/4141831103' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:13:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:48 vm04 bash[28289]: cluster 2026-03-10T10:13:47.726662+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in
2026-03-10T10:13:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:48 vm04 bash[28289]: audit 2026-03-10T10:13:47.727661+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:48 vm04 bash[28289]: audit 2026-03-10T10:13:47.728114+0000 mon.a (mon.0) 593 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:13:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:48 vm04 bash[28289]: audit 2026-03-10T10:13:48.726792+0000 mon.a (mon.0) 594 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T10:13:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:48 vm04 bash[28289]: cluster 2026-03-10T10:13:48.729070+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-10T10:13:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:48 vm04 bash[20742]: cluster 2026-03-10T10:13:47.644525+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:48 vm04 bash[20742]: audit 2026-03-10T10:13:47.724046+0000 mon.a (mon.0) 590 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T10:13:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:48 vm04 bash[20742]: audit 2026-03-10T10:13:47.726172+0000 mon.b (mon.1) 22 : audit [INF] from='osd.7 v2:192.168.123.107:6812/4141831103' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:13:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:48 vm04 bash[20742]: cluster 2026-03-10T10:13:47.726662+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in
2026-03-10T10:13:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:48 vm04 bash[20742]: audit 2026-03-10T10:13:47.727661+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:48 vm04 bash[20742]: audit 2026-03-10T10:13:47.728114+0000 mon.a (mon.0) 593 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T10:13:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:48 vm04 bash[20742]: audit 2026-03-10T10:13:48.726792+0000 mon.a (mon.0) 594 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T10:13:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:48 vm04 bash[20742]: cluster 2026-03-10T10:13:48.729070+0000 mon.a (mon.0) 595 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-10T10:13:50.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:49 vm07 bash[23367]: audit 2026-03-10T10:13:48.729203+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:50.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:49 vm07 bash[23367]: audit 2026-03-10T10:13:48.736372+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:49 vm04 bash[28289]: audit 2026-03-10T10:13:48.729203+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:49 vm04 bash[28289]: audit 2026-03-10T10:13:48.736372+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:49 vm04 bash[20742]: audit 2026-03-10T10:13:48.729203+0000 mon.a (mon.0) 596 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:49 vm04 bash[20742]: audit 2026-03-10T10:13:48.736372+0000 mon.a (mon.0) 597 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:51.007 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 7 on host 'vm07'
2026-03-10T10:13:51.007 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.002+0000 7fd6f8ff9640 1 -- 192.168.123.107:0/4058216767 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+32 (secure 0 0 0) 0x7fd6fc0008d0 con 0x7fd6d0077610
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 >> v2:192.168.123.104:6800/632047608 conn(0x7fd6d0077610 msgr2=0x7fd6d0079ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 --2- 192.168.123.107:0/4058216767 >> v2:192.168.123.104:6800/632047608 conn(0x7fd6d0077610 0x7fd6d0079ad0 secure :-1 s=READY pgs=83 cs=0 l=1 rev1=1 crypto rx=0x7fd6fc1a1f30 tx=0x7fd6ec00a400 comp rx=0 tx=0).stop
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd6fc075a40 msgr2=0x7fd6fc1a0a10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 --2- 192.168.123.107:0/4058216767 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd6fc075a40 0x7fd6fc1a0a10 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7fd6e8009a00 tx=0x7fd6e802fdf0 comp rx=0 tx=0).stop
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 shutdown_connections
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 --2- 192.168.123.107:0/4058216767 >> v2:192.168.123.104:6800/632047608 conn(0x7fd6d0077610 0x7fd6d0079ad0 unknown :-1 s=CLOSED pgs=83 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 --2- 192.168.123.107:0/4058216767 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd6fc1064c0 0x7fd6fc1a7fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 --2- 192.168.123.107:0/4058216767 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd6fc0770a0 0x7fd6fc1a0f50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 --2- 192.168.123.107:0/4058216767 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd6fc075a40 0x7fd6fc1a0a10 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 >> 192.168.123.107:0/4058216767 conn(0x7fd6fc0fe290 msgr2=0x7fd6fc100680 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 shutdown_connections
2026-03-10T10:13:51.010 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:13:51.006+0000 7fd701f55640 1 -- 192.168.123.107:0/4058216767 wait complete.
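The audit trail above records each mon command at two points in its lifetime: a `dispatch` entry when it reaches a monitor (here `osd crush set-device-class` and `osd crush create-or-move`, issued by osd.7 as it registers itself) and a `finished` entry once the update commits, with the osdmap epoch bumping in between (e47 -> e48). A minimal sketch, assuming only the record format visible in this log (the regex and helper name are mine, not teuthology's), that pairs the two phases and measures per-command latency:

import re
from datetime import datetime

# Matches the mon audit records above, e.g.
#   audit 2026-03-10T10:13:47.058412+0000 mon.a (mon.0) 589 : audit [INF]
#   from='osd.7 ' entity='osd.7' cmd=[{...}]: dispatch
AUDIT = re.compile(
    r"audit (?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+)\+0000 "
    r"mon\.\w+ \(mon\.\d+\) \d+ : audit \[\w+\] "
    r"from='[^']*' entity='(?P<entity>[^']*)' "
    r"cmd='?(?P<cmd>\[.*?\])'?: (?P<phase>dispatch|finished)"
)

def command_latencies(lines):
    """Yield (entity, cmd, seconds) for every dispatch/finished pair."""
    pending = {}  # (entity, cmd) -> earliest unmatched dispatch time
    for line in lines:
        m = AUDIT.search(line)
        if not m:
            continue  # audit entries without a cmd=... payload are skipped
        ts = datetime.fromisoformat(m["ts"])
        key = (m["entity"], m["cmd"])
        if m["phase"] == "dispatch":
            pending.setdefault(key, ts)
        elif key in pending:
            yield m["entity"], m["cmd"], (ts - pending.pop(key)).total_seconds()

Fed the entries above, the set-device-class pair (dispatch 21/589, finished 590) works out to roughly 0.7 s and the create-or-move pair (22/593, finished 594) to roughly 1.0 s; the mirrored copies in each mon's journal simply reproduce the same pairs.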
2026-03-10T10:13:51.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: cluster 2026-03-10T10:13:48.010520+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: cluster 2026-03-10T10:13:48.010549+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: cluster 2026-03-10T10:13:49.644736+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: audit 2026-03-10T10:13:49.733625+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: cluster 2026-03-10T10:13:49.745804+0000 mon.a (mon.0) 599 : cluster [INF] osd.7 v2:192.168.123.107:6812/4141831103 boot
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: cluster 2026-03-10T10:13:49.745848+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: audit 2026-03-10T10:13:49.745989+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: audit 2026-03-10T10:13:50.134712+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: audit 2026-03-10T10:13:50.139156+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: audit 2026-03-10T10:13:50.140049+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: audit 2026-03-10T10:13:50.140595+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:13:51.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:50 vm07 bash[23367]: audit 2026-03-10T10:13:50.144620+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:51.073 DEBUG:teuthology.orchestra.run.vm07:osd.7> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.7.service
2026-03-10T10:13:51.073 INFO:tasks.cephadm:Waiting for 8 OSDs to come up...
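tasks.cephadm now polls the cluster through `cephadm shell -- ceph osd stat -f json` (the DEBUG line below) until the osdmap reports all eight OSDs up. A minimal sketch of such a wait loop, reusing the image and fsid from this run; the helper name, timeout and interval are illustrative, not teuthology's actual code:

import json
import subprocess
import time

IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
FSID = "e4c1c9d6-1c68-11f1-a9bd-116050875839"

def wait_for_osds(want, timeout=600, interval=5):
    """Poll `ceph osd stat -f json` via cephadm until `want` OSDs are up."""
    cmd = [
        "sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE,
        "shell", "--fsid", FSID, "--",
        "ceph", "osd", "stat", "-f", "json",
    ]
    deadline = time.time() + timeout
    stat = {"num_up_osds": 0}
    while stat["num_up_osds"] < want:
        if time.time() > deadline:
            raise TimeoutError(f"only {stat['num_up_osds']}/{want} OSDs up")
        stat = json.loads(subprocess.check_output(cmd))
        if stat["num_up_osds"] < want:
            time.sleep(interval)
    return stat

On the stdout captured further down ({"epoch":51,...,"num_up_osds":8,...}) the predicate is already satisfied, so a loop like this returns after its first probe.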
2026-03-10T10:13:51.073 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd stat -f json
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: cluster 2026-03-10T10:13:48.010520+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: cluster 2026-03-10T10:13:48.010549+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: cluster 2026-03-10T10:13:49.644736+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: audit 2026-03-10T10:13:49.733625+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: cluster 2026-03-10T10:13:49.745804+0000 mon.a (mon.0) 599 : cluster [INF] osd.7 v2:192.168.123.107:6812/4141831103 boot
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: cluster 2026-03-10T10:13:49.745848+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: audit 2026-03-10T10:13:49.745989+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: audit 2026-03-10T10:13:50.134712+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: audit 2026-03-10T10:13:50.139156+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: audit 2026-03-10T10:13:50.140049+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: audit 2026-03-10T10:13:50.140595+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:13:51.079 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:50 vm04 bash[28289]: audit 2026-03-10T10:13:50.144620+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: cluster 2026-03-10T10:13:48.010520+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: cluster 2026-03-10T10:13:48.010549+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: cluster 2026-03-10T10:13:49.644736+0000 mgr.y (mgr.14150) 221 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: audit 2026-03-10T10:13:49.733625+0000 mon.a (mon.0) 598 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: cluster 2026-03-10T10:13:49.745804+0000 mon.a (mon.0) 599 : cluster [INF] osd.7 v2:192.168.123.107:6812/4141831103 boot
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: cluster 2026-03-10T10:13:49.745848+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: audit 2026-03-10T10:13:49.745989+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: audit 2026-03-10T10:13:50.134712+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: audit 2026-03-10T10:13:50.139156+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: audit 2026-03-10T10:13:50.140049+0000 mon.a (mon.0) 604 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: audit 2026-03-10T10:13:50.140595+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:13:51.080 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:50 vm04 bash[20742]: audit 2026-03-10T10:13:50.144620+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:52.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:51 vm07 bash[23367]: cluster 2026-03-10T10:13:50.744581+0000 mon.a (mon.0) 607 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in
2026-03-10T10:13:52.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:51 vm07 bash[23367]: audit 2026-03-10T10:13:50.993138+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:13:52.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:51 vm07 bash[23367]: audit 2026-03-10T10:13:51.000449+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:52.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:51 vm07 bash[23367]: audit 2026-03-10T10:13:51.006416+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:51 vm04 bash[28289]: cluster 2026-03-10T10:13:50.744581+0000 mon.a (mon.0) 607 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in
2026-03-10T10:13:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:51 vm04 bash[28289]: audit 2026-03-10T10:13:50.993138+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:13:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:51 vm04 bash[28289]: audit 2026-03-10T10:13:51.000449+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:51 vm04 bash[28289]: audit 2026-03-10T10:13:51.006416+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:51 vm04 bash[20742]: cluster 2026-03-10T10:13:50.744581+0000 mon.a (mon.0) 607 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in
2026-03-10T10:13:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:51 vm04 bash[20742]: audit 2026-03-10T10:13:50.993138+0000 mon.a (mon.0) 608 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:13:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:51 vm04 bash[20742]: audit 2026-03-10T10:13:51.000449+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:51 vm04 bash[20742]: audit 2026-03-10T10:13:51.006416+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:13:53.014 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:52 vm07 bash[23367]: cluster 2026-03-10T10:13:51.644967+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:13:53.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:52 vm07 bash[23367]: cluster 2026-03-10T10:13:51.756169+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-10T10:13:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:52 vm04 bash[28289]: cluster 2026-03-10T10:13:51.644967+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:13:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:52 vm04 bash[28289]: cluster 2026-03-10T10:13:51.756169+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-10T10:13:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:52 vm04 bash[20742]: cluster 2026-03-10T10:13:51.644967+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:13:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:52 vm04 bash[20742]: cluster 2026-03-10T10:13:51.756169+0000 mon.a (mon.0) 611 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-10T10:13:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:54 vm04 bash[28289]: cluster 2026-03-10T10:13:53.645200+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:13:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:54 vm04 bash[20742]: cluster 2026-03-10T10:13:53.645200+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:13:55.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:54 vm07 bash[23367]: cluster 2026-03-10T10:13:53.645200+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:13:55.694 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:13:55.852 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3187175864 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2a1c103cd0 msgr2=0x7f2a1c104150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:13:55.852 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 --2- 192.168.123.104:0/3187175864 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2a1c103cd0 0x7f2a1c104150 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f2a10009a30 tx=0x7f2a1002f260 comp rx=0 tx=0).stop
2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3187175864 shutdown_connections
2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 --2- 192.168.123.104:0/3187175864 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a1c104690 0x7f2a1c10af20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 --2- 192.168.123.104:0/3187175864 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2a1c103cd0 0x7f2a1c104150 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 --2- 192.168.123.104:0/3187175864 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2a1c102ad0 0x7f2a1c102ed0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3187175864 >> 192.168.123.104:0/3187175864 conn(0x7f2a1c0fe2c0 msgr2=0x7f2a1c1006e0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3187175864 shutdown_connections
2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3187175864 wait complete.
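The stderr above, and the reconnect that follows, is msgr2 messenger debug produced by the `debug ms: 1` override: each connection is traced through its handshake states (NONE -> BANNER_CONNECTING -> HELLO_CONNECTING -> AUTH_CONNECTING -> READY, then CLOSED after mark_down). A small sketch fitted to the conn(...) lines in these excerpts (the regex and helper name are mine, not Ceph's) that collects the state sequence observed per peer:

import re
from collections import defaultdict

# Matches e.g. ">> v2:192.168.123.104:6800/632047608 conn(0x... 0x... secure :-1 s=READY ..."
CONN = re.compile(
    r">> (?P<peer>\S+) conn\(0x[0-9a-f]+ (?:msgr2=)?0x[0-9a-f]+ "
    r"\w+ :-1 s=(?P<state>[A-Z_]+)"
)

def state_sequences(lines):
    """Map each peer address to the ordered, deduplicated states seen."""
    seen = defaultdict(list)
    for line in lines:
        m = CONN.search(line)
        if m and (not seen[m["peer"]] or seen[m["peer"]][-1] != m["state"]):
            seen[m["peer"]].append(m["state"])
    return dict(seen)

For the mon at 192.168.123.104:3301 this yields roughly CLOSED -> NONE -> BANNER_CONNECTING -> HELLO_CONNECTING -> READY across these lines, as the old shell session is torn down and the new one authenticates.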
2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 Processor -- start 2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 -- start start 2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2a1c102ad0 0x7f2a1c19c5e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a1c103cd0 0x7f2a1c19cb20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:13:55.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2a1c104690 0x7f2a1c1a3b30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a218e0640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2a1c102ad0 0x7f2a1c19c5e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a210df640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a1c103cd0 0x7f2a1c19cb20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a218e0640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2a1c102ad0 0x7f2a1c19c5e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:50886/0 (socket says 192.168.123.104:50886) 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f2a1c10dc40 con 0x7f2a1c104690 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f2a1c10dac0 con 0x7f2a1c103cd0 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a23b6b640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f2a1c10ddc0 con 0x7f2a1c102ad0 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a218e0640 1 -- 192.168.123.104:0/3312900559 learned_addr learned my addr 192.168.123.104:0/3312900559 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a220e1640 1 --2- 192.168.123.104:0/3312900559 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2a1c104690 0x7f2a1c1a3b30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:13:55.854 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a218e0640 1 -- 192.168.123.104:0/3312900559 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a1c103cd0 msgr2=0x7f2a1c19cb20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a218e0640 1 --2- 192.168.123.104:0/3312900559 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a1c103cd0 0x7f2a1c19cb20 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a218e0640 1 -- 192.168.123.104:0/3312900559 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2a1c104690 msgr2=0x7f2a1c1a3b30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a218e0640 1 --2- 192.168.123.104:0/3312900559 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2a1c104690 0x7f2a1c1a3b30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:55.854 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.849+0000 7f2a218e0640 1 -- 192.168.123.104:0/3312900559 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2a1c1a41a0 con 0x7f2a1c102ad0 2026-03-10T10:13:55.855 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a220e1640 1 --2- 192.168.123.104:0/3312900559 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2a1c104690 0x7f2a1c1a3b30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
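Once the connection is READY, the lines that follow show the client's command flow: it subscribes to the monmap and mgrmap, fetches get_command_descriptions, then sends mon_command({"prefix": "osd stat", "format": "json"}) and receives a mon_command_ack carrying the JSON payload. The same round trip can be made directly with the librados Python binding rather than through the cephadm shell; a minimal sketch, assuming a reachable cluster and the usual admin keyring (paths and error handling are illustrative):

import json
import rados

# Connect using the local ceph.conf, then issue the same mon command the
# shell sends above and decode the JSON reply from the ack.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "osd stat", "format": "json"}), b""
    )
    if ret == 0:
        print(json.loads(outbuf))  # e.g. {"epoch": 51, ..., "num_up_osds": 8, ...}
finally:
    cluster.shutdown()

The mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v51) seen below is that reply; teuthology then reads the JSON body from the shell's stdout.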
2026-03-10T10:13:55.855 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a218e0640 1 --2- 192.168.123.104:0/3312900559 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2a1c102ad0 0x7f2a1c19c5e0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f2a0c00cc70 tx=0x7f2a0c007590 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:13:55.856 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a0affd640 1 -- 192.168.123.104:0/3312900559 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2a0c013070 con 0x7f2a1c102ad0 2026-03-10T10:13:55.856 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a0affd640 1 -- 192.168.123.104:0/3312900559 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f2a0c0044e0 con 0x7f2a1c102ad0 2026-03-10T10:13:55.856 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a0affd640 1 -- 192.168.123.104:0/3312900559 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2a0c00f450 con 0x7f2a1c102ad0 2026-03-10T10:13:55.856 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3312900559 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f2a1c1a4490 con 0x7f2a1c102ad0 2026-03-10T10:13:55.856 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3312900559 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f2a1c1050c0 con 0x7f2a1c102ad0 2026-03-10T10:13:55.856 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3312900559 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2a1c102ed0 con 0x7f2a1c102ad0 2026-03-10T10:13:55.858 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a0affd640 1 -- 192.168.123.104:0/3312900559 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f2a0c020020 con 0x7f2a1c102ad0 2026-03-10T10:13:55.859 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a0affd640 1 --2- 192.168.123.104:0/3312900559 >> v2:192.168.123.104:6800/632047608 conn(0x7f29f4077600 0x7f29f4079ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:13:55.859 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.853+0000 7f2a0affd640 1 -- 192.168.123.104:0/3312900559 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(51..51 src has 1..51) ==== 4061+0+0 (secure 0 0 0) 0x7f2a0c0991e0 con 0x7f2a1c102ad0 2026-03-10T10:13:55.859 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.857+0000 7f2a210df640 1 --2- 192.168.123.104:0/3312900559 >> v2:192.168.123.104:6800/632047608 conn(0x7f29f4077600 0x7f29f4079ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:13:55.859 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.857+0000 7f2a210df640 1 --2- 192.168.123.104:0/3312900559 >> v2:192.168.123.104:6800/632047608 conn(0x7f29f4077600 0x7f29f4079ac0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f2a1c19da90 tx=0x7f2a1003a040 comp rx=0 tx=0).ready 
entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:13:55.860 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.857+0000 7f2a0affd640 1 -- 192.168.123.104:0/3312900559 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2a0c062b60 con 0x7f2a1c102ad0 2026-03-10T10:13:55.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.961+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3312900559 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd stat", "format": "json"} v 0) -- 0x7f2a1c105a10 con 0x7f2a1c102ad0 2026-03-10T10:13:55.965 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.961+0000 7f2a0affd640 1 -- 192.168.123.104:0/3312900559 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd stat", "format": "json"}]=0 v51) ==== 74+0+130 (secure 0 0 0) 0x7f2a0c066810 con 0x7f2a1c102ad0 2026-03-10T10:13:55.965 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:13:55.967 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3312900559 >> v2:192.168.123.104:6800/632047608 conn(0x7f29f4077600 msgr2=0x7f29f4079ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:13:55.967 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 --2- 192.168.123.104:0/3312900559 >> v2:192.168.123.104:6800/632047608 conn(0x7f29f4077600 0x7f29f4079ac0 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f2a1c19da90 tx=0x7f2a1003a040 comp rx=0 tx=0).stop 2026-03-10T10:13:55.967 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3312900559 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2a1c102ad0 msgr2=0x7f2a1c19c5e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:13:55.968 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 --2- 192.168.123.104:0/3312900559 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2a1c102ad0 0x7f2a1c19c5e0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f2a0c00cc70 tx=0x7f2a0c007590 comp rx=0 tx=0).stop 2026-03-10T10:13:55.968 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3312900559 shutdown_connections 2026-03-10T10:13:55.968 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 --2- 192.168.123.104:0/3312900559 >> v2:192.168.123.104:6800/632047608 conn(0x7f29f4077600 0x7f29f4079ac0 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:55.968 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 --2- 192.168.123.104:0/3312900559 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2a1c104690 0x7f2a1c1a3b30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:55.968 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 --2- 192.168.123.104:0/3312900559 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2a1c103cd0 0x7f2a1c19cb20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:55.968 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 --2- 192.168.123.104:0/3312900559 >> 
[v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2a1c102ad0 0x7f2a1c19c5e0 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:55.968 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3312900559 >> 192.168.123.104:0/3312900559 conn(0x7f2a1c0fe2c0 msgr2=0x7f2a1c0ffd50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:13:55.968 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3312900559 shutdown_connections 2026-03-10T10:13:55.968 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:55.965+0000 7f2a23b6b640 1 -- 192.168.123.104:0/3312900559 wait complete. 2026-03-10T10:13:55.977 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:55 vm04 bash[28289]: cluster 2026-03-10T10:13:55.645487+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:55.977 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:55 vm04 bash[28289]: cluster 2026-03-10T10:13:55.645487+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:55.978 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:55 vm04 bash[20742]: cluster 2026-03-10T10:13:55.645487+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:55.978 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:55 vm04 bash[20742]: cluster 2026-03-10T10:13:55.645487+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:56.021 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":51,"num_osds":8,"num_up_osds":8,"osd_up_since":1773137629,"num_in_osds":8,"osd_in_since":1773137614,"num_remapped_pgs":0} 2026-03-10T10:13:56.021 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd dump --format=json 2026-03-10T10:13:56.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:55 vm07 bash[23367]: cluster 2026-03-10T10:13:55.645487+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:56.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:55 vm07 bash[23367]: cluster 2026-03-10T10:13:55.645487+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:55.965778+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.104:0/3312900559' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:13:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:55.965778+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 
192.168.123.104:0/3312900559' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:13:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: cephadm 2026-03-10T10:13:56.531433+0000 mgr.y (mgr.14150) 225 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: cephadm 2026-03-10T10:13:56.531433+0000 mgr.y (mgr.14150) 225 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.538321+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.538321+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.544921+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.544921+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.546938+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.546938+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.547904+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.547904+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.548754+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.548754+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.549580+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' 
entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.549580+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: cephadm 2026-03-10T10:13:56.550231+0000 mgr.y (mgr.14150) 226 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: cephadm 2026-03-10T10:13:56.550231+0000 mgr.y (mgr.14150) 226 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: cephadm 2026-03-10T10:13:56.550938+0000 mgr.y (mgr.14150) 227 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: cephadm 2026-03-10T10:13:56.550938+0000 mgr.y (mgr.14150) 227 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.551403+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.551403+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.552279+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.552279+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.557796+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:56 vm04 bash[20742]: audit 2026-03-10T10:13:56.557796+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:55.965778+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.104:0/3312900559' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:55.965778+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 
192.168.123.104:0/3312900559' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: cephadm 2026-03-10T10:13:56.531433+0000 mgr.y (mgr.14150) 225 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: cephadm 2026-03-10T10:13:56.531433+0000 mgr.y (mgr.14150) 225 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.538321+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.538321+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.544921+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.544921+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.546938+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.546938+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.547904+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.547904+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.548754+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.548754+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.549580+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' 
entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.549580+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: cephadm 2026-03-10T10:13:56.550231+0000 mgr.y (mgr.14150) 226 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: cephadm 2026-03-10T10:13:56.550231+0000 mgr.y (mgr.14150) 226 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: cephadm 2026-03-10T10:13:56.550938+0000 mgr.y (mgr.14150) 227 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: cephadm 2026-03-10T10:13:56.550938+0000 mgr.y (mgr.14150) 227 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.551403+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.551403+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.552279+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.552279+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.557796+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:56 vm04 bash[28289]: audit 2026-03-10T10:13:56.557796+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:55.965778+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 192.168.123.104:0/3312900559' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:55.965778+0000 mon.c (mon.2) 19 : audit [DBG] from='client.? 
192.168.123.104:0/3312900559' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: cephadm 2026-03-10T10:13:56.531433+0000 mgr.y (mgr.14150) 225 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: cephadm 2026-03-10T10:13:56.531433+0000 mgr.y (mgr.14150) 225 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.538321+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.538321+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.544921+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.544921+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.546938+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.546938+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.547904+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.547904+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.548754+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.548754+0000 mon.a (mon.0) 616 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.549580+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' 
entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.549580+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: cephadm 2026-03-10T10:13:56.550231+0000 mgr.y (mgr.14150) 226 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: cephadm 2026-03-10T10:13:56.550231+0000 mgr.y (mgr.14150) 226 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: cephadm 2026-03-10T10:13:56.550938+0000 mgr.y (mgr.14150) 227 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: cephadm 2026-03-10T10:13:56.550938+0000 mgr.y (mgr.14150) 227 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.551403+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.551403+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.552279+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.552279+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.557796+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:57.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:56 vm07 bash[23367]: audit 2026-03-10T10:13:56.557796+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:13:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:57 vm04 bash[28289]: cluster 2026-03-10T10:13:57.645832+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:13:57 vm04 bash[28289]: cluster 2026-03-10T10:13:57.645832+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB 
used, 160 GiB / 160 GiB avail 2026-03-10T10:13:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:57 vm04 bash[20742]: cluster 2026-03-10T10:13:57.645832+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:13:57 vm04 bash[20742]: cluster 2026-03-10T10:13:57.645832+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:58.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:57 vm07 bash[23367]: cluster 2026-03-10T10:13:57.645832+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:58.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:13:57 vm07 bash[23367]: cluster 2026-03-10T10:13:57.645832+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:13:59.714 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:13:59.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 -- 192.168.123.104:0/2940752246 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2b3c077620 msgr2=0x7f2b3c077a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:13:59.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2940752246 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2b3c077620 0x7f2b3c077a00 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f2b30009a30 tx=0x7f2b3002f220 comp rx=0 tx=0).stop 2026-03-10T10:13:59.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 -- 192.168.123.104:0/2940752246 shutdown_connections 2026-03-10T10:13:59.861 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2940752246 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2b3c113b80 0x7f2b3c115f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2940752246 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2b3c077f40 0x7f2b3c113640 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2940752246 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2b3c077620 0x7f2b3c077a00 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 -- 192.168.123.104:0/2940752246 >> 192.168.123.104:0/2940752246 conn(0x7f2b3c1009e0 msgr2=0x7f2b3c102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:13:59.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 -- 192.168.123.104:0/2940752246 shutdown_connections 2026-03-10T10:13:59.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 -- 192.168.123.104:0/2940752246 wait 
complete. 2026-03-10T10:13:59.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 Processor -- start 2026-03-10T10:13:59.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 -- start start 2026-03-10T10:13:59.862 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.857+0000 7f2b4182e640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2b3c077620 0x7f2b3c1a1020 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:13:59.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3affd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2b3c077620 0x7f2b3c1a1020 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:13:59.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3affd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2b3c077620 0x7f2b3c1a1020 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:58542/0 (socket says 192.168.123.104:58542) 2026-03-10T10:13:59.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3affd640 1 -- 192.168.123.104:0/2244894275 learned_addr learned my addr 192.168.123.104:0/2244894275 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:13:59.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2b3c077f40 0x7f2b3c1a1560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:13:59.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2b3c113b80 0x7f2b3c1a58f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:13:59.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f2b3c118b00 con 0x7f2b3c077620 2026-03-10T10:13:59.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f2b3c118980 con 0x7f2b3c113b80 2026-03-10T10:13:59.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f2b3c118c80 con 0x7f2b3c077f40 2026-03-10T10:13:59.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3b7fe640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2b3c113b80 0x7f2b3c1a58f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:13:59.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3a7fc640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] 
conn(0x7f2b3c077f40 0x7f2b3c1a1560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3b7fe640 1 -- 192.168.123.104:0/2244894275 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2b3c077f40 msgr2=0x7f2b3c1a1560 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3b7fe640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2b3c077f40 0x7f2b3c1a1560 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3b7fe640 1 -- 192.168.123.104:0/2244894275 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2b3c077620 msgr2=0x7f2b3c1a1020 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3b7fe640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2b3c077620 0x7f2b3c1a1020 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3b7fe640 1 -- 192.168.123.104:0/2244894275 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2b3c1a6070 con 0x7f2b3c113b80 2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3affd640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2b3c077620 0x7f2b3c1a1020 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3a7fc640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2b3c077f40 0x7f2b3c1a1560 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
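One thing worth extracting from the journalctl blocks above before the messenger chatter resumes: cephadm's memory autotuner computed a per-OSD osd_memory_target of 119480422 bytes (the "113.9M" it logs) for vm07, first cleared its earlier per-daemon overrides with config rm for osd.4 through osd.7, and then could not apply the new value because it falls below the option's hard minimum of 939524096 bytes (896 MiB). On RAM-starved test VMs this warning recurs on every cephadm reconcile pass and is expected, not a test failure. A back-of-the-envelope sketch of the clamp, with illustrative names and a naive share-of-host-memory formula (cephadm's real calculation also subtracts memory used by other daemons on the host):

#!/usr/bin/env python3
"""Why the 113.9M autotune target is rejected (illustrative, not cephadm's code)."""

OSD_MEMORY_TARGET_MIN = 939_524_096   # hard floor on osd_memory_target, in bytes

def naive_target(host_mem_bytes: int, ratio: float, num_osds: int) -> int:
    """Naive per-OSD share: a ratio of host memory divided across the host's OSDs."""
    return int(host_mem_bytes * ratio / num_osds)

def apply(target: int) -> str:
    if target < OSD_MEMORY_TARGET_MIN:
        # The branch the [WRN] above reflects for vm07:
        return f"skip: value {target} is below minimum {OSD_MEMORY_TARGET_MIN}"
    # otherwise cephadm pushes the value via a host-masked config setting
    return f"set osd_memory_target={target} for the host"

print(apply(119_480_422))   # -> skip: value 119480422 is below minimum 939524096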
2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b3b7fe640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2b3c113b80 0x7f2b3c1a58f0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f2b2c00bdf0 tx=0x7f2b2c00bef0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4082c640 1 -- 192.168.123.104:0/2244894275 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2b2c00ca60 con 0x7f2b3c113b80 2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f2b3c1a6360 con 0x7f2b3c113b80 2026-03-10T10:13:59.864 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4082c640 1 -- 192.168.123.104:0/2244894275 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f2b2c010070 con 0x7f2b3c113b80 2026-03-10T10:13:59.865 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f2b3c1adba0 con 0x7f2b3c113b80 2026-03-10T10:13:59.865 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4082c640 1 -- 192.168.123.104:0/2244894275 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2b2c015490 con 0x7f2b3c113b80 2026-03-10T10:13:59.865 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2b08005180 con 0x7f2b3c113b80 2026-03-10T10:13:59.867 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4082c640 1 -- 192.168.123.104:0/2244894275 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f2b2c0040a0 con 0x7f2b3c113b80 2026-03-10T10:13:59.867 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.861+0000 7f2b4082c640 1 --2- 192.168.123.104:0/2244894275 >> v2:192.168.123.104:6800/632047608 conn(0x7f2b140775d0 0x7f2b14079a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:13:59.867 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.865+0000 7f2b3affd640 1 --2- 192.168.123.104:0/2244894275 >> v2:192.168.123.104:6800/632047608 conn(0x7f2b140775d0 0x7f2b14079a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:13:59.867 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.865+0000 7f2b4082c640 1 -- 192.168.123.104:0/2244894275 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(51..51 src has 1..51) ==== 4061+0+0 (secure 0 0 0) 0x7f2b2c098c20 con 0x7f2b3c113b80 2026-03-10T10:13:59.867 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.865+0000 7f2b3affd640 1 --2- 192.168.123.104:0/2244894275 >> v2:192.168.123.104:6800/632047608 conn(0x7f2b140775d0 0x7f2b14079a90 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f2b300097c0 tx=0x7f2b300057d0 comp rx=0 tx=0).ready 
entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:13:59.869 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.865+0000 7f2b4082c640 1 -- 192.168.123.104:0/2244894275 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2b2c062620 con 0x7f2b3c113b80 2026-03-10T10:13:59.957 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.953+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f2b08005740 con 0x7f2b3c113b80 2026-03-10T10:13:59.958 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4082c640 1 -- 192.168.123.104:0/2244894275 <== mon.1 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v51) ==== 74+0+11711 (secure 0 0 0) 0x7f2b2c0662d0 con 0x7f2b3c113b80 2026-03-10T10:13:59.959 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:13:59.959 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":51,"fsid":"e4c1c9d6-1c68-11f1-a9bd-116050875839","created":"2026-03-10T10:08:09.663961+0000","modified":"2026-03-10T10:13:51.747118+0000","last_up_change":"2026-03-10T10:13:49.733359+0000","last_in_change":"2026-03-10T10:13:34.199840+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T10:11:03.648152+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"8e28c717-cfeb-4d7d-8ed7-9136d22aff5c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":49,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6801","nonce":3431285778}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":3431285778}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":3431285778}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6803","nonce":3431285778}]},"public_addr":"192.168.123.104:6801/3431285778","cluster_addr":"192.168.123.104:6802/3431285778","heartbeat_back_addr":"192.168.123.104:6804/3431285778","heartbeat_front_addr":"192.168.123.104:6803/3431285778","state":["exists","up"]},{"osd":1,"uuid":"58ba2152-7e52-4560-a001-e96617e30de1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":32,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6805","nonce":2746381987}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":2746381987}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":2746381987}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6807","nonce":2746381987}]},"public_addr":"192.168.123.104:6805/2746381987","cluster_addr":"192.168.123.104:6806/2746381987","heartbeat_back_addr":"192.168.123.104:6808/2746381987","heartbeat_front_addr":"192.168.123.104:6807/2746381987","state":["exists","up"]},{"osd":2,"uuid":"17bb098c-8eff-4065-b511-7925247ef4a5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6809","nonce":1668196037}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":1668196037}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":1668196037}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6811","nonce":1668196037}]},"public_addr":"192.168.123.104:6809/1668196037","cluster_addr":"192.168.123.104:6810/1668196037","heartbeat_back_addr":"192.168.123.104:6812/1668196037","heartbeat_front_addr":"192.168.123.104:6811/1668196037","state":["exists","up"]},{"osd":3,"uuid":"f9a0e546-c40a-4fcc-aaca-082199e602f3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6813","nonce":2182249853}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":2182249853}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":2182249853}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6815","nonce":2182249853}]},"public_addr":"192.168.123.104:6813/2182249853","cluster_addr":"192.168.123.104:6814/2182249853","heartbeat_back_addr":"192.168.123.104:6816/2182249853","heartbeat_front_addr":"192.168.123.104:6815/2182249853","state":["exists","up"]},{"osd":4,"uuid":"456a615c-f863-4970-b4a1-9
0e964abfec7","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":2162643433}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6801","nonce":2162643433}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6803","nonce":2162643433}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":2162643433}]},"public_addr":"192.168.123.107:6800/2162643433","cluster_addr":"192.168.123.107:6801/2162643433","heartbeat_back_addr":"192.168.123.107:6803/2162643433","heartbeat_front_addr":"192.168.123.107:6802/2162643433","state":["exists","up"]},{"osd":5,"uuid":"c651c78e-882b-47c6-84ff-5a4b54b94531","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":37,"up_thru":38,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":1022745989}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6805","nonce":1022745989}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6807","nonce":1022745989}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":1022745989}]},"public_addr":"192.168.123.107:6804/1022745989","cluster_addr":"192.168.123.107:6805/1022745989","heartbeat_back_addr":"192.168.123.107:6807/1022745989","heartbeat_front_addr":"192.168.123.107:6806/1022745989","state":["exists","up"]},{"osd":6,"uuid":"69498577-1b7a-40bf-acac-5912f8ff7cfc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":44,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":719340092}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6809","nonce":719340092}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6811","nonce":719340092}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":719340092}]},"public_addr":"192.168.123.107:6808/719340092","cluster_addr":"192.168.123.107:6809/719340092","heartbeat_back_addr":"192.168.123.107:6811/719340092","heartbeat_front_addr":"192.168.123.107:6810/719340092","state":["exists","up"]},{"osd":7,"uuid":"a27eb8fa-556b-467c-bdba-9d899e37064a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":49,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":4141831103}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6813","nonce":4141831103}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6815","nonce":4141831103}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":4141831103}]},"public_addr":"192.168.123.107:6812/4141831103","cluster_addr":"192.168.123.107:6813/4141831103","heartbeat_back_addr":"192.168.123.107:6815/4141831103","heartbeat_front_addr":"192.168.123.107:6814/4141831103","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:09:53.659046+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2
026-03-10T10:10:26.943063+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:10:59.644519+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:11:34.491448+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:12:07.569310+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:12:41.508315+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:13:14.258577+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:13:48.010550+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/75332172":"2026-03-11T10:08:29.595143+0000","192.168.123.104:0/1397792734":"2026-03-11T10:08:29.595143+0000","192.168.123.104:0/4228752384":"2026-03-11T10:08:29.595143+0000","192.168.123.104:0/1555346406":"2026-03-11T10:08:20.274742+0000","192.168.123.104:0/4084406241":"2026-03-11T10:08:20.274742+0000","192.168.123.104:0/2315333744":"2026-03-11T10:08:20.274742+0000","192.168.123.104:6800/2318507328":"2026-03-11T10:08:29.595143+0000","192.168.123.104:6800/887024688":"2026-03-11T10:08:20.274742+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T10:13:59.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 >> v2:192.168.123.104:6800/632047608 conn(0x7f2b140775d0 msgr2=0x7f2b14079a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:13:59.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2244894275 >> v2:192.168.123.104:6800/632047608 conn(0x7f2b140775d0 0x7f2b14079a90 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f2b300097c0 tx=0x7f2b300057d0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2b3c113b80 msgr2=0x7f2b3c1a58f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:13:59.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2b3c113b80 0x7f2b3c1a58f0 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f2b2c00bdf0 tx=0x7f2b2c00bef0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.961 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 shutdown_connections 2026-03-10T10:13:59.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2244894275 >> v2:192.168.123.104:6800/632047608 conn(0x7f2b140775d0 0x7f2b14079a90 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f2b3c113b80 0x7f2b3c1a58f0 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f2b3c077f40 0x7f2b3c1a1560 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 --2- 192.168.123.104:0/2244894275 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2b3c077620 0x7f2b3c1a1020 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:13:59.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 >> 192.168.123.104:0/2244894275 conn(0x7f2b3c1009e0 msgr2=0x7f2b3c102dd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:13:59.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 shutdown_connections 2026-03-10T10:13:59.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:13:59.957+0000 7f2b4182e640 1 -- 192.168.123.104:0/2244894275 wait complete. 
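That second short-lived client (nonce 2244894275) was the cephadm task's ceph_manager running ceph osd dump --format=json through cephadm shell against fsid e4c1c9d6-1c68-11f1-a9bd-116050875839; the dump shows epoch 51 with all eight OSDs exists,up at weight 1, the single .mgr pool, and a handful of blocklist entries left over from earlier bootstrap clients. A minimal sketch of the same sanity check, assuming the JSON blob has been saved to a local file (the file name and assertions are illustrative):

#!/usr/bin/env python3
"""Sanity-check a captured `ceph osd dump --format=json` (illustrative)."""
import json

with open("osd_dump.json") as f:   # e.g. the JSON blob logged above
    dump = json.load(f)

osds = dump["osds"]
up = [o["osd"] for o in osds if o["up"] == 1]
ins = [o["osd"] for o in osds if o["in"] == 1]

print(f"epoch={dump['epoch']} osds={len(osds)} up={len(up)} in={len(ins)}")
assert len(up) == len(osds) and len(ins) == len(osds), "not all OSDs are up+in"

# Stale client blocklist entries (expiries roughly a day out) are harmless here.
for addr, expires in dump.get("blocklist", {}).items():
    print(f"blocklisted {addr} until {expires}")

Immediately after this the manager moves on to ceph osd pool get .mgr pg_num, which is the query dispatched at 10:14:00 below.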
2026-03-10T10:14:00.013 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T10:11:03.648152+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '22', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T10:14:00.014 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd pool get .mgr pg_num 2026-03-10T10:14:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:00 vm04 bash[28289]: cluster 2026-03-10T10:13:59.646135+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:00 vm04 bash[28289]: cluster 2026-03-10T10:13:59.646135+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:00 vm04 bash[28289]: audit 2026-03-10T10:13:59.958454+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.104:0/2244894275' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:14:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:00 vm04 bash[28289]: audit 2026-03-10T10:13:59.958454+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 
192.168.123.104:0/2244894275' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:14:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:00 vm04 bash[20742]: cluster 2026-03-10T10:13:59.646135+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:00 vm04 bash[20742]: cluster 2026-03-10T10:13:59.646135+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:00 vm04 bash[20742]: audit 2026-03-10T10:13:59.958454+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.104:0/2244894275' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:14:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:00 vm04 bash[20742]: audit 2026-03-10T10:13:59.958454+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.104:0/2244894275' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:14:01.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:00 vm07 bash[23367]: cluster 2026-03-10T10:13:59.646135+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:01.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:00 vm07 bash[23367]: cluster 2026-03-10T10:13:59.646135+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:01.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:00 vm07 bash[23367]: audit 2026-03-10T10:13:59.958454+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.104:0/2244894275' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:14:01.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:00 vm07 bash[23367]: audit 2026-03-10T10:13:59.958454+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 
192.168.123.104:0/2244894275' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:14:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:01 vm04 bash[28289]: cluster 2026-03-10T10:14:01.646438+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:01 vm04 bash[28289]: cluster 2026-03-10T10:14:01.646438+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:01 vm04 bash[20742]: cluster 2026-03-10T10:14:01.646438+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:01 vm04 bash[20742]: cluster 2026-03-10T10:14:01.646438+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:02.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:01 vm07 bash[23367]: cluster 2026-03-10T10:14:01.646438+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:02.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:01 vm07 bash[23367]: cluster 2026-03-10T10:14:01.646438+0000 mgr.y (mgr.14150) 230 : cluster [DBG] pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:03.733 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:14:04.477 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 -- 192.168.123.104:0/2166982195 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f02241017b0 msgr2=0x7f0224105db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:04.477 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 --2- 192.168.123.104:0/2166982195 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f02241017b0 0x7f0224105db0 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f02140099b0 tx=0x7f021402f1b0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.477 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 -- 192.168.123.104:0/2166982195 shutdown_connections 2026-03-10T10:14:04.477 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 --2- 192.168.123.104:0/2166982195 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f022410bb10 0x7f022410df00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.477 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 --2- 192.168.123.104:0/2166982195 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f02241017b0 0x7f0224105db0 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.477 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 --2- 192.168.123.104:0/2166982195 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0224069a50 0x7f0224105870 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto 
rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.477 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 -- 192.168.123.104:0/2166982195 >> 192.168.123.104:0/2166982195 conn(0x7f02240fc910 msgr2=0x7f02240fed30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:04.477 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 -- 192.168.123.104:0/2166982195 shutdown_connections 2026-03-10T10:14:04.477 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 -- 192.168.123.104:0/2166982195 wait complete. 2026-03-10T10:14:04.477 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 Processor -- start 2026-03-10T10:14:04.478 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 -- start start 2026-03-10T10:14:04.478 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0224069a50 0x7f02241a2770 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:04.478 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f02241017b0 0x7f02241a2cb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:04.478 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f022410bb10 0x7f022419c930 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:04.478 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f0224114510 con 0x7f02241017b0 2026-03-10T10:14:04.478 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f0224114390 con 0x7f0224069a50 2026-03-10T10:14:04.478 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f022a5a2640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f0224114690 con 0x7f022410bb10 2026-03-10T10:14:04.478 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f0228b18640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f022410bb10 0x7f022419c930 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:04.478 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f0223fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0224069a50 0x7f02241a2770 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:04.478 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f0223fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0224069a50 0x7f02241a2770 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.104:45850/0 (socket says 192.168.123.104:45850) 2026-03-10T10:14:04.478 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.473+0000 7f0223fff640 1 -- 192.168.123.104:0/1043621443 learned_addr learned my addr 192.168.123.104:0/1043621443 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f02237fe640 1 --2- 192.168.123.104:0/1043621443 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f02241017b0 0x7f02241a2cb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f0223fff640 1 -- 192.168.123.104:0/1043621443 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f022410bb10 msgr2=0x7f022419c930 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f0223fff640 1 --2- 192.168.123.104:0/1043621443 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f022410bb10 0x7f022419c930 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f0223fff640 1 -- 192.168.123.104:0/1043621443 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f02241017b0 msgr2=0x7f02241a2cb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f0223fff640 1 --2- 192.168.123.104:0/1043621443 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f02241017b0 0x7f02241a2cb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f0223fff640 1 -- 192.168.123.104:0/1043621443 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f022419d100 con 0x7f0224069a50 2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f02237fe640 1 --2- 192.168.123.104:0/1043621443 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f02241017b0 0x7f02241a2cb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
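Each one-shot `ceph` CLI call in this log performs the full client bootstrap traced here: connect to the monitors, exchange banner/hello, learn its own address, subscribe to config/monmap (then mgrmap/osdmap), issue a single command, and mark every connection down. A librados sketch of the same round trip for the `osd pool get` that follows; the conf path is an assumption, and `mon_command()` is the binding call this trace corresponds to:

    # Sketch: the librados equivalent of the one-shot CLI round trip traced
    # above and below. Conf path is assumed; the command matches this log.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()  # banner/hello/auth plus monmap/config subscriptions
    cmd = json.dumps({"prefix": "osd pool get", "pool": ".mgr",
                      "var": "pg_num", "format": "json"})
    ret, out, errs = cluster.mon_command(cmd, b"")
    print(ret, out)    # the run below reports pg_num: 1 for .mgr
    cluster.shutdown() # produces the mark_down/shutdown_connections lines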
2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f0223fff640 1 --2- 192.168.123.104:0/1043621443 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0224069a50 0x7f02241a2770 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f0210009870 tx=0x7f0210009d40 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f02217fa640 1 -- 192.168.123.104:0/1043621443 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f021001a070 con 0x7f0224069a50 2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f022a5a2640 1 -- 192.168.123.104:0/1043621443 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f022419d3f0 con 0x7f0224069a50 2026-03-10T10:14:04.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f022a5a2640 1 -- 192.168.123.104:0/1043621443 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f022410ecc0 con 0x7f0224069a50 2026-03-10T10:14:04.480 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f02217fa640 1 -- 192.168.123.104:0/1043621443 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f021000ed20 con 0x7f0224069a50 2026-03-10T10:14:04.480 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f02217fa640 1 -- 192.168.123.104:0/1043621443 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0210002c60 con 0x7f0224069a50 2026-03-10T10:14:04.480 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f022a5a2640 1 -- 192.168.123.104:0/1043621443 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f01e8005180 con 0x7f0224069a50 2026-03-10T10:14:04.481 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f02217fa640 1 -- 192.168.123.104:0/1043621443 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f02100043d0 con 0x7f0224069a50 2026-03-10T10:14:04.481 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f02217fa640 1 --2- 192.168.123.104:0/1043621443 >> v2:192.168.123.104:6800/632047608 conn(0x7f01f40775d0 0x7f01f4079a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:04.481 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f02217fa640 1 -- 192.168.123.104:0/1043621443 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(51..51 src has 1..51) ==== 4061+0+0 (secure 0 0 0) 0x7f0210098c10 con 0x7f0224069a50 2026-03-10T10:14:04.481 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f02237fe640 1 --2- 192.168.123.104:0/1043621443 >> v2:192.168.123.104:6800/632047608 conn(0x7f01f40775d0 0x7f01f4079a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:04.482 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.477+0000 7f02237fe640 1 --2- 192.168.123.104:0/1043621443 >> v2:192.168.123.104:6800/632047608 conn(0x7f01f40775d0 0x7f01f4079a90 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f02140096f0 tx=0x7f02140023d0 comp rx=0 tx=0).ready 
entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:04.483 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.481+0000 7f02217fa640 1 -- 192.168.123.104:0/1043621443 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f02100662c0 con 0x7f0224069a50 2026-03-10T10:14:04.570 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.565+0000 7f022a5a2640 1 -- 192.168.123.104:0/1043621443 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"} v 0) -- 0x7f01e8005740 con 0x7f0224069a50 2026-03-10T10:14:04.571 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f02217fa640 1 -- 192.168.123.104:0/1043621443 <== mon.1 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]=0 v51) ==== 93+0+10 (secure 0 0 0) 0x7f021006b170 con 0x7f0224069a50 2026-03-10T10:14:04.571 INFO:teuthology.orchestra.run.vm04.stdout:pg_num: 1 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 -- 192.168.123.104:0/1043621443 >> v2:192.168.123.104:6800/632047608 conn(0x7f01f40775d0 msgr2=0x7f01f4079a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 --2- 192.168.123.104:0/1043621443 >> v2:192.168.123.104:6800/632047608 conn(0x7f01f40775d0 0x7f01f4079a90 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f02140096f0 tx=0x7f02140023d0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 -- 192.168.123.104:0/1043621443 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0224069a50 msgr2=0x7f02241a2770 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 --2- 192.168.123.104:0/1043621443 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0224069a50 0x7f02241a2770 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7f0210009870 tx=0x7f0210009d40 comp rx=0 tx=0).stop 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 -- 192.168.123.104:0/1043621443 shutdown_connections 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 --2- 192.168.123.104:0/1043621443 >> v2:192.168.123.104:6800/632047608 conn(0x7f01f40775d0 0x7f01f4079a90 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 --2- 192.168.123.104:0/1043621443 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f022410bb10 0x7f022419c930 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 --2- 192.168.123.104:0/1043621443 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f02241017b0 0x7f02241a2cb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 --2- 
192.168.123.104:0/1043621443 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0224069a50 0x7f02241a2770 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 -- 192.168.123.104:0/1043621443 >> 192.168.123.104:0/1043621443 conn(0x7f02240fc910 msgr2=0x7f022410db90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 -- 192.168.123.104:0/1043621443 shutdown_connections 2026-03-10T10:14:04.573 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:04.569+0000 7f022a5a2640 1 -- 192.168.123.104:0/1043621443 wait complete. 2026-03-10T10:14:04.620 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:04 vm04 bash[20742]: cluster 2026-03-10T10:14:03.646749+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:04.620 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:04 vm04 bash[20742]: cluster 2026-03-10T10:14:03.646749+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:04.620 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:04 vm04 bash[28289]: cluster 2026-03-10T10:14:03.646749+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:04.620 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:04 vm04 bash[28289]: cluster 2026-03-10T10:14:03.646749+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:04.620 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm04 2026-03-10T10:14:04.621 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch apply rgw foo.a --placement '1;vm04=foo.a' 2026-03-10T10:14:04.627 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:04 vm07 bash[23367]: cluster 2026-03-10T10:14:03.646749+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:04.627 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:04 vm07 bash[23367]: cluster 2026-03-10T10:14:03.646749+0000 mgr.y (mgr.14150) 231 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:05 vm04 bash[28289]: audit 2026-03-10T10:14:04.571283+0000 mon.b (mon.1) 24 : audit [DBG] from='client.? 192.168.123.104:0/1043621443' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T10:14:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:05 vm04 bash[28289]: audit 2026-03-10T10:14:04.571283+0000 mon.b (mon.1) 24 : audit [DBG] from='client.? 
192.168.123.104:0/1043621443' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T10:14:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:05 vm04 bash[20742]: audit 2026-03-10T10:14:04.571283+0000 mon.b (mon.1) 24 : audit [DBG] from='client.? 192.168.123.104:0/1043621443' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T10:14:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:05 vm04 bash[20742]: audit 2026-03-10T10:14:04.571283+0000 mon.b (mon.1) 24 : audit [DBG] from='client.? 192.168.123.104:0/1043621443' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T10:14:05.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:05 vm07 bash[23367]: audit 2026-03-10T10:14:04.571283+0000 mon.b (mon.1) 24 : audit [DBG] from='client.? 192.168.123.104:0/1043621443' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T10:14:05.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:05 vm07 bash[23367]: audit 2026-03-10T10:14:04.571283+0000 mon.b (mon.1) 24 : audit [DBG] from='client.? 192.168.123.104:0/1043621443' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T10:14:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:06 vm04 bash[28289]: cluster 2026-03-10T10:14:05.647002+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:06 vm04 bash[28289]: cluster 2026-03-10T10:14:05.647002+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:06 vm04 bash[20742]: cluster 2026-03-10T10:14:05.647002+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:06 vm04 bash[20742]: cluster 2026-03-10T10:14:05.647002+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:06.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:06 vm07 bash[23367]: cluster 2026-03-10T10:14:05.647002+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:06.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:06 vm07 bash[23367]: cluster 2026-03-10T10:14:05.647002+0000 mgr.y (mgr.14150) 232 : cluster [DBG] pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:09.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:08 vm07 bash[23367]: cluster 2026-03-10T10:14:07.647252+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:09.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:08 vm07 bash[23367]: cluster 2026-03-10T10:14:07.647252+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:08 vm04 
bash[28289]: cluster 2026-03-10T10:14:07.647252+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:08 vm04 bash[28289]: cluster 2026-03-10T10:14:07.647252+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:08 vm04 bash[20742]: cluster 2026-03-10T10:14:07.647252+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:08 vm04 bash[20742]: cluster 2026-03-10T10:14:07.647252+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:09.242 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config 2026-03-10T10:14:09.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- 192.168.123.107:0/1451966182 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21381023d0 msgr2=0x7f2138102850 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:09.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 --2- 192.168.123.107:0/1451966182 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21381023d0 0x7f2138102850 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7f2128009a80 tx=0x7f212802f290 comp rx=0 tx=0).stop 2026-03-10T10:14:09.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- 192.168.123.107:0/1451966182 shutdown_connections 2026-03-10T10:14:09.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 --2- 192.168.123.107:0/1451966182 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21381070d0 0x7f21381094c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:09.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 --2- 192.168.123.107:0/1451966182 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21381023d0 0x7f2138102850 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:09.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 --2- 192.168.123.107:0/1451966182 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2138069a50 0x7f2138101e90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:09.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- 192.168.123.107:0/1451966182 >> 192.168.123.107:0/1451966182 conn(0x7f21380fc420 msgr2=0x7f21380fe860 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:09.598 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- 192.168.123.107:0/1451966182 shutdown_connections 2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- 192.168.123.107:0/1451966182 wait complete. 
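The `--placement '1;vm04=foo.a'` argument passed to `ceph orch apply rgw` above is cephadm's compact placement syntax: an optional count plus `host[=daemon-name]` entries separated by `;`. A simplified parser sketch (illustrative only, not cephadm's actual PlacementSpec code, which also handles labels and networks):

    # Sketch: split a cephadm-style placement string such as "1;vm04=foo.a"
    # into a count plus host -> explicit daemon-name pairs. Simplified
    # illustration of the syntax, not cephadm's real parser.
    def parse_placement(spec):
        count, hosts = None, {}
        for part in spec.split(";"):
            if part.isdigit():
                count = int(part)
            else:
                host, _, name = part.partition("=")
                hosts[host] = name or None
        return count, hosts

    assert parse_placement("1;vm04=foo.a") == (1, {"vm04": "foo.a"})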
2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 Processor -- start 2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- start start 2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2138069a50 0x7f213819c600 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21381023d0 0x7f213819cb40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21381070d0 0x7f21381a3bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f213810dca0 con 0x7f2138069a50 2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f213810db20 con 0x7f21381023d0 2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f213810de20 con 0x7f21381070d0 2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213cba3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21381023d0 0x7f213819cb40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:09.599 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213dba5640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21381070d0 0x7f21381a3bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213dba5640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21381070d0 0x7f21381a3bc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.107:50436/0 (socket says 192.168.123.107:50436) 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213cba3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21381023d0 0x7f213819cb40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:48422/0 (socket says 192.168.123.107:48422) 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213dba5640 1 -- 192.168.123.107:0/3172187694 learned_addr learned my addr 192.168.123.107:0/3172187694 (peer_addr_for_me 
v2:192.168.123.107:0/0) 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213d3a4640 1 --2- 192.168.123.107:0/3172187694 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2138069a50 0x7f213819c600 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213dba5640 1 -- 192.168.123.107:0/3172187694 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21381023d0 msgr2=0x7f213819cb40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213dba5640 1 --2- 192.168.123.107:0/3172187694 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21381023d0 0x7f213819cb40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213dba5640 1 -- 192.168.123.107:0/3172187694 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2138069a50 msgr2=0x7f213819c600 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213dba5640 1 --2- 192.168.123.107:0/3172187694 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2138069a50 0x7f213819c600 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213dba5640 1 -- 192.168.123.107:0/3172187694 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f21381a42c0 con 0x7f21381070d0 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213cba3640 1 --2- 192.168.123.107:0/3172187694 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21381023d0 0x7f213819cb40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213d3a4640 1 --2- 192.168.123.107:0/3172187694 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2138069a50 0x7f213819c600 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
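Unlike `osd pool get`, which a monitor answers directly, `orch apply rgw` is forwarded to the active mgr: the trace below shows the client receiving the mgrmap, opening a connection to mgr.14150, and sending the command as a `mgr_command` with target `mon-mgr`. A librados sketch of that dispatch, with the conf path assumed and the spec values taken from this run:

    # Sketch: issue the same "orch apply rgw" as a mgr command via librados,
    # mirroring the mon-mgr routing traced below. Conf path is assumed.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "orch apply rgw", "svc_id": "foo.a",
                      "placement": "1;vm04=foo.a"})
    ret, out, errs = cluster.mgr_command(cmd, b"")
    print(out)  # this run returned "Scheduled rgw.foo.a update..."
    cluster.shutdown()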
2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213dba5640 1 --2- 192.168.123.107:0/3172187694 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21381070d0 0x7f21381a3bc0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f21340047e0 tx=0x7f213400d4a0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:09.600 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f21267fc640 1 -- 192.168.123.107:0/3172187694 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f21340090d0 con 0x7f21381070d0 2026-03-10T10:14:09.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- 192.168.123.107:0/3172187694 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f21381a4550 con 0x7f21381070d0 2026-03-10T10:14:09.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.593+0000 7f213f62f640 1 -- 192.168.123.107:0/3172187694 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f21381a4b30 con 0x7f21381070d0 2026-03-10T10:14:09.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.597+0000 7f21267fc640 1 -- 192.168.123.107:0/3172187694 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f2134009270 con 0x7f21381070d0 2026-03-10T10:14:09.601 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.597+0000 7f21267fc640 1 -- 192.168.123.107:0/3172187694 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2134013650 con 0x7f21381070d0 2026-03-10T10:14:09.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.597+0000 7f213f62f640 1 -- 192.168.123.107:0/3172187694 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2100005180 con 0x7f21381070d0 2026-03-10T10:14:09.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.597+0000 7f21267fc640 1 -- 192.168.123.107:0/3172187694 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7f2134012070 con 0x7f21381070d0 2026-03-10T10:14:09.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.597+0000 7f21267fc640 1 --2- 192.168.123.107:0/3172187694 >> v2:192.168.123.104:6800/632047608 conn(0x7f210c077600 0x7f210c079ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:09.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.597+0000 7f21267fc640 1 -- 192.168.123.107:0/3172187694 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(51..51 src has 1..51) ==== 4061+0+0 (secure 0 0 0) 0x7f2134099610 con 0x7f21381070d0 2026-03-10T10:14:09.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.597+0000 7f213d3a4640 1 --2- 192.168.123.107:0/3172187694 >> v2:192.168.123.104:6800/632047608 conn(0x7f210c077600 0x7f210c079ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:09.603 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.597+0000 7f213d3a4640 1 --2- 192.168.123.107:0/3172187694 >> v2:192.168.123.104:6800/632047608 conn(0x7f210c077600 0x7f210c079ac0 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f2138101cf0 tx=0x7f212c008040 comp rx=0 tx=0).ready 
entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:09.605 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.601+0000 7f21267fc640 1 -- 192.168.123.107:0/3172187694 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2134010070 con 0x7f21381070d0 2026-03-10T10:14:09.699 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.693+0000 7f213f62f640 1 -- 192.168.123.107:0/3172187694 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm04=foo.a", "target": ["mon-mgr", ""]}) -- 0x7f2100002bf0 con 0x7f210c077600 2026-03-10T10:14:09.718 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f21267fc640 1 -- 192.168.123.107:0/3172187694 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+30 (secure 0 0 0) 0x7f2100002bf0 con 0x7f210c077600 2026-03-10T10:14:09.718 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled rgw.foo.a update... 2026-03-10T10:14:09.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 -- 192.168.123.107:0/3172187694 >> v2:192.168.123.104:6800/632047608 conn(0x7f210c077600 msgr2=0x7f210c079ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:09.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 --2- 192.168.123.107:0/3172187694 >> v2:192.168.123.104:6800/632047608 conn(0x7f210c077600 0x7f210c079ac0 secure :-1 s=READY pgs=92 cs=0 l=1 rev1=1 crypto rx=0x7f2138101cf0 tx=0x7f212c008040 comp rx=0 tx=0).stop 2026-03-10T10:14:09.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 -- 192.168.123.107:0/3172187694 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21381070d0 msgr2=0x7f21381a3bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:09.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 --2- 192.168.123.107:0/3172187694 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21381070d0 0x7f21381a3bc0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f21340047e0 tx=0x7f213400d4a0 comp rx=0 tx=0).stop 2026-03-10T10:14:09.720 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 -- 192.168.123.107:0/3172187694 shutdown_connections 2026-03-10T10:14:09.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 --2- 192.168.123.107:0/3172187694 >> v2:192.168.123.104:6800/632047608 conn(0x7f210c077600 0x7f210c079ac0 unknown :-1 s=CLOSED pgs=92 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:09.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 --2- 192.168.123.107:0/3172187694 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21381070d0 0x7f21381a3bc0 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:09.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 --2- 192.168.123.107:0/3172187694 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21381023d0 0x7f213819cb40 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:09.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 
--2- 192.168.123.107:0/3172187694 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f2138069a50 0x7f213819c600 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:09.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 -- 192.168.123.107:0/3172187694 >> 192.168.123.107:0/3172187694 conn(0x7f21380fc420 msgr2=0x7f2138108bc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:09.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 -- 192.168.123.107:0/3172187694 shutdown_connections 2026-03-10T10:14:09.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:09.717+0000 7f213f62f640 1 -- 192.168.123.107:0/3172187694 wait complete. 2026-03-10T10:14:09.894 DEBUG:teuthology.orchestra.run.vm04:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@rgw.foo.a.service 2026-03-10T10:14:09.895 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm07 2026-03-10T10:14:09.895 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd pool create datapool 3 3 replicated 2026-03-10T10:14:10.203 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.666 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.666 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.666 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.666 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.666 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.666 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.666 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: cluster 2026-03-10T10:14:09.647523+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: cluster 2026-03-10T10:14:09.647523+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.700928+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm04=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.700928+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm04=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: cephadm 2026-03-10T10:14:09.701798+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: cephadm 2026-03-10T10:14:09.701798+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.719132+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 
2026-03-10T10:14:09.719132+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.730090+0000 mon.a (mon.0) 622 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.730090+0000 mon.a (mon.0) 622 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.731398+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.731398+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.732156+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.732156+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.772685+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.772685+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.774220+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.774220+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.839565+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 
10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.839565+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.891929+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.891929+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.937 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.894304+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 bash[28289]: audit 2026-03-10T10:14:09.894304+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: Started Ceph rgw.foo.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: cluster 2026-03-10T10:14:09.647523+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: cluster 2026-03-10T10:14:09.647523+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.700928+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm04=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.700928+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm04=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: cephadm 2026-03-10T10:14:09.701798+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: cephadm 2026-03-10T10:14:09.701798+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.719132+0000 mon.a (mon.0) 621 : audit [INF] 
from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.719132+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.730090+0000 mon.a (mon.0) 622 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.730090+0000 mon.a (mon.0) 622 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.731398+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.731398+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.732156+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.732156+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.772685+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.772685+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.774220+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.774220+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.839565+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", 
"caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.839565+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.891929+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.891929+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.894304+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 bash[20742]: audit 2026-03-10T10:14:09.894304+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:10.938 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:10 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: cluster 2026-03-10T10:14:09.647523+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: cluster 2026-03-10T10:14:09.647523+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.700928+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm04=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.700928+0000 mgr.y (mgr.14150) 235 : audit [DBG] from='client.24293 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm04=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: cephadm 2026-03-10T10:14:09.701798+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: cephadm 2026-03-10T10:14:09.701798+0000 mgr.y (mgr.14150) 236 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.719132+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.719132+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.730090+0000 mon.a (mon.0) 622 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.730090+0000 mon.a (mon.0) 622 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.731398+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.731398+0000 mon.a (mon.0) 623 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.732156+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.732156+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.772685+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.772685+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.774220+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.774220+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.839565+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.839565+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.891929+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.891929+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.894304+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:11.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:10 vm07 bash[23367]: audit 2026-03-10T10:14:09.894304+0000 mon.a (mon.0) 629 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: cephadm 2026-03-10T10:14:09.894887+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon 
rgw.foo.a on vm04 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: cephadm 2026-03-10T10:14:09.894887+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm04 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.884004+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.884004+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.892850+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.892850+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.900504+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.900504+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.905682+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.905682+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.911592+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.911592+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.922712+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:12.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:11 vm07 bash[23367]: audit 2026-03-10T10:14:10.922712+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: cephadm 2026-03-10T10:14:09.894887+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm04 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: cephadm 2026-03-10T10:14:09.894887+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm04 2026-03-10T10:14:12.203 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.884004+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.884004+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.892850+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.892850+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.900504+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.900504+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.905682+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.905682+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.911592+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.911592+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.922712+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:11 vm04 bash[28289]: audit 2026-03-10T10:14:10.922712+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: cephadm 2026-03-10T10:14:09.894887+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm04 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: cephadm 2026-03-10T10:14:09.894887+0000 mgr.y (mgr.14150) 237 : cephadm [INF] Deploying daemon rgw.foo.a on vm04 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.884004+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.884004+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.892850+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.892850+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.900504+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.900504+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.905682+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.905682+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.911592+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.911592+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.922712+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:11 vm04 bash[20742]: audit 2026-03-10T10:14:10.922712+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:12 vm07 bash[23367]: cephadm 2026-03-10T10:14:10.901242+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:12 vm07 bash[23367]: cephadm 2026-03-10T10:14:10.901242+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:12 vm07 bash[23367]: cluster 2026-03-10T10:14:11.647798+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:12 vm07 bash[23367]: cluster 2026-03-10T10:14:11.647798+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB 
avail 2026-03-10T10:14:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:12 vm07 bash[23367]: cluster 2026-03-10T10:14:11.922777+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-10T10:14:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:12 vm07 bash[23367]: cluster 2026-03-10T10:14:11.922777+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-10T10:14:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:12 vm07 bash[23367]: audit 2026-03-10T10:14:11.929302+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.104:0/3097158240' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:12 vm07 bash[23367]: audit 2026-03-10T10:14:11.929302+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.104:0/3097158240' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:12 vm07 bash[23367]: audit 2026-03-10T10:14:11.929546+0000 mon.a (mon.0) 637 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:12 vm07 bash[23367]: audit 2026-03-10T10:14:11.929546+0000 mon.a (mon.0) 637 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:12 vm04 bash[28289]: cephadm 2026-03-10T10:14:10.901242+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:12 vm04 bash[28289]: cephadm 2026-03-10T10:14:10.901242+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:12 vm04 bash[28289]: cluster 2026-03-10T10:14:11.647798+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:12 vm04 bash[28289]: cluster 2026-03-10T10:14:11.647798+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:12 vm04 bash[28289]: cluster 2026-03-10T10:14:11.922777+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-10T10:14:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:12 vm04 bash[28289]: cluster 2026-03-10T10:14:11.922777+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-10T10:14:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:12 vm04 bash[28289]: audit 2026-03-10T10:14:11.929302+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 
192.168.123.104:0/3097158240' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:12 vm04 bash[28289]: audit 2026-03-10T10:14:11.929302+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.104:0/3097158240' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:12 vm04 bash[28289]: audit 2026-03-10T10:14:11.929546+0000 mon.a (mon.0) 637 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:12 vm04 bash[28289]: audit 2026-03-10T10:14:11.929546+0000 mon.a (mon.0) 637 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:12 vm04 bash[20742]: cephadm 2026-03-10T10:14:10.901242+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:12 vm04 bash[20742]: cephadm 2026-03-10T10:14:10.901242+0000 mgr.y (mgr.14150) 238 : cephadm [INF] Saving service rgw.foo.a spec with placement vm04=foo.a;count:1 2026-03-10T10:14:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:12 vm04 bash[20742]: cluster 2026-03-10T10:14:11.647798+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:12 vm04 bash[20742]: cluster 2026-03-10T10:14:11.647798+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:12 vm04 bash[20742]: cluster 2026-03-10T10:14:11.922777+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-10T10:14:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:12 vm04 bash[20742]: cluster 2026-03-10T10:14:11.922777+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-10T10:14:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:12 vm04 bash[20742]: audit 2026-03-10T10:14:11.929302+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.104:0/3097158240' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:12 vm04 bash[20742]: audit 2026-03-10T10:14:11.929302+0000 mon.c (mon.2) 20 : audit [INF] from='client.? 192.168.123.104:0/3097158240' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:12 vm04 bash[20742]: audit 2026-03-10T10:14:11.929546+0000 mon.a (mon.0) 637 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:12 vm04 bash[20742]: audit 2026-03-10T10:14:11.929546+0000 mon.a (mon.0) 637 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:13 vm04 bash[28289]: audit 2026-03-10T10:14:12.919985+0000 mon.a (mon.0) 638 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:13 vm04 bash[28289]: audit 2026-03-10T10:14:12.919985+0000 mon.a (mon.0) 638 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:13 vm04 bash[28289]: cluster 2026-03-10T10:14:12.925029+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:13 vm04 bash[28289]: cluster 2026-03-10T10:14:12.925029+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:13 vm04 bash[28289]: cluster 2026-03-10T10:14:13.648065+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:13 vm04 bash[28289]: cluster 2026-03-10T10:14:13.648065+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:13 vm04 bash[20742]: audit 2026-03-10T10:14:12.919985+0000 mon.a (mon.0) 638 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:13 vm04 bash[20742]: audit 2026-03-10T10:14:12.919985+0000 mon.a (mon.0) 638 : audit [INF] from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:13 vm04 bash[20742]: cluster 2026-03-10T10:14:12.925029+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:13 vm04 bash[20742]: cluster 2026-03-10T10:14:12.925029+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:13 vm04 bash[20742]: cluster 2026-03-10T10:14:13.648065+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:13 vm04 bash[20742]: cluster 2026-03-10T10:14:13.648065+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:14.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:13 vm07 bash[23367]: audit 2026-03-10T10:14:12.919985+0000 mon.a (mon.0) 638 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T10:14:14.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:13 vm07 bash[23367]: audit 2026-03-10T10:14:12.919985+0000 mon.a (mon.0) 638 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T10:14:14.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:13 vm07 bash[23367]: cluster 2026-03-10T10:14:12.925029+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-10T10:14:14.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:13 vm07 bash[23367]: cluster 2026-03-10T10:14:12.925029+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-10T10:14:14.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:13 vm07 bash[23367]: cluster 2026-03-10T10:14:13.648065+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:14.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:13 vm07 bash[23367]: cluster 2026-03-10T10:14:13.648065+0000 mgr.y (mgr.14150) 240 : cluster [DBG] pgmap v215: 33 pgs: 5 creating+peering, 27 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:14.564 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config 2026-03-10T10:14:14.714 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- 192.168.123.107:0/2666848039 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd830107ee0 msgr2=0x7fd83010a2d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:14.714 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 --2- 192.168.123.107:0/2666848039 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd830107ee0 0x7fd83010a2d0 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7fd824009a30 tx=0x7fd82402f240 comp rx=0 tx=0).stop 2026-03-10T10:14:14.715 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- 192.168.123.107:0/2666848039 shutdown_connections 2026-03-10T10:14:14.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 --2- 192.168.123.107:0/2666848039 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd83010a8f0 0x7fd83010cd80 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 --2- 192.168.123.107:0/2666848039 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd830107ee0 0x7fd83010a2d0 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 --2- 192.168.123.107:0/2666848039 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd83006b750 0x7fd8301079a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- 192.168.123.107:0/2666848039 >> 192.168.123.107:0/2666848039 conn(0x7fd8300fd030 msgr2=0x7fd8300ff450 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:14.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- 192.168.123.107:0/2666848039 shutdown_connections 2026-03-10T10:14:14.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- 192.168.123.107:0/2666848039 wait complete. 2026-03-10T10:14:14.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 Processor -- start 2026-03-10T10:14:14.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- start start 2026-03-10T10:14:14.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd83006b750 0x7fd83019c4e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:14.715 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd830107ee0 0x7fd83019ca20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd83010a8f0 0x7fd8301a3aa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fd83010fe20 con 0x7fd83006b750 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fd83010fca0 con 0x7fd83010a8f0 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fd83010ffa0 con 0x7fd830107ee0 2026-03-10T10:14:14.716 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82dd74640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd830107ee0 0x7fd83019ca20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82dd74640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd830107ee0 0x7fd83019ca20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.107:50462/0 (socket says 192.168.123.107:50462) 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82dd74640 1 -- 192.168.123.107:0/3048799087 learned_addr learned my addr 192.168.123.107:0/3048799087 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82ed76640 1 --2- 192.168.123.107:0/3048799087 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd83010a8f0 0x7fd8301a3aa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82dd74640 1 -- 192.168.123.107:0/3048799087 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd83010a8f0 msgr2=0x7fd8301a3aa0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82dd74640 1 --2- 192.168.123.107:0/3048799087 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd83010a8f0 0x7fd8301a3aa0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82dd74640 1 -- 192.168.123.107:0/3048799087 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd83006b750 msgr2=0x7fd83019c4e0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82e575640 1 --2- 192.168.123.107:0/3048799087 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd83006b750 0x7fd83019c4e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82dd74640 1 --2- 192.168.123.107:0/3048799087 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd83006b750 0x7fd83019c4e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82dd74640 1 -- 192.168.123.107:0/3048799087 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd8301a41a0 con 0x7fd830107ee0 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82dd74640 1 --2- 192.168.123.107:0/3048799087 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd830107ee0 0x7fd83019ca20 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fd82402f750 
tx=0x7fd82402fcb0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:14.716 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd8177fe640 1 -- 192.168.123.107:0/3048799087 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd82402fe20 con 0x7fd830107ee0 2026-03-10T10:14:14.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd8177fe640 1 -- 192.168.123.107:0/3048799087 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fd824038770 con 0x7fd830107ee0 2026-03-10T10:14:14.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- 192.168.123.107:0/3048799087 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd8301a4430 con 0x7fd830107ee0 2026-03-10T10:14:14.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd82f577640 1 -- 192.168.123.107:0/3048799087 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fd8301a49e0 con 0x7fd830107ee0 2026-03-10T10:14:14.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.709+0000 7fd8177fe640 1 -- 192.168.123.107:0/3048799087 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd8240417a0 con 0x7fd830107ee0 2026-03-10T10:14:14.717 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.713+0000 7fd82f577640 1 -- 192.168.123.107:0/3048799087 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd7f4005180 con 0x7fd830107ee0 2026-03-10T10:14:14.719 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.713+0000 7fd8177fe640 1 -- 192.168.123.107:0/3048799087 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7fd824038c00 con 0x7fd830107ee0 2026-03-10T10:14:14.719 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.713+0000 7fd8177fe640 1 --2- 192.168.123.107:0/3048799087 >> v2:192.168.123.104:6800/632047608 conn(0x7fd808077600 0x7fd808079ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:14.719 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.713+0000 7fd8177fe640 1 -- 192.168.123.107:0/3048799087 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(54..54 src has 1..54) ==== 4774+0+0 (secure 0 0 0) 0x7fd8240bdf00 con 0x7fd830107ee0 2026-03-10T10:14:14.719 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.713+0000 7fd82e575640 1 --2- 192.168.123.107:0/3048799087 >> v2:192.168.123.104:6800/632047608 conn(0x7fd808077600 0x7fd808079ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:14.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.713+0000 7fd82e575640 1 --2- 192.168.123.107:0/3048799087 >> v2:192.168.123.104:6800/632047608 conn(0x7fd808077600 0x7fd808079ac0 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7fd81800a8b0 tx=0x7fd818008040 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:14.721 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.713+0000 7fd8177fe640 1 -- 192.168.123.107:0/3048799087 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 
v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd824087630 con 0x7fd830107ee0 2026-03-10T10:14:14.811 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.805+0000 7fd82f577640 1 -- 192.168.123.107:0/3048799087 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"} v 0) -- 0x7fd7f4005470 con 0x7fd830107ee0 2026-03-10T10:14:14.955 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.949+0000 7fd8177fe640 1 -- 192.168.123.107:0/3048799087 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]=0 pool 'datapool' created v55) ==== 160+0+0 (secure 0 0 0) 0x7fd82408b2e0 con 0x7fd830107ee0 2026-03-10T10:14:14.956 INFO:teuthology.orchestra.run.vm07.stderr:pool 'datapool' created 2026-03-10T10:14:14.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 -- 192.168.123.107:0/3048799087 >> v2:192.168.123.104:6800/632047608 conn(0x7fd808077600 msgr2=0x7fd808079ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:14.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 --2- 192.168.123.107:0/3048799087 >> v2:192.168.123.104:6800/632047608 conn(0x7fd808077600 0x7fd808079ac0 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7fd81800a8b0 tx=0x7fd818008040 comp rx=0 tx=0).stop 2026-03-10T10:14:14.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 -- 192.168.123.107:0/3048799087 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd830107ee0 msgr2=0x7fd83019ca20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:14.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 --2- 192.168.123.107:0/3048799087 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd830107ee0 0x7fd83019ca20 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7fd82402f750 tx=0x7fd82402fcb0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 -- 192.168.123.107:0/3048799087 shutdown_connections 2026-03-10T10:14:14.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 --2- 192.168.123.107:0/3048799087 >> v2:192.168.123.104:6800/632047608 conn(0x7fd808077600 0x7fd808079ac0 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 --2- 192.168.123.107:0/3048799087 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd83010a8f0 0x7fd8301a3aa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 --2- 192.168.123.107:0/3048799087 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd830107ee0 0x7fd83019ca20 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:14.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 --2- 192.168.123.107:0/3048799087 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd83006b750 0x7fd83019c4e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-10T10:14:14.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 -- 192.168.123.107:0/3048799087 >> 192.168.123.107:0/3048799087 conn(0x7fd8300fd030 msgr2=0x7fd830108ae0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:14.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.953+0000 7fd82f577640 1 -- 192.168.123.107:0/3048799087 shutdown_connections 2026-03-10T10:14:14.965 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:14.961+0000 7fd82f577640 1 -- 192.168.123.107:0/3048799087 wait complete. 2026-03-10T10:14:15.020 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- rbd pool init datapool 2026-03-10T10:14:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:14 vm07 bash[23367]: cluster 2026-03-10T10:14:13.939651+0000 mon.a (mon.0) 640 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-10T10:14:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:14 vm07 bash[23367]: cluster 2026-03-10T10:14:13.939651+0000 mon.a (mon.0) 640 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in 2026-03-10T10:14:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:14 vm07 bash[23367]: audit 2026-03-10T10:14:13.943401+0000 mon.a (mon.0) 641 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T10:14:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:14 vm07 bash[23367]: audit 2026-03-10T10:14:13.943401+0000 mon.a (mon.0) 641 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-10T10:14:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:14 vm07 bash[23367]: audit 2026-03-10T10:14:14.812534+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.107:0/3048799087' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T10:14:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:14 vm07 bash[23367]: audit 2026-03-10T10:14:14.812534+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.107:0/3048799087' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T10:14:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:14 vm07 bash[23367]: audit 2026-03-10T10:14:14.812905+0000 mon.a (mon.0) 642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-10T10:14:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:14 vm07 bash[23367]: audit 2026-03-10T10:14:14.812905+0000 mon.a (mon.0) 642 : audit [INF] from='client.? 
2026-03-10T10:14:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:14 vm04 bash[28289]: cluster 2026-03-10T10:14:13.939651+0000 mon.a (mon.0) 640 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T10:14:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:14 vm04 bash[28289]: audit 2026-03-10T10:14:13.943401+0000 mon.a (mon.0) 641 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
2026-03-10T10:14:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:14 vm04 bash[28289]: audit 2026-03-10T10:14:14.812534+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.107:0/3048799087' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-10T10:14:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:14 vm04 bash[28289]: audit 2026-03-10T10:14:14.812905+0000 mon.a (mon.0) 642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-10T10:14:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:14 vm04 bash[20742]: cluster 2026-03-10T10:14:13.939651+0000 mon.a (mon.0) 640 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T10:14:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:14 vm04 bash[20742]: audit 2026-03-10T10:14:13.943401+0000 mon.a (mon.0) 641 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
2026-03-10T10:14:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:14 vm04 bash[20742]: audit 2026-03-10T10:14:14.812534+0000 mon.c (mon.2) 21 : audit [INF] from='client.? 192.168.123.107:0/3048799087' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-10T10:14:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:14 vm04 bash[20742]: audit 2026-03-10T10:14:14.812905+0000 mon.a (mon.0) 642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch
2026-03-10T10:14:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:15 vm07 bash[23367]: audit 2026-03-10T10:14:14.930039+0000 mon.a (mon.0) 643 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
2026-03-10T10:14:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:15 vm07 bash[23367]: audit 2026-03-10T10:14:14.930090+0000 mon.a (mon.0) 644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished
2026-03-10T10:14:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:15 vm07 bash[23367]: cluster 2026-03-10T10:14:14.935122+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-10T10:14:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:15 vm07 bash[23367]: audit 2026-03-10T10:14:15.490597+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:15 vm07 bash[23367]: cluster 2026-03-10T10:14:15.648352+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v218: 68 pgs: 9 creating+peering, 47 unknown, 12 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:14:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:15 vm07 bash[23367]: audit 2026-03-10T10:14:15.895406+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:15 vm07 bash[23367]: audit 2026-03-10T10:14:15.901939+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:15 vm07 bash[23367]: cluster 2026-03-10T10:14:15.935240+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T10:14:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:15 vm07 bash[23367]: audit 2026-03-10T10:14:15.935683+0000 mon.a (mon.0) 650 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
2026-03-10T10:14:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:15 vm04 bash[28289]: audit 2026-03-10T10:14:14.930039+0000 mon.a (mon.0) 643 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
2026-03-10T10:14:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:15 vm04 bash[28289]: audit 2026-03-10T10:14:14.930090+0000 mon.a (mon.0) 644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished
2026-03-10T10:14:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:15 vm04 bash[28289]: cluster 2026-03-10T10:14:14.935122+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-10T10:14:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:15 vm04 bash[28289]: audit 2026-03-10T10:14:15.490597+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:15 vm04 bash[28289]: cluster 2026-03-10T10:14:15.648352+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v218: 68 pgs: 9 creating+peering, 47 unknown, 12 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:14:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:15 vm04 bash[28289]: audit 2026-03-10T10:14:15.895406+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:15 vm04 bash[28289]: audit 2026-03-10T10:14:15.901939+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:15 vm04 bash[28289]: cluster 2026-03-10T10:14:15.935240+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T10:14:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:15 vm04 bash[28289]: audit 2026-03-10T10:14:15.935683+0000 mon.a (mon.0) 650 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
2026-03-10T10:14:16.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:15 vm04 bash[20742]: audit 2026-03-10T10:14:14.930039+0000 mon.a (mon.0) 643 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
2026-03-10T10:14:16.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:15 vm04 bash[20742]: audit 2026-03-10T10:14:14.930090+0000 mon.a (mon.0) 644 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished
2026-03-10T10:14:16.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:15 vm04 bash[20742]: cluster 2026-03-10T10:14:14.935122+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-10T10:14:16.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:15 vm04 bash[20742]: audit 2026-03-10T10:14:15.490597+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:16.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:15 vm04 bash[20742]: cluster 2026-03-10T10:14:15.648352+0000 mgr.y (mgr.14150) 241 : cluster [DBG] pgmap v218: 68 pgs: 9 creating+peering, 47 unknown, 12 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:14:16.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:15 vm04 bash[20742]: audit 2026-03-10T10:14:15.895406+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:16.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:15 vm04 bash[20742]: audit 2026-03-10T10:14:15.901939+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:16.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:15 vm04 bash[20742]: cluster 2026-03-10T10:14:15.935240+0000 mon.a (mon.0) 649 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T10:14:16.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:15 vm04 bash[20742]: audit 2026-03-10T10:14:15.935683+0000 mon.a (mon.0) 650 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
2026-03-10T10:14:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:16 vm07 bash[23367]: audit 2026-03-10T10:14:16.203046+0000 mon.a (mon.0) 651 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:16 vm07 bash[23367]: audit 2026-03-10T10:14:16.203664+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:16 vm07 bash[23367]: cephadm 2026-03-10T10:14:16.205956+0000 mgr.y (mgr.14150) 242 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T10:14:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:16 vm07 bash[23367]: cluster 2026-03-10T10:14:16.489006+0000 mon.a (mon.0) 653 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:14:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:16 vm07 bash[23367]: audit 2026-03-10T10:14:16.936069+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
2026-03-10T10:14:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:16 vm07 bash[23367]: cluster 2026-03-10T10:14:16.940501+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-10T10:14:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:16 vm04 bash[28289]: audit 2026-03-10T10:14:16.203046+0000 mon.a (mon.0) 651 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:16 vm04 bash[28289]: audit 2026-03-10T10:14:16.203664+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:16 vm04 bash[28289]: cephadm 2026-03-10T10:14:16.205956+0000 mgr.y (mgr.14150) 242 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T10:14:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:16 vm04 bash[28289]: cluster 2026-03-10T10:14:16.489006+0000 mon.a (mon.0) 653 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:14:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:16 vm04 bash[28289]: audit 2026-03-10T10:14:16.936069+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
2026-03-10T10:14:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:16 vm04 bash[28289]: cluster 2026-03-10T10:14:16.940501+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-10T10:14:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:16 vm04 bash[20742]: audit 2026-03-10T10:14:16.203046+0000 mon.a (mon.0) 651 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:16 vm04 bash[20742]: audit 2026-03-10T10:14:16.203664+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:16 vm04 bash[20742]: cephadm 2026-03-10T10:14:16.205956+0000 mgr.y (mgr.14150) 242 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T10:14:17.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:16 vm04 bash[20742]: cluster 2026-03-10T10:14:16.489006+0000 mon.a (mon.0) 653 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:14:17.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:16 vm04 bash[20742]: audit 2026-03-10T10:14:16.936069+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
2026-03-10T10:14:17.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:16 vm04 bash[20742]: cluster 2026-03-10T10:14:16.940501+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-10T10:14:18.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:17 vm07 bash[23367]: cluster 2026-03-10T10:14:17.648675+0000 mgr.y (mgr.14150) 243 : cluster [DBG] pgmap v221: 100 pgs: 21 creating+peering, 26 unknown, 53 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T10:14:18.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:17 vm07 bash[23367]: cluster 2026-03-10T10:14:17.943513+0000 mon.a (mon.0) 656 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-10T10:14:18.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:17 vm07 bash[23367]: audit 2026-03-10T10:14:17.944125+0000 mon.a (mon.0) 657 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T10:14:18.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:17 vm07 bash[23367]: audit 2026-03-10T10:14:17.949701+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.104:0/142367779' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T10:14:18.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:17 vm07 bash[23367]: audit 2026-03-10T10:14:17.954006+0000 mon.a (mon.0) 658 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T10:14:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:17 vm04 bash[28289]: cluster 2026-03-10T10:14:17.648675+0000 mgr.y (mgr.14150) 243 : cluster [DBG] pgmap v221: 100 pgs: 21 creating+peering, 26 unknown, 53 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T10:14:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:17 vm04 bash[28289]: cluster 2026-03-10T10:14:17.943513+0000 mon.a (mon.0) 656 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-10T10:14:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:17 vm04 bash[28289]: audit 2026-03-10T10:14:17.944125+0000 mon.a (mon.0) 657 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T10:14:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:17 vm04 bash[28289]: audit 2026-03-10T10:14:17.949701+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.104:0/142367779' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T10:14:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:17 vm04 bash[28289]: audit 2026-03-10T10:14:17.954006+0000 mon.a (mon.0) 658 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T10:14:18.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:17 vm04 bash[20742]: cluster 2026-03-10T10:14:17.648675+0000 mgr.y (mgr.14150) 243 : cluster [DBG] pgmap v221: 100 pgs: 21 creating+peering, 26 unknown, 53 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T10:14:18.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:17 vm04 bash[20742]: cluster 2026-03-10T10:14:17.943513+0000 mon.a (mon.0) 656 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-10T10:14:18.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:17 vm04 bash[20742]: audit 2026-03-10T10:14:17.944125+0000 mon.a (mon.0) 657 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T10:14:18.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:17 vm04 bash[20742]: audit 2026-03-10T10:14:17.949701+0000 mon.c (mon.2) 22 : audit [INF] from='client.? 192.168.123.104:0/142367779' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T10:14:18.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:17 vm04 bash[20742]: audit 2026-03-10T10:14:17.954006+0000 mon.a (mon.0) 658 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T10:14:19.635 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:19 vm04 bash[28289]: audit 2026-03-10T10:14:18.942301+0000 mon.a (mon.0) 659 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:19 vm04 bash[28289]: audit 2026-03-10T10:14:18.942375+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:19 vm04 bash[28289]: audit 2026-03-10T10:14:18.952112+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.104:0/142367779' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:19 vm04 bash[28289]: cluster 2026-03-10T10:14:18.955930+0000 mon.a (mon.0) 661 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:19 vm04 bash[28289]: audit 2026-03-10T10:14:18.957735+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:19 vm04 bash[28289]: audit 2026-03-10T10:14:18.958392+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:19 vm04 bash[28289]: cluster 2026-03-10T10:14:19.648974+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v224: 132 pgs: 16 creating+peering, 48 unknown, 68 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 3 op/s
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:19 vm04 bash[28289]: audit 2026-03-10T10:14:19.755045+0000 mon.b (mon.1) 25 : audit [INF] from='client.? 192.168.123.107:0/887750058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:19 vm04 bash[28289]: audit 2026-03-10T10:14:19.756260+0000 mon.a (mon.0) 664 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:19 vm04 bash[20742]: audit 2026-03-10T10:14:18.942301+0000 mon.a (mon.0) 659 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T10:14:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:19 vm04 bash[20742]: audit 2026-03-10T10:14:18.942375+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T10:14:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:19 vm04 bash[20742]: audit 2026-03-10T10:14:18.952112+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.104:0/142367779' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T10:14:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:19 vm04 bash[20742]: cluster 2026-03-10T10:14:18.955930+0000 mon.a (mon.0) 661 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T10:14:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:19 vm04 bash[20742]: audit 2026-03-10T10:14:18.957735+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T10:14:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:19 vm04 bash[20742]: audit 2026-03-10T10:14:18.958392+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T10:14:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:19 vm04 bash[20742]: cluster 2026-03-10T10:14:19.648974+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v224: 132 pgs: 16 creating+peering, 48 unknown, 68 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 3 op/s
2026-03-10T10:14:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:19 vm04 bash[20742]: audit 2026-03-10T10:14:19.755045+0000 mon.b (mon.1) 25 : audit [INF] from='client.? 192.168.123.107:0/887750058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-10T10:14:20.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:19 vm04 bash[20742]: audit 2026-03-10T10:14:19.756260+0000 mon.a (mon.0) 664 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-10T10:14:20.204 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[53425]: debug 2026-03-10T10:14:20.041+0000 7fbe926e2980 -1 LDAP not started since no server URIs were provided in the configuration.
2026-03-10T10:14:20.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:19 vm07 bash[23367]: audit 2026-03-10T10:14:18.942301+0000 mon.a (mon.0) 659 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T10:14:20.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:19 vm07 bash[23367]: audit 2026-03-10T10:14:18.942375+0000 mon.a (mon.0) 660 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
2026-03-10T10:14:20.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:19 vm07 bash[23367]: audit 2026-03-10T10:14:18.952112+0000 mon.c (mon.2) 23 : audit [INF] from='client.? 192.168.123.104:0/142367779' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T10:14:20.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:19 vm07 bash[23367]: cluster 2026-03-10T10:14:18.955930+0000 mon.a (mon.0) 661 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T10:14:20.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:19 vm07 bash[23367]: audit 2026-03-10T10:14:18.957735+0000 mon.a (mon.0) 662 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T10:14:20.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:19 vm07 bash[23367]: audit 2026-03-10T10:14:18.958392+0000 mon.a (mon.0) 663 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T10:14:20.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:19 vm07 bash[23367]: cluster 2026-03-10T10:14:19.648974+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v224: 132 pgs: 16 creating+peering, 48 unknown, 68 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 3 op/s
2026-03-10T10:14:20.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:19 vm07 bash[23367]: audit 2026-03-10T10:14:19.755045+0000 mon.b (mon.1) 25 : audit [INF] from='client.? 192.168.123.107:0/887750058' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-10T10:14:20.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:19 vm07 bash[23367]: audit 2026-03-10T10:14:19.756260+0000 mon.a (mon.0) 664 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch
2026-03-10T10:14:21.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: audit 2026-03-10T10:14:19.948994+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-10T10:14:21.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: audit 2026-03-10T10:14:19.949033+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-10T10:14:21.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: audit 2026-03-10T10:14:19.949058+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished
2026-03-10T10:14:21.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: cluster 2026-03-10T10:14:19.956699+0000 mon.a (mon.0) 668 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in
2026-03-10T10:14:21.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: audit 2026-03-10T10:14:20.214730+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:21.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: audit 2026-03-10T10:14:20.222529+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:21.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: audit 2026-03-10T10:14:20.233758+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: audit 2026-03-10T10:14:20.274109+0000 mon.a (mon.0) 672 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: audit 2026-03-10T10:14:20.583524+0000 mon.a (mon.0) 673 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: audit 2026-03-10T10:14:20.584143+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: cephadm 2026-03-10T10:14:20.586291+0000 mgr.y (mgr.14150) 245 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:20 vm04 bash[28289]: audit 2026-03-10T10:14:20.737307+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:19.948994+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:19.949033+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:19.949058+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: cluster 2026-03-10T10:14:19.956699+0000 mon.a (mon.0) 668 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:20.214730+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:20.222529+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:20.233758+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:20.274109+0000 mon.a (mon.0) 672 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:20.583524+0000 mon.a (mon.0) 673 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:20.583524+0000 mon.a (mon.0) 673
: audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:20.584143+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:20.584143+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: cephadm 2026-03-10T10:14:20.586291+0000 mgr.y (mgr.14150) 245 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: cephadm 2026-03-10T10:14:20.586291+0000 mgr.y (mgr.14150) 245 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:20.737307+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:21.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:20 vm04 bash[20742]: audit 2026-03-10T10:14:20.737307+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:19.948994+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:19.948994+0000 mon.a (mon.0) 665 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:19.949033+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:19.949033+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.104:0/3653639074' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:19.949058+0000 mon.a (mon.0) 667 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:19.949058+0000 mon.a (mon.0) 667 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: cluster 2026-03-10T10:14:19.956699+0000 mon.a (mon.0) 668 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: cluster 2026-03-10T10:14:19.956699+0000 mon.a (mon.0) 668 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.214730+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.214730+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.222529+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.222529+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.233758+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.233758+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.274109+0000 mon.a (mon.0) 672 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.274109+0000 mon.a (mon.0) 672 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.583524+0000 mon.a (mon.0) 673 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:21.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.583524+0000 mon.a (mon.0) 673 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:14:21.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.584143+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:14:21.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.584143+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:14:21.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: cephadm 2026-03-10T10:14:20.586291+0000 mgr.y (mgr.14150) 245 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T10:14:21.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: cephadm 2026-03-10T10:14:20.586291+0000 mgr.y (mgr.14150) 245 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T10:14:21.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.737307+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:21.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:20 vm07 bash[23367]: audit 2026-03-10T10:14:20.737307+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' 2026-03-10T10:14:22.093 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.107 --placement '1;vm07=iscsi.a' 2026-03-10T10:14:22.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:22 vm07 bash[23367]: cluster 2026-03-10T10:14:20.960029+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T10:14:22.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:22 vm07 bash[23367]: cluster 2026-03-10T10:14:20.960029+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T10:14:22.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:22 vm07 bash[23367]: cluster 2026-03-10T10:14:21.649305+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v227: 132 pgs: 9 creating+peering, 13 unknown, 110 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 5.0 KiB/s wr, 128 op/s 2026-03-10T10:14:22.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:22 vm07 bash[23367]: cluster 2026-03-10T10:14:21.649305+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v227: 132 pgs: 9 creating+peering, 13 unknown, 110 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 5.0 KiB/s wr, 128 op/s 2026-03-10T10:14:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:22 vm04 bash[28289]: cluster 2026-03-10T10:14:20.960029+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T10:14:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:22 vm04 bash[28289]: cluster 2026-03-10T10:14:20.960029+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T10:14:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:22 vm04 bash[28289]: cluster 2026-03-10T10:14:21.649305+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v227: 132 pgs: 9 creating+peering, 13 unknown, 110 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 5.0 KiB/s wr, 128 op/s 2026-03-10T10:14:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:22 vm04 bash[28289]: cluster 2026-03-10T10:14:21.649305+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v227: 132 pgs: 9 creating+peering, 13 unknown, 110 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 5.0 KiB/s wr, 128 op/s 2026-03-10T10:14:22.453 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:22 vm04 bash[20742]: cluster 2026-03-10T10:14:20.960029+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T10:14:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:22 vm04 bash[20742]: cluster 2026-03-10T10:14:20.960029+0000 mon.a (mon.0) 676 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-10T10:14:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:22 vm04 bash[20742]: cluster 2026-03-10T10:14:21.649305+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v227: 132 pgs: 9 creating+peering, 13 unknown, 110 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 5.0 KiB/s wr, 128 op/s 2026-03-10T10:14:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:22 vm04 bash[20742]: cluster 2026-03-10T10:14:21.649305+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v227: 132 pgs: 9 creating+peering, 13 unknown, 110 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 54 KiB/s rd, 5.0 KiB/s wr, 128 op/s 2026-03-10T10:14:23.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:23 vm07 bash[23367]: cluster 2026-03-10T10:14:21.958561+0000 mon.a (mon.0) 677 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T10:14:23.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:23 vm07 bash[23367]: cluster 2026-03-10T10:14:21.958561+0000 mon.a (mon.0) 677 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T10:14:23.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:23 vm07 bash[23367]: cluster 2026-03-10T10:14:21.958591+0000 mon.a (mon.0) 678 : cluster [INF] Cluster is now healthy 2026-03-10T10:14:23.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:23 vm07 bash[23367]: cluster 2026-03-10T10:14:21.958591+0000 mon.a (mon.0) 678 : cluster [INF] Cluster is now healthy 2026-03-10T10:14:23.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:23 vm07 bash[23367]: cluster 2026-03-10T10:14:22.010506+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T10:14:23.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:23 vm07 bash[23367]: cluster 2026-03-10T10:14:22.010506+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T10:14:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:23 vm04 bash[28289]: cluster 2026-03-10T10:14:21.958561+0000 mon.a (mon.0) 677 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T10:14:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:23 vm04 bash[28289]: cluster 2026-03-10T10:14:21.958561+0000 mon.a (mon.0) 677 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T10:14:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:23 vm04 bash[28289]: cluster 2026-03-10T10:14:21.958591+0000 mon.a (mon.0) 678 : cluster [INF] Cluster is now healthy 2026-03-10T10:14:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:23 vm04 bash[28289]: cluster 2026-03-10T10:14:21.958591+0000 mon.a (mon.0) 678 : cluster [INF] Cluster is now healthy 2026-03-10T10:14:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:23 vm04 bash[28289]: cluster 2026-03-10T10:14:22.010506+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T10:14:23.453 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:23 vm04 bash[28289]: cluster 2026-03-10T10:14:22.010506+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T10:14:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:23 vm04 bash[20742]: cluster 2026-03-10T10:14:21.958561+0000 mon.a (mon.0) 677 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T10:14:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:23 vm04 bash[20742]: cluster 2026-03-10T10:14:21.958561+0000 mon.a (mon.0) 677 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T10:14:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:23 vm04 bash[20742]: cluster 2026-03-10T10:14:21.958591+0000 mon.a (mon.0) 678 : cluster [INF] Cluster is now healthy 2026-03-10T10:14:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:23 vm04 bash[20742]: cluster 2026-03-10T10:14:21.958591+0000 mon.a (mon.0) 678 : cluster [INF] Cluster is now healthy 2026-03-10T10:14:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:23 vm04 bash[20742]: cluster 2026-03-10T10:14:22.010506+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T10:14:23.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:23 vm04 bash[20742]: cluster 2026-03-10T10:14:22.010506+0000 mon.a (mon.0) 679 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T10:14:24.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:24 vm07 bash[23367]: cluster 2026-03-10T10:14:23.649689+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 132 active+clean; 453 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 76 KiB/s rd, 6.2 KiB/s wr, 183 op/s 2026-03-10T10:14:24.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:24 vm07 bash[23367]: cluster 2026-03-10T10:14:23.649689+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 132 active+clean; 453 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 76 KiB/s rd, 6.2 KiB/s wr, 183 op/s 2026-03-10T10:14:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:24 vm04 bash[28289]: cluster 2026-03-10T10:14:23.649689+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 132 active+clean; 453 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 76 KiB/s rd, 6.2 KiB/s wr, 183 op/s 2026-03-10T10:14:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:24 vm04 bash[28289]: cluster 2026-03-10T10:14:23.649689+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 132 active+clean; 453 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 76 KiB/s rd, 6.2 KiB/s wr, 183 op/s 2026-03-10T10:14:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:24 vm04 bash[20742]: cluster 2026-03-10T10:14:23.649689+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 132 active+clean; 453 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 76 KiB/s rd, 6.2 KiB/s wr, 183 op/s 2026-03-10T10:14:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:24 vm04 bash[20742]: cluster 2026-03-10T10:14:23.649689+0000 mgr.y (mgr.14150) 247 : cluster [DBG] pgmap v229: 132 pgs: 132 active+clean; 453 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 76 KiB/s rd, 6.2 KiB/s wr, 183 op/s 2026-03-10T10:14:26.718 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config 2026-03-10T10:14:26.851 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 -- 192.168.123.107:0/989501182 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd070100660 msgr2=0x7fd070100a60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 --2- 192.168.123.107:0/989501182 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd070100660 0x7fd070100a60 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7fd064009a30 tx=0x7fd06402f220 comp rx=0 tx=0).stop 2026-03-10T10:14:26.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 -- 192.168.123.107:0/989501182 shutdown_connections 2026-03-10T10:14:26.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 --2- 192.168.123.107:0/989501182 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd070108850 0x7fd07010ac40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 --2- 192.168.123.107:0/989501182 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd070101030 0x7fd070108310 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 --2- 192.168.123.107:0/989501182 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd070100660 0x7fd070100a60 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 -- 192.168.123.107:0/989501182 >> 192.168.123.107:0/989501182 conn(0x7fd0700fc410 msgr2=0x7fd0700fe850 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:26.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 -- 192.168.123.107:0/989501182 shutdown_connections 2026-03-10T10:14:26.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 -- 192.168.123.107:0/989501182 wait complete. 
2026-03-10T10:14:26.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 Processor -- start 2026-03-10T10:14:26.851 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 -- start start 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd070100660 0x7fd070103bd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd070101030 0x7fd070102220 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd070108850 0x7fd070102760 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fd07010dc00 con 0x7fd070100660 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fd07010da80 con 0x7fd070101030 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd077ea5640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fd07010dd80 con 0x7fd070108850 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd075419640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd070101030 0x7fd070102220 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd075419640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd070101030 0x7fd070102220 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:41342/0 (socket says 192.168.123.107:41342) 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd075419640 1 -- 192.168.123.107:0/1892841466 learned_addr learned my addr 192.168.123.107:0/1892841466 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd07641b640 1 --2- 192.168.123.107:0/1892841466 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd070108850 0x7fd070102760 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:26.852 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd075c1a640 1 --2- 192.168.123.107:0/1892841466 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd070100660 0x7fd070103bd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-10T10:14:26.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd075c1a640 1 -- 192.168.123.107:0/1892841466 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd070108850 msgr2=0x7fd070102760 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd075c1a640 1 --2- 192.168.123.107:0/1892841466 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd070108850 0x7fd070102760 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd075c1a640 1 -- 192.168.123.107:0/1892841466 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd070101030 msgr2=0x7fd070102220 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd075c1a640 1 --2- 192.168.123.107:0/1892841466 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd070101030 0x7fd070102220 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd075c1a640 1 -- 192.168.123.107:0/1892841466 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd07019e250 con 0x7fd070100660 2026-03-10T10:14:26.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.845+0000 7fd075419640 1 --2- 192.168.123.107:0/1892841466 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd070101030 0x7fd070102220 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-10T10:14:26.853 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd075c1a640 1 --2- 192.168.123.107:0/1892841466 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd070100660 0x7fd070103bd0 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7fd064009a00 tx=0x7fd06402fdf0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:26.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd05effd640 1 -- 192.168.123.107:0/1892841466 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd064004280 con 0x7fd070100660 2026-03-10T10:14:26.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd05effd640 1 -- 192.168.123.107:0/1892841466 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fd064004420 con 0x7fd070100660 2026-03-10T10:14:26.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd05effd640 1 -- 192.168.123.107:0/1892841466 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd064005590 con 0x7fd070100660 2026-03-10T10:14:26.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd077ea5640 1 -- 192.168.123.107:0/1892841466 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd070102d00 con 0x7fd070100660 2026-03-10T10:14:26.854 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd077ea5640 1 -- 192.168.123.107:0/1892841466 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd0701a8ea0 con 0x7fd070100660 2026-03-10T10:14:26.855 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd05effd640 1 -- 192.168.123.107:0/1892841466 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 15) ==== 100000+0+0 (secure 0 0 0) 0x7fd0640047c0 con 0x7fd070100660 2026-03-10T10:14:26.855 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd077ea5640 1 -- 192.168.123.107:0/1892841466 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd038005180 con 0x7fd070100660 2026-03-10T10:14:26.855 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd05effd640 1 --2- 192.168.123.107:0/1892841466 >> v2:192.168.123.104:6800/632047608 conn(0x7fd048077620 0x7fd048079ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:26.855 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd05effd640 1 -- 192.168.123.107:0/1892841466 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(62..62 src has 1..62) ==== 5950+0+0 (secure 0 0 0) 0x7fd0640bdf40 con 0x7fd070100660 2026-03-10T10:14:26.855 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd075419640 1 --2- 192.168.123.107:0/1892841466 >> v2:192.168.123.104:6800/632047608 conn(0x7fd048077620 0x7fd048079ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:26.856 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.849+0000 7fd075419640 1 --2- 192.168.123.107:0/1892841466 >> v2:192.168.123.104:6800/632047608 conn(0x7fd048077620 0x7fd048079ae0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7fd070103950 tx=0x7fd06000a430 comp rx=0 tx=0).ready 
entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:26.858 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.853+0000 7fd05effd640 1 -- 192.168.123.107:0/1892841466 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd06408ae90 con 0x7fd070100660 2026-03-10T10:14:26.951 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.945+0000 7fd077ea5640 1 -- 192.168.123.107:0/1892841466 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.107", "placement": "1;vm07=iscsi.a", "target": ["mon-mgr", ""]}) -- 0x7fd038002cc0 con 0x7fd048077620 2026-03-10T10:14:26.958 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd05effd640 1 -- 192.168.123.107:0/1892841466 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+35 (secure 0 0 0) 0x7fd038002cc0 con 0x7fd048077620 2026-03-10T10:14:26.958 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled iscsi.datapool update... 2026-03-10T10:14:26.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd077ea5640 1 -- 192.168.123.107:0/1892841466 >> v2:192.168.123.104:6800/632047608 conn(0x7fd048077620 msgr2=0x7fd048079ae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.960 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd077ea5640 1 --2- 192.168.123.107:0/1892841466 >> v2:192.168.123.104:6800/632047608 conn(0x7fd048077620 0x7fd048079ae0 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7fd070103950 tx=0x7fd06000a430 comp rx=0 tx=0).stop 2026-03-10T10:14:26.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd077ea5640 1 -- 192.168.123.107:0/1892841466 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd070100660 msgr2=0x7fd070103bd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:26.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd077ea5640 1 --2- 192.168.123.107:0/1892841466 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd070100660 0x7fd070103bd0 secure :-1 s=READY pgs=136 cs=0 l=1 rev1=1 crypto rx=0x7fd064009a00 tx=0x7fd06402fdf0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd077ea5640 1 -- 192.168.123.107:0/1892841466 shutdown_connections 2026-03-10T10:14:26.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd077ea5640 1 --2- 192.168.123.107:0/1892841466 >> v2:192.168.123.104:6800/632047608 conn(0x7fd048077620 0x7fd048079ae0 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd077ea5640 1 --2- 192.168.123.107:0/1892841466 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd070108850 0x7fd070102760 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:26.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd077ea5640 1 --2- 192.168.123.107:0/1892841466 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd070101030 0x7fd070102220 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
2026-03-10T10:14:26.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd077ea5640 1 --2- 192.168.123.107:0/1892841466 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd070100660 0x7fd070103bd0 unknown :-1 s=CLOSED pgs=136 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:26.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.953+0000 7fd077ea5640 1 -- 192.168.123.107:0/1892841466 >> 192.168.123.107:0/1892841466 conn(0x7fd0700fc410 msgr2=0x7fd0701042f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:26.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.957+0000 7fd077ea5640 1 -- 192.168.123.107:0/1892841466 shutdown_connections
2026-03-10T10:14:26.961 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:26.957+0000 7fd077ea5640 1 -- 192.168.123.107:0/1892841466 wait complete.
2026-03-10T10:14:26.972 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:26 vm07 bash[23367]: cluster 2026-03-10T10:14:25.650105+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v230: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 81 KiB/s rd, 6.2 KiB/s wr, 191 op/s
2026-03-10T10:14:27.025 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg...
2026-03-10T10:14:27.025 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-10T10:14:27.025 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-10T10:14:27.030 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:26 vm04 bash[28289]: cluster 2026-03-10T10:14:25.650105+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v230: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 81 KiB/s rd, 6.2 KiB/s wr, 191 op/s
2026-03-10T10:14:27.031 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:26 vm04 bash[20742]: cluster 2026-03-10T10:14:25.650105+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v230: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 81 KiB/s rd, 6.2 KiB/s wr, 191 op/s
2026-03-10T10:14:27.032 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T10:14:27.032 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-10T10:14:27.040 DEBUG:teuthology.orchestra.run.vm07:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@iscsi.iscsi.a.service
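Note on the "Distributing iscsi-gateway.cfg..." step above: the task streams a config file into /etc/ceph/iscsi-gateway.cfg on both nodes via 'sudo dd'. The log does not show the file body, so the following is a sketch only, assuming the classic ceph-iscsi [config] layout with values mirroring this run's apply command (api_user/api_password admin, trusted_ip_list 192.168.123.107); every key here is an assumption about what the test writes:

# Sketch only: plausible iscsi-gateway.cfg contents for this run (assumed, not from the log).
sudo tee /etc/ceph/iscsi-gateway.cfg <<'EOF'
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
api_secure = false
api_user = admin
api_password = admin
trusted_ip_list = 192.168.123.107
EOF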
2026-03-10T10:14:27.082 INFO:tasks.cephadm:Adding prometheus.a on vm07
2026-03-10T10:14:27.082 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch apply prometheus '1;vm07=a'
2026-03-10T10:14:27.265 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:27.530 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:27.530 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:27.530 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:27.530 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:27.530 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:27.531 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:27.531 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:27.794 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:27.794 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:27.794 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:27.794 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:27.794 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:27.795 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:27.795 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:27.795 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:27 vm07 systemd[1]: Started Ceph iscsi.iscsi.a for e4c1c9d6-1c68-11f1-a9bd-116050875839.
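Note on the KillMode warning repeated above: systemd flags line 23 of the cephadm-generated unit template for this fsid. If one wanted to act on the suggestion, a drop-in override is the usual mechanism; a minimal sketch, assuming the unit template name from the log, and assuming (not established by this log) that cephadm's container lifecycle tolerates 'mixed':

# Sketch only: drop-in override using one of the safer KillMode values
# systemd itself suggests ('mixed' or 'control-group').
sudo mkdir -p /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d
sudo tee /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d/killmode.conf <<'EOF'
[Service]
KillMode=mixed
EOF
# Reload unit definitions so the override takes effect on next daemon restart.
sudo systemctl daemon-reload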
2026-03-10T10:14:28.083 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:26.953055+0000 mgr.y (mgr.14150) 249 : audit [DBG] from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.107", "placement": "1;vm07=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:28.083 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: cephadm 2026-03-10T10:14:26.954188+0000 mgr.y (mgr.14150) 250 : cephadm [INF] Saving service iscsi.datapool spec with placement vm07=iscsi.a;count:1
2026-03-10T10:14:28.083 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:26.958871+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.083 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:26.959666+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:26.963216+0000 mon.a (mon.0) 682 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:26.963600+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:26.967626+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:26.969697+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:26.972828+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:26.979138+0000 mon.a (mon.0) 687 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: cephadm 2026-03-10T10:14:26.979754+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm07
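Note on audit records 685/686 above: before deploying iscsi.iscsi.a, cephadm mints the daemon's keyring via "auth get-or-create". A sketch of the standalone CLI equivalent, with the caps copied verbatim from the audit record; running this by hand is normally unnecessary since cephadm issues it itself:

# Sketch only: hand-issued form of the capability grant cephadm dispatched above.
ceph auth get-or-create client.iscsi.iscsi.a \
  mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
  mgr 'allow command "service status"' \
  osd 'allow rwx'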
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: cluster 2026-03-10T10:14:27.650548+0000 mgr.y (mgr.14150) 252 : cluster [DBG] pgmap v231: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 4.7 KiB/s wr, 148 op/s
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:27.794448+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:27.803219+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:27.808677+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:27.816720+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.084 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:27 vm07 bash[23367]: audit 2026-03-10T10:14:27.825959+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:28.084 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: debug Started the configuration object watcher
2026-03-10T10:14:28.084 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: debug Checking for config object changes every 1s
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:26.953055+0000 mgr.y (mgr.14150) 249 : audit [DBG] from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.107", "placement": "1;vm07=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: cephadm 2026-03-10T10:14:26.954188+0000 mgr.y (mgr.14150) 250 : cephadm [INF] Saving service iscsi.datapool spec with placement vm07=iscsi.a;count:1
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:26.958871+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:26.959666+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:26.963216+0000 mon.a (mon.0) 682 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:26.963600+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:26.967626+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:26.969697+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:26.972828+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:26.979138+0000 mon.a (mon.0) 687 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: cephadm 2026-03-10T10:14:26.979754+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm07
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: cluster 2026-03-10T10:14:27.650548+0000 mgr.y (mgr.14150) 252 : cluster [DBG] pgmap v231: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 4.7 KiB/s wr, 148 op/s
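Entries 685-686, relayed here by each monitor's journal, show the exact capabilities cephadm requests for the gateway's client key. As a standalone sketch, with the caps copied verbatim from the audit payload and only the shell quoting added:

    # Sketch: the equivalent manual key creation for the iSCSI gateway.
    ceph auth get-or-create client.iscsi.iscsi.a \
        mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
        mgr 'allow command "service status"' \
        osd 'allow rwx'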
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:27.794448+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:27.803219+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:27.808677+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:27.816720+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:27 vm04 bash[28289]: audit 2026-03-10T10:14:27.825959+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:26.953055+0000 mgr.y (mgr.14150) 249 : audit [DBG] from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.107", "placement": "1;vm07=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: cephadm 2026-03-10T10:14:26.954188+0000 mgr.y (mgr.14150) 250 : cephadm [INF] Saving service iscsi.datapool spec with placement vm07=iscsi.a;count:1
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:26.958871+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:26.959666+0000 mon.a (mon.0) 681 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:26.963216+0000 mon.a (mon.0) 682 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:26.963600+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:26.967626+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:26.969697+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:26.972828+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:26.979138+0000 mon.a (mon.0) 687 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: cephadm 2026-03-10T10:14:26.979754+0000 mgr.y (mgr.14150) 251 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm07
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: cluster 2026-03-10T10:14:27.650548+0000 mgr.y (mgr.14150) 252 : cluster [DBG] pgmap v231: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 63 KiB/s rd, 4.7 KiB/s wr, 148 op/s
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:27.794448+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:27.803219+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:27.808677+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:27.816720+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:27 vm04 bash[20742]: audit 2026-03-10T10:14:27.825959+0000 mon.a (mon.0) 692 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: debug Processing osd blocklist entries for this node
2026-03-10T10:14:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: debug Reading the configuration object to update local LIO configuration
2026-03-10T10:14:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: debug Configuration does not have an entry for this host(vm07.local) - nothing to define to LIO
2026-03-10T10:14:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: * Serving Flask app 'rbd-target-api' (lazy loading)
2026-03-10T10:14:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: * Environment: production
2026-03-10T10:14:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-10T10:14:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: Use a production WSGI server instead.
2026-03-10T10:14:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: * Debug mode: off
2026-03-10T10:14:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: debug * Running on all addresses.
2026-03-10T10:14:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:28 vm07 bash[48477]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-10T10:14:29.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:28 vm07 bash[23367]: cephadm 2026-03-10T10:14:27.809044+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-10T10:14:29.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:28 vm07 bash[23367]: audit 2026-03-10T10:14:28.298857+0000 mon.a (mon.0) 693 : audit [DBG] from='client.? 192.168.123.107:0/1141813312' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T10:14:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:29 vm04 bash[28289]: cephadm 2026-03-10T10:14:27.809044+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-10T10:14:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:29 vm04 bash[28289]: audit 2026-03-10T10:14:28.298857+0000 mon.a (mon.0) 693 : audit [DBG] from='client.? 192.168.123.107:0/1141813312' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T10:14:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:28 vm04 bash[20742]: cephadm 2026-03-10T10:14:27.809044+0000 mgr.y (mgr.14150) 253 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-10T10:14:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:29 vm04 bash[20742]: audit 2026-03-10T10:14:28.298857+0000 mon.a (mon.0) 693 : audit [DBG] from='client.? 192.168.123.107:0/1141813312' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
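Audit entry 693 is the freshly started gateway (client.iscsi.iscsi.a) querying the OSD blocklist, matching the "Processing osd blocklist entries for this node" line in its own journal above. The same query can be run by hand, as a sketch:

    # Sketch: list current OSD blocklist entries, as the gateway does at startup.
    ceph osd blocklist ls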
2026-03-10T10:14:30.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:30 vm07 bash[23367]: audit 2026-03-10T10:14:29.615832+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch
2026-03-10T10:14:30.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:30 vm07 bash[23367]: cluster 2026-03-10T10:14:29.650889+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 80 op/s
2026-03-10T10:14:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:30 vm04 bash[28289]: audit 2026-03-10T10:14:29.615832+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch
2026-03-10T10:14:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:30 vm04 bash[28289]: cluster 2026-03-10T10:14:29.650889+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 80 op/s
2026-03-10T10:14:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:30 vm04 bash[20742]: audit 2026-03-10T10:14:29.615832+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch
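Audit entry 694 is unrelated to the iSCSI work: the mgr (presumably the balancer module, which issues osd pg-upmap-items from the active mgr) remaps one replica of PG 2.4 from osd.7 to osd.2. A sketch of the equivalent manual command, with the pgid and OSD pair taken from the payload:

    # Sketch: replace osd.7 with osd.2 in the up set of PG 2.4.
    ceph osd pg-upmap-items 2.4 7 2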
2026-03-10T10:14:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:30 vm04 bash[20742]: cluster 2026-03-10T10:14:29.650889+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v232: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 80 op/s
2026-03-10T10:14:31.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:31 vm07 bash[23367]: audit 2026-03-10T10:14:30.011602+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]': finished
2026-03-10T10:14:31.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:31 vm07 bash[23367]: cluster 2026-03-10T10:14:30.018251+0000 mon.a (mon.0) 696 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-10T10:14:31.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:31 vm07 bash[23367]: cluster 2026-03-10T10:14:30.018996+0000 mon.a (mon.0) 697 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x
2026-03-10T10:14:31.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:31 vm07 bash[23367]: audit 2026-03-10T10:14:30.496345+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:31.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:31 vm07 bash[23367]: cluster 2026-03-10T10:14:30.798205+0000 mon.a (mon.0) 699 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-10T10:14:31.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:31 vm04 bash[28289]: audit 2026-03-10T10:14:30.011602+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]': finished
2026-03-10T10:14:31.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:31 vm04 bash[28289]: cluster 2026-03-10T10:14:30.018251+0000 mon.a (mon.0) 696 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-10T10:14:31.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:31 vm04 bash[28289]: cluster 2026-03-10T10:14:30.018996+0000 mon.a (mon.0) 697 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x
2026-03-10T10:14:31.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:31 vm04 bash[28289]: audit 2026-03-10T10:14:30.496345+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:31.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:31 vm04 bash[28289]: cluster 2026-03-10T10:14:30.798205+0000 mon.a (mon.0) 699 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-10T10:14:31.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:31 vm04 bash[20742]: audit 2026-03-10T10:14:30.011602+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]': finished
2026-03-10T10:14:31.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:31 vm04 bash[20742]: cluster 2026-03-10T10:14:30.018251+0000 mon.a (mon.0) 696 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-10T10:14:31.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:31 vm04 bash[20742]: cluster 2026-03-10T10:14:30.018996+0000 mon.a (mon.0) 697 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x
2026-03-10T10:14:31.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:31 vm04 bash[20742]: audit 2026-03-10T10:14:30.496345+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:31.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:31 vm04 bash[20742]: cluster 2026-03-10T10:14:30.798205+0000 mon.a (mon.0) 699 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-10T10:14:31.747 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config
2026-03-10T10:14:31.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 -- 192.168.123.107:0/586098146 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30bc073b70 msgr2=0x7f30bc073ff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:31.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 --2- 192.168.123.107:0/586098146 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30bc073b70 0x7f30bc073ff0 secure :-1 s=READY pgs=56 cs=0 l=1 rev1=1 crypto rx=0x7f30ac009a80 tx=0x7f30ac02f2d0 comp rx=0 tx=0).stop
2026-03-10T10:14:31.923 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 -- 192.168.123.107:0/586098146 shutdown_connections
2026-03-10T10:14:31.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 --2- 192.168.123.107:0/586098146 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f30bc074530 0x7f30bc07b180 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:31.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 --2- 192.168.123.107:0/586098146 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30bc073b70 0x7f30bc073ff0 unknown :-1 s=CLOSED pgs=56 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:31.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 --2- 192.168.123.107:0/586098146 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f30bc10a850 0x7f30bc10ac50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:31.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 -- 192.168.123.107:0/586098146 >> 192.168.123.107:0/586098146 conn(0x7f30bc06f810 msgr2=0x7f30bc071c50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:31.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 -- 192.168.123.107:0/586098146 shutdown_connections
2026-03-10T10:14:31.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 -- 192.168.123.107:0/586098146 wait complete.
2026-03-10T10:14:31.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1  Processor -- start
2026-03-10T10:14:31.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 -- start start
2026-03-10T10:14:31.924 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30bc073b70 0x7f30bc078a20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f30bc074530 0x7f30bc078f60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f30bc10a850 0x7f30bc079fb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f30bc07e000 con 0x7f30bc074530
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f30bc07de80 con 0x7f30bc073b70
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f30bc07e180 con 0x7f30bc10a850
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30bb7fe640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30bc073b70 0x7f30bc078a20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30bb7fe640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30bc073b70 0x7f30bc078a20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:41430/0 (socket says 192.168.123.107:41430)
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30bb7fe640  1 -- 192.168.123.107:0/2532797325 learned_addr learned my addr 192.168.123.107:0/2532797325 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30bbfff640  1 --2- 192.168.123.107:0/2532797325 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f30bc10a850 0x7f30bc079fb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30baffd640  1 --2- 192.168.123.107:0/2532797325 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f30bc074530 0x7f30bc078f60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30bb7fe640  1 -- 192.168.123.107:0/2532797325 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f30bc10a850 msgr2=0x7f30bc079fb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30bb7fe640  1 --2- 192.168.123.107:0/2532797325 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f30bc10a850 0x7f30bc079fb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30bb7fe640  1 -- 192.168.123.107:0/2532797325 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f30bc074530 msgr2=0x7f30bc078f60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30bb7fe640  1 --2- 192.168.123.107:0/2532797325 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f30bc074530 0x7f30bc078f60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30bb7fe640  1 -- 192.168.123.107:0/2532797325 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f30bc07a6e0 con 0x7f30bc073b70
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30bb7fe640  1 --2- 192.168.123.107:0/2532797325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30bc073b70 0x7f30bc078a20 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f30b000b7c0 tx=0x7f30b000bc90 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:14:31.925 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30b8ff9640  1 -- 192.168.123.107:0/2532797325 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f30b0004270 con 0x7f30bc073b70
2026-03-10T10:14:31.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.917+0000 7f30c2115640  1 -- 192.168.123.107:0/2532797325 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f30bc07a9d0 con 0x7f30bc073b70
2026-03-10T10:14:31.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.921+0000 7f30c2115640  1 -- 192.168.123.107:0/2532797325 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f30bc07aec0 con 0x7f30bc073b70
2026-03-10T10:14:31.926 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.921+0000 7f30c2115640  1 -- 192.168.123.107:0/2532797325 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f30bc10ac50 con 0x7f30bc073b70
2026-03-10T10:14:31.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.925+0000 7f30b8ff9640  1 -- 192.168.123.107:0/2532797325 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f30b0010070 con 0x7f30bc073b70
2026-03-10T10:14:31.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.925+0000 7f30b8ff9640  1 -- 192.168.123.107:0/2532797325 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f30b000ca60 con 0x7f30bc073b70
2026-03-10T10:14:31.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.925+0000 7f30b8ff9640  1 -- 192.168.123.107:0/2532797325 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 16) ==== 100051+0+0 (secure 0 0 0) 0x7f30b000cce0 con 0x7f30bc073b70
2026-03-10T10:14:31.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.925+0000 7f30b8ff9640  1 --2- 192.168.123.107:0/2532797325 >> v2:192.168.123.104:6800/632047608 conn(0x7f308c077760 0x7f308c079c20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:31.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.925+0000 7f30b8ff9640  1 -- 192.168.123.107:0/2532797325 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 5951+0+0 (secure 0 0 0) 0x7f30b0098fe0 con 0x7f30bc073b70
2026-03-10T10:14:31.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.925+0000 7f30b8ff9640  1 -- 192.168.123.107:0/2532797325 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0  v0) ==== 72+0+195034 (secure 0 0 0) 0x7f30b0099300 con 0x7f30bc073b70
2026-03-10T10:14:31.933 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.925+0000 7f30baffd640  1 --2- 192.168.123.107:0/2532797325 >> v2:192.168.123.104:6800/632047608 conn(0x7f308c077760 0x7f308c079c20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:31.936 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:31.929+0000 7f30baffd640  1 --2- 192.168.123.107:0/2532797325 >> v2:192.168.123.104:6800/632047608 conn(0x7f308c077760 0x7f308c079c20 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f30bc079f40 tx=0x7f30ac005e50 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:14:32.052 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.045+0000 7f30c2115640  1 -- 192.168.123.107:0/2532797325 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}) -- 0x7f30bc0630c0 con 0x7f308c077760
2026-03-10T10:14:32.059 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.053+0000 7f30b8ff9640  1 -- 192.168.123.107:0/2532797325 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+31 (secure 0 0 0) 0x7f30bc0630c0 con 0x7f308c077760
2026-03-10T10:14:32.059 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled prometheus update...
2026-03-10T10:14:32.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 -- 192.168.123.107:0/2532797325 >> v2:192.168.123.104:6800/632047608 conn(0x7f308c077760 msgr2=0x7f308c079c20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:32.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 --2- 192.168.123.107:0/2532797325 >> v2:192.168.123.104:6800/632047608 conn(0x7f308c077760 0x7f308c079c20 secure :-1 s=READY pgs=119 cs=0 l=1 rev1=1 crypto rx=0x7f30bc079f40 tx=0x7f30ac005e50 comp rx=0 tx=0).stop
2026-03-10T10:14:32.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 -- 192.168.123.107:0/2532797325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30bc073b70 msgr2=0x7f30bc078a20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:32.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 --2- 192.168.123.107:0/2532797325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30bc073b70 0x7f30bc078a20 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f30b000b7c0 tx=0x7f30b000bc90 comp rx=0 tx=0).stop
2026-03-10T10:14:32.063 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 -- 192.168.123.107:0/2532797325 shutdown_connections
2026-03-10T10:14:32.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 --2- 192.168.123.107:0/2532797325 >> v2:192.168.123.104:6800/632047608 conn(0x7f308c077760 0x7f308c079c20 unknown :-1 s=CLOSED pgs=119 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:32.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 --2- 192.168.123.107:0/2532797325 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f30bc10a850 0x7f30bc079fb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:32.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 --2- 192.168.123.107:0/2532797325 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f30bc074530 0x7f30bc078f60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:32.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 --2- 192.168.123.107:0/2532797325 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f30bc073b70 0x7f30bc078a20 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:32.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 -- 192.168.123.107:0/2532797325 >> 192.168.123.107:0/2532797325 conn(0x7f30bc06f810 msgr2=0x7f30bc071570 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:32.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 -- 192.168.123.107:0/2532797325 shutdown_connections
2026-03-10T10:14:32.064 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:32.057+0000 7f30c2115640  1 -- 192.168.123.107:0/2532797325 wait complete.
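The messenger trace above is a one-shot CLI client: it connects to mon.b, fetches the command descriptions, sends a single mgr_command (tid 0), and tears all connections down again. The payload of that mgr_command corresponds to something like the following sketch, using the same positional placement form as the node-exporter call below:

    # Sketch: schedule one prometheus daemon named 'a' on vm07.
    ceph orch apply prometheus '1;vm07=a'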
2026-03-10T10:14:32.122 DEBUG:teuthology.orchestra.run.vm07:prometheus.a> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@prometheus.a.service
2026-03-10T10:14:32.122 INFO:tasks.cephadm:Adding node-exporter.a on vm04
2026-03-10T10:14:32.122 INFO:tasks.cephadm:Adding node-exporter.b on vm07
2026-03-10T10:14:32.123 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch apply node-exporter '2;vm04=a;vm07=b'
2026-03-10T10:14:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:32 vm04 bash[28289]: cluster 2026-03-10T10:14:31.651278+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v235: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 1023 B/s wr, 35 op/s
2026-03-10T10:14:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:32 vm04 bash[20742]: cluster 2026-03-10T10:14:31.651278+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v235: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 1023 B/s wr, 35 op/s
2026-03-10T10:14:32.476 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:32 vm07 bash[23367]: cluster 2026-03-10T10:14:31.651278+0000 mgr.y (mgr.14150) 255 : cluster [DBG] pgmap v235: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 1023 B/s wr, 35 op/s
2026-03-10T10:14:32.765 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:32 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
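The node-exporter call above uses cephadm's compact placement syntax, '<count>;<host>=<daemon-id>;...': two daemons, a on vm04 and b on vm07, matching the "Saving service ... spec" lines the mgr echoes back. A sketch of how the result can be checked once the scheduler converges:

    # Sketch: confirm the service spec and the per-host daemons it produced.
    ceph orch ls node-exporter
    ceph orch ps --daemon-type node-exporter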
2026-03-10T10:14:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:33 vm04 bash[28289]: audit 2026-03-10T10:14:32.054107+0000 mgr.y (mgr.14150) 256 : audit [DBG] from='client.24418 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:33 vm04 bash[28289]: cephadm 2026-03-10T10:14:32.055120+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Saving service prometheus spec with placement vm07=a;count:1
2026-03-10T10:14:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:33 vm04 bash[28289]: audit 2026-03-10T10:14:32.058414+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:33 vm04 bash[28289]: audit 2026-03-10T10:14:32.079762+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:33 vm04 bash[28289]: audit 2026-03-10T10:14:32.085090+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:33 vm04 bash[28289]: audit 2026-03-10T10:14:32.086199+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:33 vm04 bash[28289]: audit 2026-03-10T10:14:32.086715+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:33 vm04 bash[28289]: audit 2026-03-10T10:14:32.090580+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:33 vm04 bash[28289]: cephadm 2026-03-10T10:14:32.248368+0000 mgr.y (mgr.14150) 258 : cephadm [INF] Deploying daemon prometheus.a on vm07
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:33 vm04 bash[20742]: audit 2026-03-10T10:14:32.054107+0000 mgr.y (mgr.14150) 256 : audit [DBG] from='client.24418 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:33 vm04 bash[20742]: cephadm 2026-03-10T10:14:32.055120+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Saving service prometheus spec with placement vm07=a;count:1
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:33 vm04 bash[20742]: audit 2026-03-10T10:14:32.058414+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:33 vm04 bash[20742]: audit 2026-03-10T10:14:32.079762+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:33 vm04 bash[20742]: audit 2026-03-10T10:14:32.085090+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:33 vm04 bash[20742]: audit 2026-03-10T10:14:32.086199+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:33 vm04 bash[20742]: audit 2026-03-10T10:14:32.086715+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:33 vm04 bash[20742]: audit 2026-03-10T10:14:32.090580+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:33 vm04 bash[20742]: cephadm 2026-03-10T10:14:32.248368+0000 mgr.y (mgr.14150) 258 : cephadm [INF] Deploying daemon prometheus.a on vm07
2026-03-10T10:14:33.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:33 vm07 bash[23367]: audit 2026-03-10T10:14:32.054107+0000 mgr.y (mgr.14150) 256 : audit [DBG] from='client.24418 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:33.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:33 vm07 bash[23367]: cephadm 2026-03-10T10:14:32.055120+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Saving service prometheus spec with placement vm07=a;count:1
2026-03-10T10:14:33.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:33 vm07 bash[23367]: audit 2026-03-10T10:14:32.058414+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:33 vm07 bash[23367]: audit 2026-03-10T10:14:32.079762+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:33 vm07 bash[23367]: audit 2026-03-10T10:14:32.085090+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:33 vm07 bash[23367]: audit 2026-03-10T10:14:32.086199+0000 mon.a (mon.0) 703 : audit [DBG] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:33.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:33 vm07 bash[23367]: audit 2026-03-10T10:14:32.086715+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:33.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:33 vm07 bash[23367]: audit 2026-03-10T10:14:32.090580+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:33.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:33 vm07 bash[23367]: cephadm 2026-03-10T10:14:32.248368+0000 mgr.y (mgr.14150) 258 : cephadm [INF] Deploying daemon prometheus.a on vm07
2026-03-10T10:14:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:34 vm04 bash[28289]: cluster 2026-03-10T10:14:33.651748+0000 mgr.y (mgr.14150) 259 : cluster [DBG] pgmap v236: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 127 B/s wr, 1 op/s
2026-03-10T10:14:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:34 vm04 bash[20742]: cluster 2026-03-10T10:14:33.651748+0000 mgr.y (mgr.14150) 259 : cluster [DBG] pgmap v236: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 127 B/s wr, 1 op/s
2026-03-10T10:14:34.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:34 vm07 bash[23367]: cluster 2026-03-10T10:14:33.651748+0000 mgr.y (mgr.14150) 259 : cluster [DBG] pgmap v236: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 127 B/s wr, 1 op/s
2026-03-10T10:14:36.882 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config
2026-03-10T10:14:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:36 vm04 bash[28289]: cluster 2026-03-10T10:14:35.652106+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 127 B/s wr, 1 op/s
2026-03-10T10:14:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:36 vm04 bash[20742]: cluster 2026-03-10T10:14:35.652106+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 127 B/s wr, 1 op/s
2026-03-10T10:14:37.290 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:36 vm07 bash[23367]: cluster 2026-03-10T10:14:35.652106+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 127 B/s wr, 1 op/s
2026-03-10T10:14:37.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.433+0000 7fc441610640 1 -- 192.168.123.107:0/1153389813 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc43c10a850 msgr2=0x7fc43c10ac50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:37.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.433+0000 7fc441610640 1 --2- 192.168.123.107:0/1153389813 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc43c10a850 0x7fc43c10ac50 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7fc43400b3e0 tx=0x7fc43402f6c0 comp rx=0 tx=0).stop
2026-03-10T10:14:37.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.433+0000 7fc441610640 1 -- 192.168.123.107:0/1153389813 shutdown_connections
2026-03-10T10:14:37.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.433+0000 7fc441610640 1 --2- 192.168.123.107:0/1153389813 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc43c074530 0x7fc43c07b180 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:37.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.433+0000 7fc441610640 1 --2- 192.168.123.107:0/1153389813 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc43c073b70 0x7fc43c073ff0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:37.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.433+0000 7fc441610640 1 --2- 192.168.123.107:0/1153389813 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc43c10a850 0x7fc43c10ac50 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:37.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.433+0000 7fc441610640 1 -- 192.168.123.107:0/1153389813 >> 192.168.123.107:0/1153389813 conn(0x7fc43c06f810 msgr2=0x7fc43c071c50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:37.441 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.433+0000 7fc441610640 1 -- 192.168.123.107:0/1153389813 shutdown_connections
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 -- 192.168.123.107:0/1153389813 wait complete.
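The journal traffic above is the complete round trip for the previous service: the admin client dispatches "orch apply" for prometheus ('1;vm07=a'), mgr.y saves the spec (vm07=a;count:1) and logs "Deploying daemon prometheus.a on vm07", while the mon audit log records the config and keyring fetches (config generate-minimal-conf, auth get client.admin) cephadm needs to lay files down on the target host. Once the apply is scheduled, the result could be checked from any admin node; a sketch:

    ceph orch ls prometheus
    ceph orch ps --daemon-type prometheus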
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 Processor -- start
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 -- start start
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc43c073b70 0x7fc43c083a70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc43c074530 0x7fc43c083fb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc43c10a850 0x7fc43c085040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fc43c07e000 con 0x7fc43c10a850
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fc43c07de80 con 0x7fc43c074530
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fc43c07e180 con 0x7fc43c073b70
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc43b7fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc43c10a850 0x7fc43c085040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc43b7fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc43c10a850 0x7fc43c085040 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.107:46842/0 (socket says 192.168.123.107:46842)
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc43b7fe640 1 -- 192.168.123.107:0/3248560374 learned_addr learned my addr 192.168.123.107:0/3248560374 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc43affd640 1 --2- 192.168.123.107:0/3248560374 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc43c073b70 0x7fc43c083a70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc43b7fe640 1 -- 192.168.123.107:0/3248560374 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc43c073b70 msgr2=0x7fc43c083a70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc43b7fe640 1 --2- 192.168.123.107:0/3248560374 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc43c073b70 0x7fc43c083a70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc43b7fe640 1 -- 192.168.123.107:0/3248560374 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc43c074530 msgr2=0x7fc43c083fb0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc43b7fe640 1 --2- 192.168.123.107:0/3248560374 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc43c074530 0x7fc43c083fb0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc43b7fe640 1 -- 192.168.123.107:0/3248560374 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc43c085740 con 0x7fc43c10a850
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc43b7fe640 1 --2- 192.168.123.107:0/3248560374 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc43c10a850 0x7fc43c085040 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7fc43000b550 tx=0x7fc43000ba20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc41ffff640 1 -- 192.168.123.107:0/3248560374 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fc430013020 con 0x7fc43c10a850
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 -- 192.168.123.107:0/3248560374 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fc43c085a30 con 0x7fc43c10a850
2026-03-10T10:14:37.443 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc441610640 1 -- 192.168.123.107:0/3248560374 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fc43c085f70 con 0x7fc43c10a850
2026-03-10T10:14:37.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc41ffff640 1 -- 192.168.123.107:0/3248560374 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fc430004480 con 0x7fc43c10a850
2026-03-10T10:14:37.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.437+0000 7fc41ffff640 1 -- 192.168.123.107:0/3248560374 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fc43000f9e0 con 0x7fc43c10a850
2026-03-10T10:14:37.445 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.441+0000 7fc41ffff640 1 -- 192.168.123.107:0/3248560374 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 16) ==== 100051+0+0 (secure 0 0 0) 0x7fc430020020 con 0x7fc43c10a850
2026-03-10T10:14:37.446 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.441+0000 7fc41ffff640 1 --2- 192.168.123.107:0/3248560374 >> v2:192.168.123.104:6800/632047608 conn(0x7fc420077660 0x7fc420079b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:37.446 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.441+0000 7fc41ffff640 1 -- 192.168.123.107:0/3248560374 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(64..64 src has 1..64) ==== 5951+0+0 (secure 0 0 0) 0x7fc430099d80 con 0x7fc43c10a850
2026-03-10T10:14:37.446 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.441+0000 7fc43affd640 1 --2- 192.168.123.107:0/3248560374 >> v2:192.168.123.104:6800/632047608 conn(0x7fc420077660 0x7fc420079b20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:37.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.441+0000 7fc43affd640 1 --2- 192.168.123.107:0/3248560374 >> v2:192.168.123.104:6800/632047608 conn(0x7fc420077660 0x7fc420079b20 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fc434009e60 tx=0x7fc434005b50 comp rx=0 tx=0).ready entity=mgr.14150 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:14:37.448 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.441+0000 7fc441610640 1 -- 192.168.123.107:0/3248560374 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc43c07e7e0 con 0x7fc43c10a850
2026-03-10T10:14:37.452 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.445+0000 7fc41ffff640 1 -- 192.168.123.107:0/3248560374 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fc430062e40 con 0x7fc43c10a850
2026-03-10T10:14:37.589 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.581+0000 7fc441610640 1 -- 192.168.123.107:0/3248560374 --> v2:192.168.123.104:6800/632047608 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm04=a;vm07=b", "target": ["mon-mgr", ""]}) -- 0x7fc43c073ff0 con 0x7fc420077660
2026-03-10T10:14:37.597 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.589+0000 7fc41ffff640 1 -- 192.168.123.107:0/3248560374 <== mgr.14150 v2:192.168.123.104:6800/632047608 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+34 (secure 0 0 0) 0x7fc43c073ff0 con 0x7fc420077660
2026-03-10T10:14:37.597 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled node-exporter update...
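This block is the standard bootstrap of a fresh short-lived CLI session: connect to the mons over msgr2, learn our own address from the peer's hello, subscribe to config, monmap, mgrmap, and osdmap, fetch get_command_descriptions, and only then forward the real request as mgr_command(tid 0: {"prefix": "orch apply", ...}) to the active mgr at v2:192.168.123.104:6800. The one-line mgr_command_reply is what the shell prints as "Scheduled node-exporter update...". The spec that was just saved could then be read back in YAML form; a sketch:

    ceph orch ls node-exporter --export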
2026-03-10T10:14:37.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.597+0000 7fc41dffb640 1 -- 192.168.123.107:0/3248560374 >> v2:192.168.123.104:6800/632047608 conn(0x7fc420077660 msgr2=0x7fc420079b20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:37.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.597+0000 7fc41dffb640 1 --2- 192.168.123.107:0/3248560374 >> v2:192.168.123.104:6800/632047608 conn(0x7fc420077660 0x7fc420079b20 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7fc434009e60 tx=0x7fc434005b50 comp rx=0 tx=0).stop
2026-03-10T10:14:37.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.597+0000 7fc41dffb640 1 -- 192.168.123.107:0/3248560374 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc43c10a850 msgr2=0x7fc43c085040 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:37.602 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.597+0000 7fc41dffb640 1 --2- 192.168.123.107:0/3248560374 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc43c10a850 0x7fc43c085040 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7fc43000b550 tx=0x7fc43000ba20 comp rx=0 tx=0).stop
2026-03-10T10:14:37.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.597+0000 7fc41dffb640 1 -- 192.168.123.107:0/3248560374 shutdown_connections
2026-03-10T10:14:37.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.597+0000 7fc41dffb640 1 --2- 192.168.123.107:0/3248560374 >> v2:192.168.123.104:6800/632047608 conn(0x7fc420077660 0x7fc420079b20 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:37.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.597+0000 7fc41dffb640 1 --2- 192.168.123.107:0/3248560374 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc43c10a850 0x7fc43c085040 unknown :-1 s=CLOSED pgs=142 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:37.609 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.597+0000 7fc41dffb640 1 --2- 192.168.123.107:0/3248560374 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc43c074530 0x7fc43c083fb0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:37.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.597+0000 7fc41dffb640 1 --2- 192.168.123.107:0/3248560374 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc43c073b70 0x7fc43c083a70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:37.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.597+0000 7fc41dffb640 1 -- 192.168.123.107:0/3248560374 >> 192.168.123.107:0/3248560374 conn(0x7fc43c06f810 msgr2=0x7fc43c0797f0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:37.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.605+0000 7fc41dffb640 1 -- 192.168.123.107:0/3248560374 shutdown_connections
2026-03-10T10:14:37.610 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:37.605+0000 7fc41dffb640 1 -- 192.168.123.107:0/3248560374 wait complete.
2026-03-10T10:14:37.812 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:37.812 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:37.812 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:37.813 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:37.813 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:37.813 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:37.813 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:37.813 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:38.098 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:38.098 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:38.098 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 systemd[1]: Started Ceph prometheus.a for e4c1c9d6-1c68-11f1-a9bd-116050875839.
2026-03-10T10:14:38.098 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:38.098 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:38.098 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
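The repeated systemd complaint refers to line 23 of the cephadm unit template ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service, which sets KillMode=none (the container runtime, not systemd, is expected to stop the daemon's processes). It is harmless noise for this job. On a host where the warning matters, systemd's own suggestion would translate into a drop-in override; a rough sketch only, since cephadm may regenerate the unit:

    sudo systemctl edit ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service
    # in the editor, add:
    #   [Service]
    #   KillMode=mixed
    sudo systemctl daemon-reload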
2026-03-10T10:14:38.098 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:38.099 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:38.099 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:14:37 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:38.209 DEBUG:teuthology.orchestra.run.vm04:node-exporter.a> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@node-exporter.a.service
2026-03-10T10:14:38.210 DEBUG:teuthology.orchestra.run.vm07:node-exporter.b> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@node-exporter.b.service
2026-03-10T10:14:38.211 INFO:tasks.cephadm:Adding alertmanager.a on vm04
2026-03-10T10:14:38.211 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch apply alertmanager '1;vm04=a'
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.318Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.318Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.318Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm07 (none))"
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.318Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.318Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.321Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.322Z caller=main.go:1129 level=info msg="Starting TSDB ..."
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.326Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.326Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.327Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.327Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.333µs
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.327Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.327Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.327Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=17.503µs wal_replay_duration=166.46µs wbl_replay_duration=120ns total_replay_duration=195.134µs
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.328Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.328Z caller=main.go:1153 level=info msg="TSDB started"
2026-03-10T10:14:38.354 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.328Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
2026-03-10T10:14:38.354 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:14:38.515 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.351Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=22.303519ms db_storage=902ns remote_storage=1.713µs web_handler=300ns query_engine=731ns scrape=1.642935ms scrape_sd=132.887µs notify=1.002µs notify_sd=710ns rules=20.296746ms tracing=7.313µs
2026-03-10T10:14:38.515 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.351Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
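Prometheus 2.51.0 came up cleanly here: listening on :9095 with TLS disabled, WAL replay in microseconds, and /etc/prometheus/prometheus.yml loaded in about 22 ms before "Server is ready to receive web requests." A quick probe against Prometheus's standard health endpoints would confirm the same from a shell (host and port taken from this run):

    curl -s http://vm07:9095/-/healthy
    curl -s http://vm07:9095/-/ready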
2026-03-10T10:14:38.515 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:38 vm07 bash[49439]: ts=2026-03-10T10:14:38.351Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
2026-03-10T10:14:38.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:38 vm07 bash[23367]: audit 2026-03-10T10:14:37.589951+0000 mgr.y (mgr.14150) 261 : audit [DBG] from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm04=a;vm07=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:38.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:38 vm07 bash[23367]: cephadm 2026-03-10T10:14:37.590774+0000 mgr.y (mgr.14150) 262 : cephadm [INF] Saving service node-exporter spec with placement vm04=a;vm07=b;count:2
2026-03-10T10:14:38.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:38 vm07 bash[23367]: audit 2026-03-10T10:14:37.596660+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:38 vm07 bash[23367]: cluster 2026-03-10T10:14:37.652466+0000 mgr.y (mgr.14150) 263 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 127 B/s wr, 2 op/s
2026-03-10T10:14:38.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:38 vm07 bash[23367]: audit 2026-03-10T10:14:38.195953+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:38 vm07 bash[23367]: audit 2026-03-10T10:14:38.202358+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:38 vm07 bash[23367]: audit 2026-03-10T10:14:38.208736+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:38 vm07 bash[23367]: audit 2026-03-10T10:14:38.211099+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-10T10:14:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:38 vm04 bash[28289]: audit 2026-03-10T10:14:37.589951+0000 mgr.y (mgr.14150) 261 : audit [DBG] from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm04=a;vm07=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:38 vm04 bash[28289]: cephadm 2026-03-10T10:14:37.590774+0000 mgr.y (mgr.14150) 262 : cephadm [INF] Saving service node-exporter spec with placement vm04=a;vm07=b;count:2
2026-03-10T10:14:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:38 vm04 bash[28289]: audit 2026-03-10T10:14:37.596660+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:38 vm04 bash[28289]: cluster 2026-03-10T10:14:37.652466+0000 mgr.y (mgr.14150) 263 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 127 B/s wr, 2 op/s
2026-03-10T10:14:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:38 vm04 bash[28289]: audit 2026-03-10T10:14:38.195953+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:38 vm04 bash[28289]: audit 2026-03-10T10:14:38.202358+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:38 vm04 bash[28289]: audit 2026-03-10T10:14:38.208736+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:38 vm04 bash[28289]: audit 2026-03-10T10:14:38.211099+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-10T10:14:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:38 vm04 bash[20742]: audit 2026-03-10T10:14:37.589951+0000 mgr.y (mgr.14150) 261 : audit [DBG] from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm04=a;vm07=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:38 vm04 bash[20742]: cephadm 2026-03-10T10:14:37.590774+0000 mgr.y (mgr.14150) 262 : cephadm [INF] Saving service node-exporter spec with placement vm04=a;vm07=b;count:2
2026-03-10T10:14:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:38 vm04 bash[20742]: audit 2026-03-10T10:14:37.596660+0000 mon.a (mon.0) 706 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:38 vm04 bash[20742]: cluster 2026-03-10T10:14:37.652466+0000 mgr.y (mgr.14150) 263 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 127 B/s wr, 2 op/s
2026-03-10T10:14:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:38 vm04 bash[20742]: audit 2026-03-10T10:14:38.195953+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:38 vm04 bash[20742]: audit 2026-03-10T10:14:38.202358+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:38 vm04 bash[20742]: audit 2026-03-10T10:14:38.208736+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y'
2026-03-10T10:14:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:38 vm04 bash[20742]: audit 2026-03-10T10:14:38.211099+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-10T10:14:39.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:39 vm07 bash[24071]: ignoring --setuser ceph since I am not root
2026-03-10T10:14:39.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:39 vm07 bash[24071]: ignoring --setgroup ceph since I am not root
2026-03-10T10:14:39.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:39 vm07 bash[24071]: debug 2026-03-10T10:14:39.329+0000 7fcf5ab0f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T10:14:39.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:39 vm07 bash[24071]: debug 2026-03-10T10:14:39.365+0000 7fcf5ab0f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T10:14:39.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:39 vm07 bash[24071]: debug 2026-03-10T10:14:39.477+0000 7fcf5ab0f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T10:14:39.606 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:39 vm04 bash[20997]: ignoring --setuser ceph since I am not root
2026-03-10T10:14:39.606 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:39 vm04 bash[20997]: ignoring --setgroup ceph since I am not root
2026-03-10T10:14:39.606 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:39 vm04 bash[20997]: debug 2026-03-10T10:14:39.321+0000 7f2a8e049140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T10:14:39.606 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:39 vm04 bash[20997]: debug 2026-03-10T10:14:39.357+0000 7f2a8e049140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T10:14:39.606 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:39 vm04 bash[20997]: debug 2026-03-10T10:14:39.473+0000 7f2a8e049140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T10:14:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:39 vm04 bash[28289]: audit 2026-03-10T10:14:38.099121+0000 mgr.y (mgr.14150) 264 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:39 vm04 bash[28289]: audit 2026-03-10T10:14:39.215884+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-10T10:14:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:39 vm04 bash[28289]: cluster 2026-03-10T10:14:39.233807+0000 mon.a (mon.0) 712 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x
2026-03-10T10:14:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:39 vm04 bash[20742]: audit 2026-03-10T10:14:38.099121+0000 mgr.y (mgr.14150) 264 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:39 vm04 bash[20742]: audit 2026-03-10T10:14:39.215884+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-10T10:14:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:39 vm04 bash[20742]: cluster 2026-03-10T10:14:39.233807+0000 mon.a (mon.0) 712 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x
vm04 bash[20742]: cluster 2026-03-10T10:14:39.233807+0000 mon.a (mon.0) 712 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-10T10:14:39.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:39 vm04 bash[20997]: debug 2026-03-10T10:14:39.745+0000 7f2a8e049140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T10:14:40.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:39 vm07 bash[23367]: audit 2026-03-10T10:14:38.099121+0000 mgr.y (mgr.14150) 264 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:14:40.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:39 vm07 bash[23367]: audit 2026-03-10T10:14:38.099121+0000 mgr.y (mgr.14150) 264 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:14:40.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:39 vm07 bash[23367]: audit 2026-03-10T10:14:39.215884+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T10:14:40.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:39 vm07 bash[23367]: audit 2026-03-10T10:14:39.215884+0000 mon.a (mon.0) 711 : audit [INF] from='mgr.14150 192.168.123.104:0/1910879500' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T10:14:40.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:39 vm07 bash[23367]: cluster 2026-03-10T10:14:39.233807+0000 mon.a (mon.0) 712 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-10T10:14:40.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:39 vm07 bash[23367]: cluster 2026-03-10T10:14:39.233807+0000 mon.a (mon.0) 712 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x 2026-03-10T10:14:40.015 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:39 vm07 bash[24071]: debug 2026-03-10T10:14:39.749+0000 7fcf5ab0f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T10:14:40.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: debug 2026-03-10T10:14:40.169+0000 7f2a8e049140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T10:14:40.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: debug 2026-03-10T10:14:40.249+0000 7f2a8e049140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T10:14:40.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T10:14:40.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T10:14:40.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: from numpy import show_config as show_numpy_config
2026-03-10T10:14:40.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: debug 2026-03-10T10:14:40.365+0000 7f2a8e049140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T10:14:40.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: debug 2026-03-10T10:14:40.201+0000 7fcf5ab0f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T10:14:40.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: debug 2026-03-10T10:14:40.285+0000 7fcf5ab0f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T10:14:40.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T10:14:40.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T10:14:40.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: from numpy import show_config as show_numpy_config
2026-03-10T10:14:40.515 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: debug 2026-03-10T10:14:40.405+0000 7fcf5ab0f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T10:14:40.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: debug 2026-03-10T10:14:40.497+0000 7f2a8e049140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T10:14:40.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: debug 2026-03-10T10:14:40.533+0000 7f2a8e049140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T10:14:40.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: debug 2026-03-10T10:14:40.569+0000 7f2a8e049140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T10:14:40.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: debug 2026-03-10T10:14:40.617+0000 7f2a8e049140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T10:14:40.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:40 vm04 bash[20997]: debug 2026-03-10T10:14:40.669+0000 7f2a8e049140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T10:14:41.015 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: debug 2026-03-10T10:14:40.533+0000 7fcf5ab0f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T10:14:41.015 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: debug 2026-03-10T10:14:40.569+0000 7fcf5ab0f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T10:14:41.015 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: debug 2026-03-10T10:14:40.605+0000 7fcf5ab0f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T10:14:41.015 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: debug 2026-03-10T10:14:40.645+0000 7fcf5ab0f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T10:14:41.015 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:40 vm07 bash[24071]: debug 2026-03-10T10:14:40.701+0000 7fcf5ab0f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T10:14:41.378 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.097+0000 7f2a8e049140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T10:14:41.378 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.133+0000 7f2a8e049140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T10:14:41.378 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.165+0000 7f2a8e049140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T10:14:41.378 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.297+0000 7f2a8e049140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T10:14:41.378 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.337+0000 7f2a8e049140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T10:14:41.429 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.137+0000 7fcf5ab0f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T10:14:41.429 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.173+0000 7fcf5ab0f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T10:14:41.429 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.205+0000 7fcf5ab0f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T10:14:41.429 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.341+0000 7fcf5ab0f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T10:14:41.429 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.385+0000 7fcf5ab0f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T10:14:41.634 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.373+0000 7f2a8e049140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T10:14:41.634 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.481+0000 7f2a8e049140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T10:14:41.693 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.421+0000 7fcf5ab0f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T10:14:41.693 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.533+0000 7fcf5ab0f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T10:14:41.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.629+0000 7f2a8e049140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T10:14:41.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.801+0000 7f2a8e049140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T10:14:41.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.833+0000 7f2a8e049140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T10:14:41.953 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:41 vm04 bash[20997]: debug 2026-03-10T10:14:41.877+0000 7f2a8e049140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T10:14:42.015 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.685+0000 7fcf5ab0f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T10:14:42.015 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.853+0000 7fcf5ab0f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T10:14:42.015 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.889+0000 7fcf5ab0f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T10:14:42.015 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:41 vm07 bash[24071]: debug 2026-03-10T10:14:41.925+0000 7fcf5ab0f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T10:14:42.327 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:42 vm04 bash[20997]: debug 2026-03-10T10:14:42.021+0000 7f2a8e049140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T10:14:42.327 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:42 vm04 bash[20997]: debug 2026-03-10T10:14:42.249+0000 7f2a8e049140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: cluster 2026-03-10T10:14:42.257018+0000 mon.a (mon.0) 713 : cluster [INF] Active manager daemon y restarted
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: cluster 2026-03-10T10:14:42.257261+0000 mon.a (mon.0) 714 : cluster [INF] Activating manager daemon y
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: cluster 2026-03-10T10:14:42.280068+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: cluster 2026-03-10T10:14:42.287180+0000 mon.a (mon.0) 716 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0299965s), standbys: x
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.291575+0000 mon.a (mon.0) 717 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.291622+0000 mon.a (mon.0) 718 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.291654+0000 mon.a (mon.0) 719 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.291696+0000 mon.a (mon.0) 720 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.291767+0000 mon.a (mon.0) 721 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.291997+0000 mon.a (mon.0) 722 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:14:42.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.292100+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.292148+0000 mon.a (mon.0) 724 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.292189+0000 mon.a (mon.0) 725 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.292249+0000 mon.a (mon.0) 726 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.292292+0000 mon.a (mon.0) 727 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.292331+0000 mon.a (mon.0) 728 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.292561+0000 mon.a (mon.0) 729 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.293541+0000 mon.a (mon.0) 730 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.293629+0000 mon.a (mon.0) 731 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.293856+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: cluster 2026-03-10T10:14:42.300967+0000 mon.a (mon.0) 733 : cluster [INF] Manager daemon y is now available
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: cluster 2026-03-10T10:14:42.314154+0000 mon.a (mon.0) 734 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: cluster 2026-03-10T10:14:42.314353+0000 mon.a (mon.0) 735 : cluster [DBG] Standby manager daemon x started
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.317066+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.318364+0000 mon.b (mon.1) 26 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.318727+0000 mon.b (mon.1) 27 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.319365+0000 mon.b (mon.1) 28 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:42 vm07 bash[23367]: audit 2026-03-10T10:14:42.319684+0000 mon.b (mon.1) 29 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:42 vm07 bash[24071]: debug 2026-03-10T10:14:42.073+0000 7fcf5ab0f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:42 vm07 bash[24071]: debug 2026-03-10T10:14:42.305+0000 7fcf5ab0f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:42 vm07 bash[24071]: [10/Mar/2026:10:14:42] ENGINE Bus STARTING
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:42 vm07 bash[24071]: CherryPy Checker:
2026-03-10T10:14:42.375 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:42 vm07 bash[24071]: The Application mounted at '' has an empty config.
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: cluster 2026-03-10T10:14:42.257018+0000 mon.a (mon.0) 713 : cluster [INF] Active manager daemon y restarted
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: cluster 2026-03-10T10:14:42.257261+0000 mon.a (mon.0) 714 : cluster [INF] Activating manager daemon y
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: cluster 2026-03-10T10:14:42.280068+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: cluster 2026-03-10T10:14:42.287180+0000 mon.a (mon.0) 716 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0299965s), standbys: x
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.291575+0000 mon.a (mon.0) 717 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.291622+0000 mon.a (mon.0) 718 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.291654+0000 mon.a (mon.0) 719 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.291696+0000 mon.a (mon.0) 720 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.291767+0000 mon.a (mon.0) 721 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.291997+0000 mon.a (mon.0) 722 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.292100+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.292148+0000 mon.a (mon.0) 724 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.292189+0000 mon.a (mon.0) 725 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.292249+0000 mon.a (mon.0) 726 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.292292+0000 mon.a (mon.0) 727 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.292331+0000 mon.a (mon.0) 728 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T10:14:42.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.292561+0000 mon.a (mon.0) 729 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.293541+0000 mon.a (mon.0) 730 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.293629+0000 mon.a (mon.0) 731 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.293856+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: cluster 2026-03-10T10:14:42.300967+0000 mon.a (mon.0) 733 : cluster [INF] Manager daemon y is now available
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: cluster 2026-03-10T10:14:42.314154+0000 mon.a (mon.0) 734 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: cluster 2026-03-10T10:14:42.314353+0000 mon.a (mon.0) 735 : cluster [DBG] Standby manager daemon x started
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.317066+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.318364+0000 mon.b (mon.1) 26 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.318727+0000 mon.b (mon.1) 27 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.319365+0000 mon.b (mon.1) 28 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:42 vm04 bash[20742]: audit 2026-03-10T10:14:42.319684+0000 mon.b (mon.1) 29 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:42 vm04 bash[20997]: [10/Mar/2026:10:14:42] ENGINE Bus STARTING
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:42 vm04 bash[20997]: CherryPy Checker:
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:42 vm04 bash[20997]: The Application mounted at '' has an empty config.
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:42 vm04 bash[20997]: [10/Mar/2026:10:14:42] ENGINE Serving on http://:::9283
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:42 vm04 bash[20997]: [10/Mar/2026:10:14:42] ENGINE Bus STARTED
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: cluster 2026-03-10T10:14:42.257018+0000 mon.a (mon.0) 713 : cluster [INF] Active manager daemon y restarted
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: cluster 2026-03-10T10:14:42.257261+0000 mon.a (mon.0) 714 : cluster [INF] Activating manager daemon y
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: cluster 2026-03-10T10:14:42.280068+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: cluster 2026-03-10T10:14:42.287180+0000 mon.a (mon.0) 716 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0299965s), standbys: x
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.291575+0000 mon.a (mon.0) 717 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.291622+0000 mon.a (mon.0) 718 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.291654+0000 mon.a (mon.0) 719 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.291696+0000 mon.a (mon.0) 720 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.291767+0000 mon.a (mon.0) 721 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.291997+0000 mon.a (mon.0) 722 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.292100+0000 mon.a (mon.0) 723 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.292148+0000 mon.a (mon.0) 724 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T10:14:42.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.292189+0000 mon.a (mon.0) 725 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.292249+0000 mon.a (mon.0) 726 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.292292+0000 mon.a (mon.0) 727 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.292331+0000 mon.a (mon.0) 728 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.292561+0000 mon.a (mon.0) 729 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.293541+0000 mon.a (mon.0) 730 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.293629+0000 mon.a (mon.0) 731 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.293856+0000 mon.a (mon.0) 732 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: cluster 2026-03-10T10:14:42.300967+0000 mon.a (mon.0) 733 : cluster [INF] Manager daemon y is now available
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: cluster 2026-03-10T10:14:42.314154+0000 mon.a (mon.0) 734 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: cluster 2026-03-10T10:14:42.314353+0000 mon.a (mon.0) 735 : cluster [DBG] Standby manager daemon x started
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.317066+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.318364+0000 mon.b (mon.1) 26 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.318727+0000 mon.b (mon.1) 27 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.319365+0000 mon.b (mon.1) 28 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.319684+0000 mon.b (mon.1) 29 : audit [DBG] from='mgr.? 192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T10:14:42.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:42 vm04 bash[28289]: audit 2026-03-10T10:14:42.319684+0000 mon.b (mon.1) 29 : audit [DBG] from='mgr.?
192.168.123.107:0/2040972295' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T10:14:42.765 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:42 vm07 bash[24071]: [10/Mar/2026:10:14:42] ENGINE Serving on http://:::9283 2026-03-10T10:14:42.765 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:42 vm07 bash[24071]: [10/Mar/2026:10:14:42] ENGINE Bus STARTED 2026-03-10T10:14:42.899 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config 2026-03-10T10:14:43.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 -- 192.168.123.107:0/2925030076 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf2c102b90 msgr2=0x7fbf2c103010 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:43.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 --2- 192.168.123.107:0/2925030076 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf2c102b90 0x7fbf2c103010 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7fbf18009a30 tx=0x7fbf1802f240 comp rx=0 tx=0).stop 2026-03-10T10:14:43.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 -- 192.168.123.107:0/2925030076 shutdown_connections 2026-03-10T10:14:43.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 --2- 192.168.123.107:0/2925030076 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fbf2c103550 0x7fbf2c109de0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 --2- 192.168.123.107:0/2925030076 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf2c102b90 0x7fbf2c103010 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 --2- 192.168.123.107:0/2925030076 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbf2c101990 0x7fbf2c101d90 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.066 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 -- 192.168.123.107:0/2925030076 >> 192.168.123.107:0/2925030076 conn(0x7fbf2c0fd120 msgr2=0x7fbf2c0ff560 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:43.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 -- 192.168.123.107:0/2925030076 shutdown_connections 2026-03-10T10:14:43.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 -- 192.168.123.107:0/2925030076 wait complete. 
2026-03-10T10:14:43.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 Processor -- start 2026-03-10T10:14:43.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 -- start start 2026-03-10T10:14:43.067 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fbf2c101990 0x7fbf2c105920 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:43.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbf2c102b90 0x7fbf2c103f70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:43.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf2c103550 0x7fbf2c1044b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:43.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fbf2c10caa0 con 0x7fbf2c102b90 2026-03-10T10:14:43.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fbf2c10c920 con 0x7fbf2c103550 2026-03-10T10:14:43.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf31300640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fbf2c10cc20 con 0x7fbf2c101990 2026-03-10T10:14:43.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf2affd640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fbf2c101990 0x7fbf2c105920 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:43.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf2a7fc640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbf2c102b90 0x7fbf2c103f70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:43.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf2affd640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fbf2c101990 0x7fbf2c105920 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.107:37264/0 (socket says 192.168.123.107:37264) 2026-03-10T10:14:43.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf2affd640 1 -- 192.168.123.107:0/3590727309 learned_addr learned my addr 192.168.123.107:0/3590727309 (peer_addr_for_me v2:192.168.123.107:0/0) 2026-03-10T10:14:43.068 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf2b7fe640 1 --2- 192.168.123.107:0/3590727309 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf2c103550 0x7fbf2c1044b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:43.069 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf2b7fe640 1 -- 192.168.123.107:0/3590727309 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fbf2c101990 msgr2=0x7fbf2c105920 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:43.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf2b7fe640 1 --2- 192.168.123.107:0/3590727309 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fbf2c101990 0x7fbf2c105920 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf2b7fe640 1 -- 192.168.123.107:0/3590727309 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbf2c102b90 msgr2=0x7fbf2c103f70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:43.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf2b7fe640 1 --2- 192.168.123.107:0/3590727309 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbf2c102b90 0x7fbf2c103f70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.061+0000 7fbf2b7fe640 1 -- 192.168.123.107:0/3590727309 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fbf2c104d40 con 0x7fbf2c103550 2026-03-10T10:14:43.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.065+0000 7fbf2b7fe640 1 --2- 192.168.123.107:0/3590727309 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf2c103550 0x7fbf2c1044b0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fbf1c004830 tx=0x7fbf1c00d4a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:43.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.065+0000 7fbf07fff640 1 -- 192.168.123.107:0/3590727309 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fbf1c00db60 con 0x7fbf2c103550 2026-03-10T10:14:43.069 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.065+0000 7fbf31300640 1 -- 192.168.123.107:0/3590727309 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fbf2c1a4530 con 0x7fbf2c103550 2026-03-10T10:14:43.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.065+0000 7fbf31300640 1 -- 192.168.123.107:0/3590727309 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fbf2c1a4a70 con 0x7fbf2c103550 2026-03-10T10:14:43.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.065+0000 7fbf07fff640 1 -- 192.168.123.107:0/3590727309 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fbf1c00dd00 con 0x7fbf2c103550 2026-03-10T10:14:43.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.065+0000 7fbf07fff640 1 -- 192.168.123.107:0/3590727309 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fbf1c013650 con 0x7fbf2c103550 2026-03-10T10:14:43.070 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.065+0000 7fbf31300640 1 -- 192.168.123.107:0/3590727309 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fbef0005180 con 0x7fbf2c103550 2026-03-10T10:14:43.074 
INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.069+0000 7fbf07fff640 1 -- 192.168.123.107:0/3590727309 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 18) ==== 99714+0+0 (secure 0 0 0) 0x7fbf1c025080 con 0x7fbf2c103550 2026-03-10T10:14:43.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.069+0000 7fbf07fff640 1 -- 192.168.123.107:0/3590727309 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7fbf1c099b60 con 0x7fbf2c103550 2026-03-10T10:14:43.074 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.069+0000 7fbf07fff640 1 -- 192.168.123.107:0/3590727309 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fbf1c010070 con 0x7fbf2c103550 2026-03-10T10:14:43.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.289+0000 7fbf07fff640 1 -- 192.168.123.107:0/3590727309 <== mon.1 v2:192.168.123.107:3300/0 7 ==== mgrmap(e 19) ==== 99806+0+0 (secure 0 0 0) 0x7fbf1c012070 con 0x7fbf2c103550 2026-03-10T10:14:43.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.289+0000 7fbf07fff640 1 --2- 192.168.123.107:0/3590727309 >> v2:192.168.123.104:6800/3326026257 conn(0x7fbf0007dfc0 0x7fbf000803b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:43.295 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.289+0000 7fbf07fff640 1 -- 192.168.123.107:0/3590727309 --> v2:192.168.123.104:6800/3326026257 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}) -- 0x7fbf1c024e50 con 0x7fbf0007dfc0 2026-03-10T10:14:43.302 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.297+0000 7fbf2affd640 1 --2- 192.168.123.107:0/3590727309 >> v2:192.168.123.104:6800/3326026257 conn(0x7fbf0007dfc0 0x7fbf000803b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:43.303 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.297+0000 7fbf2affd640 1 --2- 192.168.123.107:0/3590727309 >> v2:192.168.123.104:6800/3326026257 conn(0x7fbf0007dfc0 0x7fbf000803b0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7fbf20005fd0 tx=0x7fbf20007480 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:43.309 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.301+0000 7fbf07fff640 1 -- 192.168.123.107:0/3590727309 <== mgr.24422 v2:192.168.123.104:6800/3326026257 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+33 (secure 0 0 0) 0x7fbf1c024e50 con 0x7fbf0007dfc0 2026-03-10T10:14:43.309 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled alertmanager update... 
2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 -- 192.168.123.107:0/3590727309 >> v2:192.168.123.104:6800/3326026257 conn(0x7fbf0007dfc0 msgr2=0x7fbf000803b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 --2- 192.168.123.107:0/3590727309 >> v2:192.168.123.104:6800/3326026257 conn(0x7fbf0007dfc0 0x7fbf000803b0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7fbf20005fd0 tx=0x7fbf20007480 comp rx=0 tx=0).stop 2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 -- 192.168.123.107:0/3590727309 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf2c103550 msgr2=0x7fbf2c1044b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 --2- 192.168.123.107:0/3590727309 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf2c103550 0x7fbf2c1044b0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7fbf1c004830 tx=0x7fbf1c00d4a0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 -- 192.168.123.107:0/3590727309 shutdown_connections 2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 --2- 192.168.123.107:0/3590727309 >> v2:192.168.123.104:6800/3326026257 conn(0x7fbf0007dfc0 0x7fbf000803b0 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 --2- 192.168.123.107:0/3590727309 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fbf2c103550 0x7fbf2c1044b0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 --2- 192.168.123.107:0/3590727309 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fbf2c102b90 0x7fbf2c103f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 --2- 192.168.123.107:0/3590727309 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fbf2c101990 0x7fbf2c105920 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 -- 192.168.123.107:0/3590727309 >> 192.168.123.107:0/3590727309 conn(0x7fbf2c0fd120 msgr2=0x7fbf2c06b980 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:43.312 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 -- 192.168.123.107:0/3590727309 shutdown_connections 2026-03-10T10:14:43.313 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:43.305+0000 7fbf31300640 1 -- 192.168.123.107:0/3590727309 wait complete. 
2026-03-10T10:14:43.381 DEBUG:teuthology.orchestra.run.vm04:alertmanager.a> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@alertmanager.a.service
2026-03-10T10:14:43.383 INFO:tasks.cephadm:Adding grafana.a on vm07
2026-03-10T10:14:43.383 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph orch apply grafana '1;vm07=a'
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:43 vm04 bash[28289]: audit 2026-03-10T10:14:42.348571+0000 mon.a (mon.0) 737 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:43 vm04 bash[28289]: audit 2026-03-10T10:14:42.352815+0000 mon.a (mon.0) 738 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:43 vm04 bash[28289]: audit 2026-03-10T10:14:42.357889+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:43 vm04 bash[28289]: audit 2026-03-10T10:14:42.392507+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:43 vm04 bash[28289]: cluster 2026-03-10T10:14:43.288938+0000 mon.a (mon.0) 741 : cluster [DBG] mgrmap e19: y(active, since 1.03176s), standbys: x
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:43 vm04 bash[28289]: audit 2026-03-10T10:14:43.309791+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:43 vm04 bash[20742]: audit 2026-03-10T10:14:42.348571+0000 mon.a (mon.0) 737 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:43 vm04 bash[20742]: audit 2026-03-10T10:14:42.352815+0000 mon.a (mon.0) 738 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:43 vm04 bash[20742]: audit 2026-03-10T10:14:42.357889+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:43 vm04 bash[20742]: audit 2026-03-10T10:14:42.392507+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T10:14:43.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:43 vm04 bash[20742]: cluster 2026-03-10T10:14:43.288938+0000 mon.a (mon.0) 741 : cluster [DBG] mgrmap e19: y(active, since 1.03176s), standbys: x
2026-03-10T10:14:43.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:43 vm04 bash[20742]: audit 2026-03-10T10:14:43.309791+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:43.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:43 vm07 bash[23367]: audit 2026-03-10T10:14:42.348571+0000 mon.a (mon.0) 737 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:14:43.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:43 vm07 bash[23367]: audit 2026-03-10T10:14:42.352815+0000 mon.a (mon.0) 738 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:14:43.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:43 vm07 bash[23367]: audit 2026-03-10T10:14:42.357889+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T10:14:43.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:43 vm07 bash[23367]: audit 2026-03-10T10:14:42.392507+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T10:14:43.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:43 vm07 bash[23367]: cluster 2026-03-10T10:14:43.288938+0000 mon.a (mon.0) 741 : cluster [DBG] mgrmap e19: y(active, since 1.03176s), standbys: x
2026-03-10T10:14:43.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:43 vm07 bash[23367]: audit 2026-03-10T10:14:43.309791+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:44 vm04 bash[28289]: cephadm 2026-03-10T10:14:43.306511+0000 mgr.y (mgr.24422) 3 : cephadm [INF] Saving service alertmanager spec with placement vm04=a;count:1
2026-03-10T10:14:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:44 vm04 bash[28289]: cephadm 2026-03-10T10:14:43.359148+0000 mgr.y (mgr.24422) 4 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Bus STARTING
2026-03-10T10:14:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:44 vm04 bash[28289]: cephadm 2026-03-10T10:14:43.460496+0000 mgr.y (mgr.24422) 5 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T10:14:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:44 vm04 bash[28289]: cephadm 2026-03-10T10:14:43.569702+0000 mgr.y (mgr.24422) 6 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T10:14:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:44 vm04 bash[28289]: cephadm 2026-03-10T10:14:43.569750+0000 mgr.y (mgr.24422) 7 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Bus STARTED
2026-03-10T10:14:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:44 vm04 bash[28289]: cephadm 2026-03-10T10:14:43.570118+0000 mgr.y (mgr.24422) 8 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Client ('192.168.123.104', 59742) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T10:14:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:44 vm04 bash[20742]: cephadm 2026-03-10T10:14:43.306511+0000 mgr.y (mgr.24422) 3 : cephadm [INF] Saving service alertmanager spec with placement vm04=a;count:1
2026-03-10T10:14:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:44 vm04 bash[20742]: cephadm 2026-03-10T10:14:43.359148+0000 mgr.y (mgr.24422) 4 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Bus STARTING
2026-03-10T10:14:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:44 vm04 bash[20742]: cephadm 2026-03-10T10:14:43.460496+0000 mgr.y (mgr.24422) 5 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T10:14:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:44 vm04 bash[20742]: cephadm 2026-03-10T10:14:43.569702+0000 mgr.y (mgr.24422) 6 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T10:14:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:44 vm04 bash[20742]: cephadm 2026-03-10T10:14:43.569750+0000 mgr.y (mgr.24422) 7 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Bus STARTED
2026-03-10T10:14:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:44 vm04 bash[20742]: cephadm 2026-03-10T10:14:43.570118+0000 mgr.y (mgr.24422) 8 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Client ('192.168.123.104', 59742) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T10:14:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:44 vm07 bash[23367]: cephadm 2026-03-10T10:14:43.306511+0000 mgr.y (mgr.24422) 3 : cephadm [INF] Saving service alertmanager spec with placement vm04=a;count:1
2026-03-10T10:14:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:44 vm07 bash[23367]: cephadm 2026-03-10T10:14:43.359148+0000 mgr.y (mgr.24422) 4 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Bus STARTING
2026-03-10T10:14:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:44 vm07 bash[23367]: cephadm 2026-03-10T10:14:43.460496+0000 mgr.y (mgr.24422) 5 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Serving on http://192.168.123.104:8765
2026-03-10T10:14:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:44 vm07 bash[23367]: cephadm 2026-03-10T10:14:43.569702+0000 mgr.y (mgr.24422) 6 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Serving on https://192.168.123.104:7150
2026-03-10T10:14:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:44 vm07 bash[23367]: cephadm 2026-03-10T10:14:43.569750+0000 mgr.y (mgr.24422) 7 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Bus STARTED
2026-03-10T10:14:44.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:44 vm07 bash[23367]: cephadm 2026-03-10T10:14:43.570118+0000 mgr.y (mgr.24422) 8 : cephadm [INF] [10/Mar/2026:10:14:43] ENGINE Client ('192.168.123.104', 59742) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T10:14:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:45 vm04 bash[28289]: cluster 2026-03-10T10:14:44.293624+0000 mgr.y (mgr.24422) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:14:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:45 vm04 bash[28289]: cluster 2026-03-10T10:14:44.346011+0000 mon.a (mon.0) 743 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x
2026-03-10T10:14:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:45 vm04 bash[20742]: cluster 2026-03-10T10:14:44.293624+0000 mgr.y (mgr.24422) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:14:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:45 vm04 bash[20742]: cluster 2026-03-10T10:14:44.346011+0000 mon.a (mon.0) 743 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x
2026-03-10T10:14:45.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:45 vm07 bash[23367]: cluster 2026-03-10T10:14:44.293624+0000 mgr.y (mgr.24422) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:14:45.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:45 vm07 bash[23367]: cluster 2026-03-10T10:14:44.346011+0000 mon.a (mon.0) 743 : cluster [DBG] mgrmap e20: y(active, since 2s), standbys: x
2026-03-10T10:14:47.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:47 vm04 bash[28289]: cluster 2026-03-10T10:14:46.293848+0000 mgr.y (mgr.24422) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:14:47.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:47 vm04 bash[28289]: cluster 2026-03-10T10:14:46.365676+0000 mon.a (mon.0) 744 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x
2026-03-10T10:14:47.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:47 vm04 bash[28289]: audit 2026-03-10T10:14:47.272205+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:47.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:47 vm04 bash[28289]: audit 2026-03-10T10:14:47.278526+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:47.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:47 vm04 bash[20742]: cluster 2026-03-10T10:14:46.293848+0000 mgr.y (mgr.24422) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:14:47.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:47 vm04 bash[20742]: cluster 2026-03-10T10:14:46.365676+0000 mon.a (mon.0) 744 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x
2026-03-10T10:14:47.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:47 vm04 bash[20742]: audit 2026-03-10T10:14:47.272205+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:47.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:47 vm04 bash[20742]: audit 2026-03-10T10:14:47.278526+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:47.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:47 vm07 bash[23367]: cluster 2026-03-10T10:14:46.293848+0000 mgr.y (mgr.24422) 10 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:14:47.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:47 vm07 bash[23367]: cluster 2026-03-10T10:14:46.365676+0000 mon.a (mon.0) 744 : cluster [DBG] mgrmap e21: y(active, since 4s), standbys: x
2026-03-10T10:14:47.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:47 vm07 bash[23367]: audit 2026-03-10T10:14:47.272205+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:47.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:47 vm07 bash[23367]: audit 2026-03-10T10:14:47.278526+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:47.937 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config
/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config 2026-03-10T10:14:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.081+0000 7f07fa640640 1 -- 192.168.123.107:0/356567057 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f07f4075a40 msgr2=0x7f07f4075ea0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.081+0000 7f07fa640640 1 --2- 192.168.123.107:0/356567057 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f07f4075a40 0x7f07f4075ea0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f07e0009a30 tx=0x7f07e002f240 comp rx=0 tx=0).stop 2026-03-10T10:14:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.081+0000 7f07fa640640 1 -- 192.168.123.107:0/356567057 shutdown_connections 2026-03-10T10:14:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.081+0000 7f07fa640640 1 --2- 192.168.123.107:0/356567057 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f07f410a9e0 0x7f07f410cea0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.081+0000 7f07fa640640 1 --2- 192.168.123.107:0/356567057 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f07f4075a40 0x7f07f4075ea0 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.081+0000 7f07fa640640 1 --2- 192.168.123.107:0/356567057 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f07f40770a0 0x7f07f4075500 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.081+0000 7f07fa640640 1 -- 192.168.123.107:0/356567057 >> 192.168.123.107:0/356567057 conn(0x7f07f40fe2c0 msgr2=0x7f07f41006e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.081+0000 7f07fa640640 1 -- 192.168.123.107:0/356567057 shutdown_connections 2026-03-10T10:14:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 -- 192.168.123.107:0/356567057 wait complete. 
2026-03-10T10:14:48.089 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 Processor -- start
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 -- start start
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f07f4075a40 0x7f07f419c5b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f3fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f07f4075a40 0x7f07f419c5b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f3fff640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f07f4075a40 0x7f07f419c5b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:38582/0 (socket says 192.168.123.107:38582)
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f07f40770a0 0x7f07f419caf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f07f410a9e0 0x7f07f41a3b70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f07f410feb0 con 0x7f07f410a9e0
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f07f410fd30 con 0x7f07f4075a40
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f07f4110030 con 0x7f07f40770a0
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f3fff640 1 -- 192.168.123.107:0/2551579685 learned_addr learned my addr 192.168.123.107:0/2551579685 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f3fff640 1 -- 192.168.123.107:0/2551579685 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f07f40770a0 msgr2=0x7f07f419caf0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:48.090 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f3fff640 1 --2- 192.168.123.107:0/2551579685 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f07f40770a0 0x7f07f419caf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:48.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f8bb6640 1 --2- 192.168.123.107:0/2551579685 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f07f410a9e0 0x7f07f41a3b70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:48.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f3fff640 1 -- 192.168.123.107:0/2551579685 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f07f410a9e0 msgr2=0x7f07f41a3b70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:48.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f3fff640 1 --2- 192.168.123.107:0/2551579685 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f07f410a9e0 0x7f07f41a3b70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:48.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f3fff640 1 -- 192.168.123.107:0/2551579685 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f07f41a4270 con 0x7f07f4075a40
2026-03-10T10:14:48.091 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f8bb6640 1 --2- 192.168.123.107:0/2551579685 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f07f410a9e0 0x7f07f41a3b70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f3fff640 1 --2- 192.168.123.107:0/2551579685 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f07f4075a40 0x7f07f419c5b0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f07e400ea30 tx=0x7f07e400ef00 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f17fa640 1 -- 192.168.123.107:0/2551579685 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f07e400ce50 con 0x7f07f4075a40
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f17fa640 1 -- 192.168.123.107:0/2551579685 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f07e4004540 con 0x7f07f4075a40
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f17fa640 1 -- 192.168.123.107:0/2551579685 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f07e4010690 con 0x7f07f4075a40
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 -- 192.168.123.107:0/2551579685 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f07f41a4560 con 0x7f07f4075a40
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07fa640640 1 -- 192.168.123.107:0/2551579685 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f07f4102770 con 0x7f07f4075a40
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f17fa640 1 -- 192.168.123.107:0/2551579685 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f07e4020070 con 0x7f07f4075a40
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f17fa640 1 --2- 192.168.123.107:0/2551579685 >> v2:192.168.123.104:6800/3326026257 conn(0x7f07c80777c0 0x7f07c8079c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.085+0000 7f07f17fa640 1 -- 192.168.123.107:0/2551579685 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f07e409aeb0 con 0x7f07f4075a40
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.089+0000 7f07f37fe640 1 --2- 192.168.123.107:0/2551579685 >> v2:192.168.123.104:6800/3326026257 conn(0x7f07c80777c0 0x7f07c8079c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:48.093 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.089+0000 7f07fa640640 1 -- 192.168.123.107:0/2551579685 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f07b8005180 con 0x7f07f4075a40
2026-03-10T10:14:48.096 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.089+0000 7f07f17fa640 1 -- 192.168.123.107:0/2551579685 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f07e4067d10 con 0x7f07f4075a40
2026-03-10T10:14:48.097 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.093+0000 7f07f37fe640 1 --2- 192.168.123.107:0/2551579685 >> v2:192.168.123.104:6800/3326026257 conn(0x7f07c80777c0 0x7f07c8079c80 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f07e0009950 tx=0x7f07e0002aa0 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:14:48.192 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.185+0000 7f07fa640640 1 -- 192.168.123.107:0/2551579685 --> v2:192.168.123.104:6800/3326026257 -- mgr_command(tid 0: {"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}) -- 0x7f07b8002bf0 con 0x7f07c80777c0
2026-03-10T10:14:48.200 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.193+0000 7f07f17fa640 1 -- 192.168.123.107:0/2551579685 <== mgr.24422 v2:192.168.123.104:6800/3326026257 1 ==== mgr_command_reply(tid 0: 0 ) ==== 8+0+28 (secure 0 0 0) 0x7f07b8002bf0 con 0x7f07c80777c0
2026-03-10T10:14:48.200 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled grafana update...
2026-03-10T10:14:48.202 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 -- 192.168.123.107:0/2551579685 >> v2:192.168.123.104:6800/3326026257 conn(0x7f07c80777c0 msgr2=0x7f07c8079c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:48.202 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 --2- 192.168.123.107:0/2551579685 >> v2:192.168.123.104:6800/3326026257 conn(0x7f07c80777c0 0x7f07c8079c80 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f07e0009950 tx=0x7f07e0002aa0 comp rx=0 tx=0).stop
2026-03-10T10:14:48.202 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 -- 192.168.123.107:0/2551579685 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f07f4075a40 msgr2=0x7f07f419c5b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:48.202 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 --2- 192.168.123.107:0/2551579685 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f07f4075a40 0x7f07f419c5b0 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f07e400ea30 tx=0x7f07e400ef00 comp rx=0 tx=0).stop
2026-03-10T10:14:48.202 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 -- 192.168.123.107:0/2551579685 shutdown_connections
2026-03-10T10:14:48.202 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 --2- 192.168.123.107:0/2551579685 >> v2:192.168.123.104:6800/3326026257 conn(0x7f07c80777c0 0x7f07c8079c80 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:48.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 --2- 192.168.123.107:0/2551579685 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f07f410a9e0 0x7f07f41a3b70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:48.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 --2- 192.168.123.107:0/2551579685 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f07f40770a0 0x7f07f419caf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:48.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 --2- 192.168.123.107:0/2551579685 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f07f4075a40 0x7f07f419c5b0 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:48.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 -- 192.168.123.107:0/2551579685 >> 192.168.123.107:0/2551579685 conn(0x7f07f40fe2c0 msgr2=0x7f07f41006b0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:48.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 -- 192.168.123.107:0/2551579685 shutdown_connections
2026-03-10T10:14:48.203 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:48.197+0000 7f07fa640640 1 -- 192.168.123.107:0/2551579685 wait complete.
2026-03-10T10:14:48.211 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:14:48.251 DEBUG:teuthology.orchestra.run.vm07:grafana.a> sudo journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@grafana.a.service
2026-03-10T10:14:48.251 INFO:tasks.cephadm:Setting up client nodes...
2026-03-10T10:14:48.252 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:47.725732+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:47.731390+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:47.833388+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:47.839124+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:47.840247+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.107122+0000 mgr.y (mgr.24422) 11 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.194716+0000 mgr.y (mgr.24422) 12 : audit [DBG] from='client.24448 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.195670+0000 mgr.y (mgr.24422) 13 : cephadm [INF] Saving service grafana spec with placement vm07=a;count:1
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.200418+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.304166+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.312955+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.314353+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.314920+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.315341+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.477662+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.483711+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.489283+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.493050+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 bash[28289]: audit 2026-03-10T10:14:48.496904+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.980 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:48.981 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:14:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:48.981 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:48.981 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:14:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:48.981 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:14:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:48.981 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:14:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:47.725732+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:47.731390+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:47.833388+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:47.839124+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:47.840247+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.107122+0000 mgr.y (mgr.24422) 11 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.194716+0000 mgr.y (mgr.24422) 12 : audit [DBG] from='client.24448 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.195670+0000 mgr.y (mgr.24422) 13 : cephadm [INF] Saving service grafana spec with placement vm07=a;count:1
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.200418+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.304166+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.312955+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.314353+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.314920+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.315341+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.477662+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.483711+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.489283+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.493050+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 bash[20742]: audit 2026-03-10T10:14:48.496904+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:48.981 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:48.982 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:48 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:47.725732+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:47.731390+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:47.833388+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:47.839124+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:47.840247+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:14:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.107122+0000 mgr.y (mgr.24422) 11 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.194716+0000 mgr.y (mgr.24422) 12 : audit [DBG] from='client.24448 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T10:14:49.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: cephadm 2026-03-10T10:14:48.195670+0000 mgr.y (mgr.24422) 13 : cephadm [INF] Saving service grafana spec with placement vm07=a;count:1
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.200418+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.304166+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.312955+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.314353+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.314920+0000 mon.a (mon.0) 756 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.315341+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.477662+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.483711+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.489283+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.493050+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:48 vm07 bash[23367]: audit 2026-03-10T10:14:48.496904+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:49.237 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:49 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.238 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:14:49 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.238 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:49 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.238 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:14:49 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.238 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:14:49 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.238 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:14:49 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.238 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:49 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.238 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:49 vm04 systemd[1]: Started Ceph node-exporter.a for e4c1c9d6-1c68-11f1-a9bd-116050875839.
2026-03-10T10:14:49.238 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:49 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.703 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:49 vm04 bash[55279]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-10T10:14:49.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.765 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.765 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.766 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.766 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.766 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:49.766 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:50.219 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:50.219 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:50.219 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:50.219 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:50.220 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: Started Ceph node-exporter.b for e4c1c9d6-1c68-11f1-a9bd-116050875839.
2026-03-10T10:14:50.220 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:49 vm07 bash[50199]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-10T10:14:50.220 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:50.220 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:50.220 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:50.220 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:14:49 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:50.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cluster 2026-03-10T10:14:48.294125+0000 mgr.y (mgr.24422) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cephadm 2026-03-10T10:14:48.316336+0000 mgr.y (mgr.24422) 15 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cephadm 2026-03-10T10:14:48.316433+0000 mgr.y (mgr.24422) 16 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cephadm 2026-03-10T10:14:48.359715+0000 mgr.y (mgr.24422) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cephadm 2026-03-10T10:14:48.373815+0000 mgr.y (mgr.24422) 18 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cephadm 2026-03-10T10:14:48.402494+0000 mgr.y (mgr.24422) 19 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cephadm 2026-03-10T10:14:48.409845+0000 mgr.y (mgr.24422) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cephadm 2026-03-10T10:14:48.437518+0000 mgr.y (mgr.24422) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cephadm 2026-03-10T10:14:48.442072+0000 mgr.y (mgr.24422) 22 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cephadm 2026-03-10T10:14:48.498575+0000 mgr.y (mgr.24422) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm04
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: audit 2026-03-10T10:14:49.216107+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: audit 2026-03-10T10:14:49.221191+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: audit 2026-03-10T10:14:49.227379+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: cephadm 2026-03-10T10:14:49.229651+0000 mgr.y (mgr.24422) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm07
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: audit 2026-03-10T10:14:49.956837+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: audit 2026-03-10T10:14:49.962317+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: audit 2026-03-10T10:14:49.967366+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: audit 2026-03-10T10:14:49.971598+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:50 vm07 bash[23367]: audit 2026-03-10T10:14:49.977682+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:50.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cluster 2026-03-10T10:14:48.294125+0000 mgr.y (mgr.24422) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:14:50.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.316336+0000 mgr.y (mgr.24422) 15 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf
2026-03-10T10:14:50.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.316433+0000 mgr.y (mgr.24422) 16 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
: cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T10:14:50.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.359715+0000 mgr.y (mgr.24422) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:14:50.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.359715+0000 mgr.y (mgr.24422) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:14:50.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.373815+0000 mgr.y (mgr.24422) 18 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:14:50.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.373815+0000 mgr.y (mgr.24422) 18 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:14:50.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.402494+0000 mgr.y (mgr.24422) 19 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:50.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.402494+0000 mgr.y (mgr.24422) 19 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:50.694 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.409845+0000 mgr.y (mgr.24422) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.409845+0000 mgr.y (mgr.24422) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.437518+0000 mgr.y (mgr.24422) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.437518+0000 mgr.y (mgr.24422) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.442072+0000 mgr.y (mgr.24422) 22 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.442072+0000 mgr.y (mgr.24422) 22 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.498575+0000 mgr.y (mgr.24422) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm04 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:48.498575+0000 mgr.y (mgr.24422) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm04 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 
10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.216107+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.216107+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.221191+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.221191+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.227379+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.227379+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:49.229651+0000 mgr.y (mgr.24422) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm07 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: cephadm 2026-03-10T10:14:49.229651+0000 mgr.y (mgr.24422) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm07 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.956837+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.956837+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.962317+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.962317+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.967366+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.967366+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.971598+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.971598+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 
2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.977682+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:50 vm04 bash[28289]: audit 2026-03-10T10:14:49.977682+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:50 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cluster 2026-03-10T10:14:48.294125+0000 mgr.y (mgr.24422) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cluster 2026-03-10T10:14:48.294125+0000 mgr.y (mgr.24422) 14 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.316336+0000 mgr.y (mgr.24422) 15 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.316336+0000 mgr.y (mgr.24422) 15 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.316433+0000 mgr.y (mgr.24422) 16 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.316433+0000 mgr.y (mgr.24422) 16 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.359715+0000 mgr.y (mgr.24422) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.359715+0000 mgr.y (mgr.24422) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.373815+0000 mgr.y (mgr.24422) 18 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.373815+0000 mgr.y (mgr.24422) 18 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.conf 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.402494+0000 mgr.y (mgr.24422) 19 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:50.695 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.402494+0000 mgr.y (mgr.24422) 19 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.409845+0000 mgr.y (mgr.24422) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.409845+0000 mgr.y (mgr.24422) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.437518+0000 mgr.y (mgr.24422) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.437518+0000 mgr.y (mgr.24422) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.442072+0000 mgr.y (mgr.24422) 22 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.442072+0000 mgr.y (mgr.24422) 22 : cephadm [INF] Updating vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/config/ceph.client.admin.keyring 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.498575+0000 mgr.y (mgr.24422) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm04 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:48.498575+0000 mgr.y (mgr.24422) 23 : cephadm [INF] Deploying daemon node-exporter.a on vm04 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.216107+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.216107+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.221191+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.221191+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.227379+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.695 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.227379+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.696 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:49.229651+0000 mgr.y (mgr.24422) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm07 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: cephadm 2026-03-10T10:14:49.229651+0000 mgr.y (mgr.24422) 24 : cephadm [INF] Deploying daemon node-exporter.b on vm07 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.956837+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.956837+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.962317+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.962317+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.967366+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.967366+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.971598+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.971598+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.977682+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.696 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[20742]: audit 2026-03-10T10:14:49.977682+0000 mon.a (mon.0) 770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:14:50.953 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:50 vm04 bash[55279]: v1.7.0: Pulling from prometheus/node-exporter 2026-03-10T10:14:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:51 vm04 bash[28289]: cephadm 2026-03-10T10:14:49.984493+0000 mgr.y (mgr.24422) 25 : cephadm [INF] Deploying daemon alertmanager.a on vm04 2026-03-10T10:14:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:51 vm04 bash[28289]: cephadm 2026-03-10T10:14:49.984493+0000 mgr.y (mgr.24422) 25 : cephadm [INF] Deploying daemon alertmanager.a on vm04 2026-03-10T10:14:51.453 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 2abcce694348: Pulling fs layer 2026-03-10T10:14:51.453 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 455fd88e5221: Pulling fs layer 
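The systemd warning that opens this excerpt (and recurs above for the alertmanager unit) flags the KillMode=none setting in the cephadm-generated unit template at /etc/systemd/system/ceph-<fsid>@.service. cephadm sets KillMode=none deliberately for containerized daemons, where podman/docker rather than systemd owns the daemon processes, so in this run the warning is expected noise. For a unit one actually controls, systemd's suggested remedy is a drop-in override rather than an edit of the generated file; a minimal sketch of that mechanism (Python used purely for illustration, unit name taken from the log):

```python
# Sketch only: install a systemd drop-in overriding the deprecated
# KillMode=none flagged in the journal above. NOTE: cephadm generates
# these units with KillMode=none on purpose (the container runtime owns
# the processes), so overriding it on a cephadm unit is not recommended;
# this just shows the drop-in mechanism systemd's warning points at.
import pathlib
import subprocess

UNIT = "ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service"  # from the log

def override_killmode(mode: str = "mixed") -> None:
    """Drop <unit>.d/10-killmode.conf with a safer KillMode, then reload."""
    dropin_dir = pathlib.Path("/etc/systemd/system") / f"{UNIT}.d"
    dropin_dir.mkdir(parents=True, exist_ok=True)
    (dropin_dir / "10-killmode.conf").write_text(f"[Service]\nKillMode={mode}\n")
    subprocess.run(["systemctl", "daemon-reload"], check=True)

if __name__ == "__main__":
    override_killmode("mixed")  # or 'control-group', as the warning suggests
```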
2026-03-10T10:14:51.453 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 324153f2810a: Pulling fs layer
2026-03-10T10:14:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[20742]: cephadm 2026-03-10T10:14:49.984493+0000 mgr.y (mgr.24422) 25 : cephadm [INF] Deploying daemon alertmanager.a on vm04
2026-03-10T10:14:51.515 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:51 vm07 bash[50199]: v1.7.0: Pulling from prometheus/node-exporter
2026-03-10T10:14:51.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:51 vm07 bash[23367]: cephadm 2026-03-10T10:14:49.984493+0000 mgr.y (mgr.24422) 25 : cephadm [INF] Deploying daemon alertmanager.a on vm04
2026-03-10T10:14:51.797 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 2abcce694348: Verifying Checksum
2026-03-10T10:14:51.797 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 2abcce694348: Download complete
2026-03-10T10:14:51.797 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 455fd88e5221: Verifying Checksum
2026-03-10T10:14:51.797 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 455fd88e5221: Download complete
2026-03-10T10:14:51.797 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 2abcce694348: Pull complete
2026-03-10T10:14:51.797 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 324153f2810a: Verifying Checksum
2026-03-10T10:14:51.797 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 324153f2810a: Download complete
2026-03-10T10:14:51.797 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 455fd88e5221: Pull complete
2026-03-10T10:14:52.015 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:51 vm07 bash[50199]: 2abcce694348: Pulling fs layer
2026-03-10T10:14:52.015 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:51 vm07 bash[50199]: 455fd88e5221: Pulling fs layer
2026-03-10T10:14:52.015 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:51 vm07 bash[50199]: 324153f2810a: Pulling fs layer
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: 324153f2810a: Pull complete
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.930Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.930Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.931Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.931Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.931Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.931Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=arp
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=bcache
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=bonding
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=btrfs
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=conntrack
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=cpu
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=cpufreq
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=diskstats
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=dmi
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=edac
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=entropy
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=fibrechannel
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=filefd
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=filesystem
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=hwmon
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=infiniband
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=ipvs
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=loadavg
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=mdadm
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=meminfo
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=netclass
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=netdev
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=netstat
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=nfs
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=nfsd
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=nvme
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=os
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=powersupplyclass
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=pressure
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=rapl
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=schedstat
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=selinux
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=sockstat
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=softnet
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=stat
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=tapestats
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=textfile
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=thermal_zone
2026-03-10T10:14:52.204 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=time
2026-03-10T10:14:52.205 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=udp_queues
2026-03-10T10:14:52.205 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.932Z caller=node_exporter.go:117 level=info collector=uname
2026-03-10T10:14:52.205 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.933Z caller=node_exporter.go:117 level=info collector=vmstat
2026-03-10T10:14:52.205 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.933Z caller=node_exporter.go:117 level=info collector=xfs
2026-03-10T10:14:52.205 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.933Z caller=node_exporter.go:117 level=info collector=zfs
2026-03-10T10:14:52.205 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.934Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
2026-03-10T10:14:52.205 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:51 vm04 bash[55279]: ts=2026-03-10T10:14:51.934Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
2026-03-10T10:14:52.456 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: 2abcce694348: Verifying Checksum
2026-03-10T10:14:52.456 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: 2abcce694348: Download complete
2026-03-10T10:14:52.456 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: 2abcce694348: Pull complete
2026-03-10T10:14:52.456 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: 455fd88e5221: Verifying Checksum
2026-03-10T10:14:52.456 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: 455fd88e5221: Download complete
2026-03-10T10:14:52.456 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: 324153f2810a: Verifying Checksum
2026-03-10T10:14:52.456 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: 324153f2810a: Download complete
2026-03-10T10:14:52.456 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: 455fd88e5221: Pull complete
2026-03-10T10:14:52.456 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[23367]: cluster 2026-03-10T10:14:50.294618+0000 mgr.y (mgr.24422) 26 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T10:14:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:52 vm04 bash[28289]: cluster 2026-03-10T10:14:50.294618+0000 mgr.y (mgr.24422) 26 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T10:14:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:52 vm04 bash[20742]: cluster 2026-03-10T10:14:50.294618+0000 mgr.y (mgr.24422) 26 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T10:14:52.765 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: 324153f2810a: Pull complete
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.599Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.599Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.599Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.599Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.600Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.600Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.600Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.600Z caller=node_exporter.go:117 level=info collector=arp
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.600Z caller=node_exporter.go:117 level=info collector=bcache
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.600Z caller=node_exporter.go:117 level=info collector=bonding
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.600Z caller=node_exporter.go:117 level=info collector=btrfs
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=conntrack
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=cpu
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=cpufreq
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=diskstats
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=dmi
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=edac
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=entropy
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=fibrechannel
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=filefd
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=filesystem
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=hwmon
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=infiniband
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=ipvs
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.601Z caller=node_exporter.go:117 level=info collector=loadavg
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=mdadm
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=meminfo
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=netclass
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=netdev
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=netstat
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=nfs
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=nfsd
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=nvme
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=os
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=powersupplyclass
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=pressure
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=rapl
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=schedstat
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=selinux
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.602Z caller=node_exporter.go:117 level=info collector=sockstat
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=softnet
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=stat
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=tapestats
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=textfile
2026-03-10T10:14:52.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=thermal_zone
2026-03-10T10:14:52.767 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=time
2026-03-10T10:14:52.767 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=udp_queues
2026-03-10T10:14:52.767 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=uname
2026-03-10T10:14:52.767 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=vmstat
2026-03-10T10:14:52.767 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=xfs
2026-03-10T10:14:52.767 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.603Z caller=node_exporter.go:117 level=info collector=zfs
2026-03-10T10:14:52.767 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.604Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
2026-03-10T10:14:52.767 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:14:52 vm07 bash[50199]: ts=2026-03-10T10:14:52.604Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
2026-03-10T10:14:53.366 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:14:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:14:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:53 vm04 bash[28289]: cluster 2026-03-10T10:14:52.294920+0000 mgr.y (mgr.24422) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T10:14:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:53 vm04 bash[28289]: audit 2026-03-10T10:14:52.330000+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:53 vm04 bash[20742]: cluster 2026-03-10T10:14:52.294920+0000 mgr.y (mgr.24422) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T10:14:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:53 vm04 bash[20742]: audit 2026-03-10T10:14:52.330000+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:53.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:53 vm07 bash[23367]: cluster 2026-03-10T10:14:52.294920+0000 mgr.y (mgr.24422) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T10:14:53.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:53 vm07 bash[23367]: audit 2026-03-10T10:14:52.330000+0000 mon.a (mon.0) 771 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:53.903 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:14:54.217 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 -- 192.168.123.104:0/2125834656 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24fc097ae0 msgr2=0x7f24fc097f40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:54.217 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 --2- 192.168.123.104:0/2125834656 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24fc097ae0 0x7f24fc097f40 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f24f8009a30 tx=0x7f24f802f240 comp rx=0 tx=0).stop
2026-03-10T10:14:54.217 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 -- 192.168.123.104:0/2125834656 shutdown_connections
2026-03-10T10:14:54.217 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 --2- 192.168.123.104:0/2125834656 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f24fc098480 0x7f24fc0a49b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:54.217 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 --2- 192.168.123.104:0/2125834656 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24fc097ae0 0x7f24fc097f40 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:54.217 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 --2- 192.168.123.104:0/2125834656 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24fc09db70 0x7f24fc09df50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:54.217 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 -- 192.168.123.104:0/2125834656 >> 192.168.123.104:0/2125834656 conn(0x7f24fc0937c0 msgr2=0x7f24fc095be0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:54.217 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 -- 192.168.123.104:0/2125834656 shutdown_connections
2026-03-10T10:14:54.217 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 -- 192.168.123.104:0/2125834656 wait complete.
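At this point both node-exporter daemons are up and listening on port 9100 with TLS disabled, while the mgr's own /metrics endpoint is still answering 503 to Prometheus. A quick way to confirm an exporter is actually serving data is to fetch its Prometheus text-format /metrics endpoint directly; a minimal sketch, assuming the two test nodes vm04 and vm07 are reachable by those names:

```python
# Minimal check that the exporters serve metrics on :9100 (plain HTTP,
# since the log shows "TLS is disabled."). Host reachability is assumed.
import urllib.request

def scrape_node_exporter(host: str, port: int = 9100) -> list[str]:
    """Fetch Prometheus text-format output and drop '#' comment lines."""
    url = f"http://{host}:{port}/metrics"
    with urllib.request.urlopen(url, timeout=5) as resp:
        text = resp.read().decode("utf-8")
    return [ln for ln in text.splitlines() if ln and not ln.startswith("#")]

for host in ("vm04", "vm07"):
    samples = scrape_node_exporter(host)
    # e.g. node_uname_info{...} 1, produced by the 'uname' collector above
    print(f"{host}: {len(samples)} samples")
```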
2026-03-10T10:14:54.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1  Processor -- start
2026-03-10T10:14:54.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 -- start start
2026-03-10T10:14:54.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f24fc097ae0 0x7f24fc135620 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:54.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24fc098480 0x7f24fc135b60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:54.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24fc09db70 0x7f24fc12f6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:54.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f24fc0a73d0 con 0x7f24fc097ae0
2026-03-10T10:14:54.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f24fc0a7250 con 0x7f24fc09db70
2026-03-10T10:14:54.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2503577640  1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f24fc0a7550 con 0x7f24fc098480
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2501d74640  1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24fc098480 0x7f24fc135b60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2501d74640  1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24fc098480 0x7f24fc135b60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:54712/0 (socket says 192.168.123.104:54712)
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2501d74640  1 -- 192.168.123.104:0/1450766224 learned_addr learned my addr 192.168.123.104:0/1450766224 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2501d74640  1 -- 192.168.123.104:0/1450766224 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24fc09db70 msgr2=0x7f24fc12f6f0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.213+0000 7f2502d76640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24fc09db70 0x7f24fc12f6f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f2502575640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f24fc097ae0 0x7f24fc135620 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f2501d74640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24fc09db70 0x7f24fc12f6f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f2501d74640  1 -- 192.168.123.104:0/1450766224 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f24fc097ae0 msgr2=0x7f24fc135620 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f2501d74640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f24fc097ae0 0x7f24fc135620 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f2501d74640  1 -- 192.168.123.104:0/1450766224 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f24fc12ff80 con 0x7f24fc098480
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f2502d76640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24fc09db70 0x7f24fc12f6f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:14:54.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f2502575640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f24fc097ae0 0x7f24fc135620 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:14:54.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f2501d74640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24fc098480 0x7f24fc135b60 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f24f8009a00 tx=0x7f24f80026e0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:14:54.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f24e77fe640  1 -- 192.168.123.104:0/1450766224 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f24f8002a90 con 0x7f24fc098480
2026-03-10T10:14:54.223 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f24e77fe640  1 -- 192.168.123.104:0/1450766224 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f24f80044e0 con 0x7f24fc098480
2026-03-10T10:14:54.223 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f24e77fe640  1 -- 192.168.123.104:0/1450766224 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f24f8004aa0 con 0x7f24fc098480
2026-03-10T10:14:54.223 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f2503577640  1 -- 192.168.123.104:0/1450766224 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f24fc130210 con 0x7f24fc098480
2026-03-10T10:14:54.223 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.217+0000 7f2503577640  1 -- 192.168.123.104:0/1450766224 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f24fc13c3d0 con 0x7f24fc098480
2026-03-10T10:14:54.224 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.221+0000 7f24e77fe640  1 -- 192.168.123.104:0/1450766224 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f24f804b020 con 0x7f24fc098480
2026-03-10T10:14:54.234 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.221+0000 7f2503577640  1 -- 192.168.123.104:0/1450766224 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f24c8005180 con 0x7f24fc098480
2026-03-10T10:14:54.234 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.225+0000 7f24e77fe640  1 --2- 192.168.123.104:0/1450766224 >> v2:192.168.123.104:6800/3326026257 conn(0x7f24dc077790 0x7f24dc079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:54.234 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.225+0000 7f24e77fe640  1 -- 192.168.123.104:0/1450766224 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f24f80bdc80 con 0x7f24fc098480
2026-03-10T10:14:54.234 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.225+0000 7f24e77fe640  1 -- 192.168.123.104:0/1450766224 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f24f808aae0 con 0x7f24fc098480
2026-03-10T10:14:54.234 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.225+0000 7f2502575640  1 --2- 192.168.123.104:0/1450766224 >> v2:192.168.123.104:6800/3326026257 conn(0x7f24dc077790 0x7f24dc079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
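[annotation] The stderr lines above are a fresh ceph CLI session bootstrapping: connect to the mons, mon_getmap, mon_subscribe to config/monmap/mgrmap/osdmap, fetch get_command_descriptions, then dial the active mgr. The messenger chatter shows up because the client runs with messenger debugging enabled. A minimal sketch that would produce similar client-side output on these hosts, assuming the container image is already cached locally ('status' is an arbitrary command choice, not what the harness ran here):

  # Any ceph CLI call with messenger debug raised reproduces this connect/subscribe traffic.
  sudo /home/ubuntu/cephtest/cephadm shell -c /etc/ceph/ceph.conf \
      -k /etc/ceph/ceph.client.admin.keyring \
      --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- \
      ceph --debug-ms 1 status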
2026-03-10T10:14:54.234 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.229+0000 7f2502575640  1 --2- 192.168.123.104:0/1450766224 >> v2:192.168.123.104:6800/3326026257 conn(0x7f24dc077790 0x7f24dc079c50 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f24f0004480 tx=0x7f24f0009290 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:14:54.387 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.385+0000 7f2503577640  1 -- 192.168.123.104:0/1450766224 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7f24c8005470 con 0x7f24fc098480
2026-03-10T10:14:54.394 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.389+0000 7f24e77fe640  1 -- 192.168.123.104:0/1450766224 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v16) ==== 170+0+59 (secure 0 0 0) 0x7f24f808f990 con 0x7f24fc098480
2026-03-10T10:14:54.394 INFO:teuthology.orchestra.run.vm04.stdout:[client.0]
2026-03-10T10:14:54.394 INFO:teuthology.orchestra.run.vm04.stdout: key = AQAe769p3lYtFxAACdnqnnDmLeuvlsqGT1oxnA==
2026-03-10T10:14:54.401 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.397+0000 7f2503577640  1 -- 192.168.123.104:0/1450766224 >> v2:192.168.123.104:6800/3326026257 conn(0x7f24dc077790 msgr2=0x7f24dc079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:54.401 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.397+0000 7f2503577640  1 --2- 192.168.123.104:0/1450766224 >> v2:192.168.123.104:6800/3326026257 conn(0x7f24dc077790 0x7f24dc079c50 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f24f0004480 tx=0x7f24f0009290 comp rx=0 tx=0).stop
2026-03-10T10:14:54.401 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.397+0000 7f2503577640  1 -- 192.168.123.104:0/1450766224 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24fc098480 msgr2=0x7f24fc135b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:54.401 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.397+0000 7f2503577640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24fc098480 0x7f24fc135b60 secure :-1 s=READY pgs=59 cs=0 l=1 rev1=1 crypto rx=0x7f24f8009a00 tx=0x7f24f80026e0 comp rx=0 tx=0).stop
2026-03-10T10:14:54.401 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.397+0000 7f2503577640  1 -- 192.168.123.104:0/1450766224 shutdown_connections
2026-03-10T10:14:54.401 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.397+0000 7f2503577640  1 --2- 192.168.123.104:0/1450766224 >> v2:192.168.123.104:6800/3326026257 conn(0x7f24dc077790 0x7f24dc079c50 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:54.401 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.397+0000 7f2503577640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f24fc09db70 0x7f24fc12f6f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:54.401 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.397+0000 7f2503577640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f24fc098480 0x7f24fc135b60 unknown :-1 s=CLOSED pgs=59 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:54.401 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.397+0000 7f2503577640  1 --2- 192.168.123.104:0/1450766224 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f24fc097ae0 0x7f24fc135620 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:54.401 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.397+0000 7f2503577640  1 -- 192.168.123.104:0/1450766224 >> 192.168.123.104:0/1450766224 conn(0x7f24fc0937c0 msgr2=0x7f24fc09b680 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:54.403 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.401+0000 7f2503577640  1 -- 192.168.123.104:0/1450766224 shutdown_connections
2026-03-10T10:14:54.403 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:14:54.401+0000 7f2503577640  1 -- 192.168.123.104:0/1450766224 wait complete.
2026-03-10T10:14:54.581 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-10T10:14:54.581 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.0.keyring
2026-03-10T10:14:54.581 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-03-10T10:14:54.606 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-10T10:14:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.703 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.703 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
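[annotation] Just above, the task mints client keys with `ceph auth get-or-create` and installs them under /etc/ceph with world-readable permissions (the harness pipes the key into `sudo dd` on vm04, and creates client.1 through a cephadm shell on vm07). A minimal sketch of the same flow done by hand, using the names from the log but `tee` in place of the harness's `dd`:

  # Mint a client key and install it where the ceph CLI will find it.
  sudo /home/ubuntu/cephtest/cephadm shell -c /etc/ceph/ceph.conf \
      -k /etc/ceph/ceph.client.admin.keyring \
      --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- \
      ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' |
      sudo tee /etc/ceph/ceph.client.1.keyring >/dev/null
  sudo chmod 0644 /etc/ceph/ceph.client.1.keyring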
2026-03-10T10:14:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.703 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
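[annotation] Every cephadm-managed unit on these hosts trips the same systemd deprecation warning (it continues below for alertmanager.a): the ceph-<fsid>@.service template ships KillMode=none. The warning itself names the fix; a minimal illustrative sketch of overriding it with a drop-in follows. This is only to make the mechanics concrete: cephadm generates these unit files and may rewrite them on redeploy, so hand-editing is not a durable fix.

  # Illustrative only: override KillMode for the cephadm unit template on this host.
  sudo mkdir -p /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d
  printf '[Service]\nKillMode=mixed\n' |
      sudo tee /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d/killmode.conf >/dev/null
  sudo systemctl daemon-reload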
2026-03-10T10:14:54.704 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.704 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:54.830 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 systemd[1]: Started Ceph alertmanager.a for e4c1c9d6-1c68-11f1-a9bd-116050875839.
2026-03-10T10:14:55.028 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 bash[55742]: ts=2026-03-10T10:14:54.878Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
2026-03-10T10:14:55.028 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 bash[55742]: ts=2026-03-10T10:14:54.878Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
2026-03-10T10:14:55.028 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 bash[55742]: ts=2026-03-10T10:14:54.879Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.104 port=9094
2026-03-10T10:14:55.028 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 bash[55742]: ts=2026-03-10T10:14:54.880Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
2026-03-10T10:14:55.028 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 bash[55742]: ts=2026-03-10T10:14:54.897Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T10:14:55.028 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 bash[55742]: ts=2026-03-10T10:14:54.898Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T10:14:55.028 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 bash[55742]: ts=2026-03-10T10:14:54.899Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093
2026-03-10T10:14:55.028 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:54 vm04 bash[55742]: ts=2026-03-10T10:14:54.899Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093
2026-03-10T10:14:55.028 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:54 vm04 bash[20997]: [10/Mar/2026:10:14:54] ENGINE Bus STOPPING
2026-03-10T10:14:55.373 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:55 vm04 bash[20997]: [10/Mar/2026:10:14:55] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T10:14:55.374 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:55 vm04 bash[20997]: [10/Mar/2026:10:14:55] ENGINE Bus STOPPED
2026-03-10T10:14:55.374 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:55 vm04 bash[20997]: [10/Mar/2026:10:14:55] ENGINE Bus STARTING
2026-03-10T10:14:55.374 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:55 vm04 bash[20997]: [10/Mar/2026:10:14:55] ENGINE Serving on http://:::9283
2026-03-10T10:14:55.374 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:14:55 vm04 bash[20997]: [10/Mar/2026:10:14:55] ENGINE Bus STARTED
2026-03-10T10:14:55.515 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:14:55 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T10:14:55.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: cluster 2026-03-10T10:14:54.295332+0000 mgr.y (mgr.24422) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T10:14:55.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: cluster 2026-03-10T10:14:54.295332+0000 mgr.y (mgr.24422) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T10:14:55.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.388466+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.104:0/1450766224' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.388466+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.104:0/1450766224' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.388756+0000 mon.a (mon.0) 772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.388756+0000 mon.a (mon.0) 772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.391232+0000 mon.a (mon.0) 773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T10:14:55.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.391232+0000 mon.a (mon.0) 773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T10:14:55.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.766686+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.766686+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.778243+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.778243+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.785252+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.785252+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.789413+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.789413+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: cephadm 2026-03-10T10:14:54.796762+0000 mgr.y (mgr.24422) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: cephadm 2026-03-10T10:14:54.796762+0000 mgr.y (mgr.24422) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.845052+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.845052+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.911300+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.911300+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.913765+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.913765+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.914081+0000 mgr.y (mgr.24422) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.914081+0000 mgr.y (mgr.24422) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.948292+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: audit 2026-03-10T10:14:54.948292+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: cephadm 2026-03-10T10:14:54.960316+0000 mgr.y (mgr.24422) 31 : cephadm [INF] Deploying daemon grafana.a on vm07
2026-03-10T10:14:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:55 vm07 bash[23367]: cephadm 2026-03-10T10:14:54.960316+0000 mgr.y (mgr.24422) 31 : cephadm [INF] Deploying daemon grafana.a on vm07
2026-03-10T10:14:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: cluster 2026-03-10T10:14:54.295332+0000 mgr.y (mgr.24422) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T10:14:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: cluster 2026-03-10T10:14:54.295332+0000 mgr.y (mgr.24422) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T10:14:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.388466+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.104:0/1450766224' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.388466+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.104:0/1450766224' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.388756+0000 mon.a (mon.0) 772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.388756+0000 mon.a (mon.0) 772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.391232+0000 mon.a (mon.0) 773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.391232+0000 mon.a (mon.0) 773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.766686+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.766686+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.778243+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.778243+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.785252+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.785252+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.789413+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.789413+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: cephadm 2026-03-10T10:14:54.796762+0000 mgr.y (mgr.24422) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: cephadm 2026-03-10T10:14:54.796762+0000 mgr.y (mgr.24422) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.845052+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.845052+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.911300+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.911300+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.913765+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.913765+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.914081+0000 mgr.y (mgr.24422) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.914081+0000 mgr.y (mgr.24422) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.948292+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: audit 2026-03-10T10:14:54.948292+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: cephadm 2026-03-10T10:14:54.960316+0000 mgr.y (mgr.24422) 31 : cephadm [INF] Deploying daemon grafana.a on vm07
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:55 vm04 bash[20742]: cephadm 2026-03-10T10:14:54.960316+0000 mgr.y (mgr.24422) 31 : cephadm [INF] Deploying daemon grafana.a on vm07
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: cluster 2026-03-10T10:14:54.295332+0000 mgr.y (mgr.24422) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: cluster 2026-03-10T10:14:54.295332+0000 mgr.y (mgr.24422) 28 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.388466+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.104:0/1450766224' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.388466+0000 mon.c (mon.2) 24 : audit [INF] from='client.? 192.168.123.104:0/1450766224' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.388756+0000 mon.a (mon.0) 772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.388756+0000 mon.a (mon.0) 772 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.391232+0000 mon.a (mon.0) 773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.391232+0000 mon.a (mon.0) 773 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.766686+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.766686+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.778243+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.778243+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.785252+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.785252+0000 mon.a (mon.0) 776 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.789413+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.789413+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: cephadm 2026-03-10T10:14:54.796762+0000 mgr.y (mgr.24422) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: cephadm 2026-03-10T10:14:54.796762+0000 mgr.y (mgr.24422) 29 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
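[annotation] Buried in the duplicated journalctl output above is the actual cephadm sequence: mgr.y regenerates self-signed grafana TLS certificates, tells the dashboard not to verify them, and deploys grafana.a on vm07. The equivalent manual toggle, exactly as it appears in the audit lines (a sketch; run from any host with an admin keyring, e.g. inside a cephadm shell):

  # Mirror the dashboard setting cephadm applied above for self-signed grafana certs.
  ceph dashboard set-grafana-api-ssl-verify false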
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.845052+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.845052+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.911300+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.911300+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.913765+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.913765+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.914081+0000 mgr.y (mgr.24422) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.914081+0000 mgr.y (mgr.24422) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.948292+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: audit 2026-03-10T10:14:54.948292+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:55.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: cephadm 2026-03-10T10:14:54.960316+0000 mgr.y (mgr.24422) 31 : cephadm [INF] Deploying daemon grafana.a on vm07
2026-03-10T10:14:55.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:55 vm04 bash[28289]: cephadm 2026-03-10T10:14:54.960316+0000 mgr.y (mgr.24422) 31 : cephadm [INF] Deploying daemon grafana.a on vm07
2026-03-10T10:14:57.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:14:56 vm04 bash[55742]: ts=2026-03-10T10:14:56.880Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000240142s
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:57 vm04 bash[20742]: cluster 2026-03-10T10:14:56.295594+0000 mgr.y (mgr.24422) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:57 vm04 bash[20742]: cluster 2026-03-10T10:14:56.295594+0000 mgr.y (mgr.24422) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:57 vm04 bash[20742]: audit 2026-03-10T10:14:57.340628+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:57 vm04 bash[20742]: audit 2026-03-10T10:14:57.340628+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:57 vm04 bash[20742]: audit 2026-03-10T10:14:57.363313+0000 mon.a (mon.0) 783 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:57 vm04 bash[20742]: audit 2026-03-10T10:14:57.363313+0000 mon.a (mon.0) 783 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:57 vm04 bash[28289]: cluster 2026-03-10T10:14:56.295594+0000 mgr.y (mgr.24422) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:57 vm04 bash[28289]: cluster 2026-03-10T10:14:56.295594+0000 mgr.y (mgr.24422) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:57 vm04 bash[28289]: audit 2026-03-10T10:14:57.340628+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:57 vm04 bash[28289]: audit 2026-03-10T10:14:57.340628+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:57 vm04 bash[28289]: audit 2026-03-10T10:14:57.363313+0000 mon.a (mon.0) 783 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:14:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:57 vm04 bash[28289]: audit 2026-03-10T10:14:57.363313+0000 mon.a (mon.0) 783 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:14:57.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:57 vm07 bash[23367]: cluster 2026-03-10T10:14:56.295594+0000 mgr.y (mgr.24422) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T10:14:57.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:57 vm07 bash[23367]: cluster 2026-03-10T10:14:56.295594+0000 mgr.y (mgr.24422) 32 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T10:14:57.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:57 vm07 bash[23367]: audit 2026-03-10T10:14:57.340628+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:57.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:57 vm07 bash[23367]: audit 2026-03-10T10:14:57.340628+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:14:57.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:57 vm07 bash[23367]: audit 2026-03-10T10:14:57.363313+0000 mon.a (mon.0) 783 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:14:57.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:57 vm07 bash[23367]: audit 2026-03-10T10:14:57.363313+0000 mon.a (mon.0) 783 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:14:58.386 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:14:58 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:14:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:58 vm04 bash[20742]: audit 2026-03-10T10:14:58.115237+0000 mgr.y (mgr.24422) 33 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:58 vm04 bash[20742]: audit 2026-03-10T10:14:58.115237+0000 mgr.y (mgr.24422) 33 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:58 vm04 bash[28289]: audit 2026-03-10T10:14:58.115237+0000 mgr.y (mgr.24422) 33 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:58 vm04 bash[28289]: audit 2026-03-10T10:14:58.115237+0000 mgr.y (mgr.24422) 33 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:58.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:58 vm07 bash[23367]: audit 2026-03-10T10:14:58.115237+0000 mgr.y (mgr.24422) 33 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:58.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:58 vm07 bash[23367]: audit 2026-03-10T10:14:58.115237+0000 mgr.y (mgr.24422) 33 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:14:59.230 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.b/config
2026-03-10T10:14:59.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:59 vm07 bash[23367]: cluster 2026-03-10T10:14:58.296110+0000 mgr.y (mgr.24422) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T10:14:59.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:14:59 vm07 bash[23367]: cluster 2026-03-10T10:14:58.296110+0000 mgr.y (mgr.24422) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T10:14:59.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 -- 192.168.123.107:0/1462687244 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f548809db70 msgr2=0x7f548809df50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:14:59.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f54877fe640  1 -- 192.168.123.107:0/1462687244 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f5478038470 con 0x7f548809db70
2026-03-10T10:14:59.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 --2- 192.168.123.107:0/1462687244 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f548809db70 0x7f548809df50 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f5478009a80 tx=0x7f547802f270 comp rx=0 tx=0).stop
2026-03-10T10:14:59.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 -- 192.168.123.107:0/1462687244 shutdown_connections
2026-03-10T10:14:59.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 --2- 192.168.123.107:0/1462687244 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5488098480 0x7f54880a49b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:59.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 --2- 192.168.123.107:0/1462687244 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5488097ae0 0x7f5488097f40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:59.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 --2- 192.168.123.107:0/1462687244 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f548809db70 0x7f548809df50 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:14:59.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 -- 192.168.123.107:0/1462687244 >> 192.168.123.107:0/1462687244 conn(0x7f54880937c0 msgr2=0x7f5488095be0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:14:59.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 -- 192.168.123.107:0/1462687244 shutdown_connections
2026-03-10T10:14:59.522 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 -- 192.168.123.107:0/1462687244 wait complete.
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1  Processor -- start
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 -- start start
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5488097ae0 0x7f54881355b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5488098480 0x7f5488135af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f548809db70 0x7f548812f680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f54880a7410 con 0x7f5488098480
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f54880a7290 con 0x7f548809db70
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495b12640  1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f54880a7590 con 0x7f5488097ae0
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495311640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f548809db70 0x7f548812f680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495311640  1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f548809db70 0x7f548812f680 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.107:3300/0 says I am v2:192.168.123.107:47878/0 (socket says 192.168.123.107:47878)
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495311640  1 -- 192.168.123.107:0/1636327860 learned_addr learned my addr 192.168.123.107:0/1636327860 (peer_addr_for_me v2:192.168.123.107:0/0)
2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000
7f5487fff640 1 --2- 192.168.123.107:0/1636327860 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5488098480 0x7f5488135af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:59.524 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495311640 1 -- 192.168.123.107:0/1636327860 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5488097ae0 msgr2=0x7f54881355b0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:14:59.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5494b10640 1 --2- 192.168.123.107:0/1636327860 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5488097ae0 0x7f54881355b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:59.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495311640 1 --2- 192.168.123.107:0/1636327860 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5488097ae0 0x7f54881355b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:59.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495311640 1 -- 192.168.123.107:0/1636327860 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5488098480 msgr2=0x7f5488135af0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:59.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495311640 1 --2- 192.168.123.107:0/1636327860 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5488098480 0x7f5488135af0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:59.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495311640 1 -- 192.168.123.107:0/1636327860 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f548812ff10 con 0x7f548809db70 2026-03-10T10:14:59.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5487fff640 1 --2- 192.168.123.107:0/1636327860 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5488098480 0x7f5488135af0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T10:14:59.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5494b10640 1 --2- 192.168.123.107:0/1636327860 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5488097ae0 0x7f54881355b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
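The teuthology.orchestra.run.vm07.stderr messenger traces above (the "--2-" lines walking through BANNER_CONNECTING, HELLO_CONNECTING, AUTH_CONNECTING, READY, then mark_down/stop) come from short-lived ceph CLI clients running with messenger debugging raised to 1. A minimal sketch of producing the same kind of trace by hand against this cluster, assuming a ceph CLI with an admin keyring on the node; the choice of "mon dump" as the command is illustrative only:

    # Raise messenger debug for this single client invocation; the msgr2
    # handshake states print on stderr while the command output goes to stdout.
    sudo ceph --debug-ms 1 mon dump 2>&1 | head -n 40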
2026-03-10T10:14:59.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5495311640 1 --2- 192.168.123.107:0/1636327860 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f548809db70 0x7f548812f680 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f548c004820 tx=0x7f548c00d4a0 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:59.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5485ffb640 1 -- 192.168.123.107:0/1636327860 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f548c0090d0 con 0x7f548809db70 2026-03-10T10:14:59.525 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.517+0000 7f5485ffb640 1 -- 192.168.123.107:0/1636327860 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f548c009270 con 0x7f548809db70 2026-03-10T10:14:59.526 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.521+0000 7f5485ffb640 1 -- 192.168.123.107:0/1636327860 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f548c013760 con 0x7f548809db70 2026-03-10T10:14:59.526 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.521+0000 7f5495b12640 1 -- 192.168.123.107:0/1636327860 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f54881301a0 con 0x7f548809db70 2026-03-10T10:14:59.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.521+0000 7f5485ffb640 1 -- 192.168.123.107:0/1636327860 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f548c012070 con 0x7f548809db70 2026-03-10T10:14:59.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.521+0000 7f5485ffb640 1 --2- 192.168.123.107:0/1636327860 >> v2:192.168.123.104:6800/3326026257 conn(0x7f546c077680 0x7f546c079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:14:59.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.521+0000 7f5495b12640 1 -- 192.168.123.107:0/1636327860 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f548813c360 con 0x7f548809db70 2026-03-10T10:14:59.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.521+0000 7f5494b10640 1 --2- 192.168.123.107:0/1636327860 >> v2:192.168.123.104:6800/3326026257 conn(0x7f546c077680 0x7f546c079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:14:59.528 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.521+0000 7f5485ffb640 1 -- 192.168.123.107:0/1636327860 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f548c05e0d0 con 0x7f548809db70 2026-03-10T10:14:59.529 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.521+0000 7f5494b10640 1 --2- 192.168.123.107:0/1636327860 >> v2:192.168.123.104:6800/3326026257 conn(0x7f546c077680 0x7f546c079b40 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f54780097c0 tx=0x7f5478005e50 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:14:59.530 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.521+0000 7f5495b12640 1 -- 192.168.123.107:0/1636327860 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) -- 0x7f54880991e0 con 0x7f548809db70 2026-03-10T10:14:59.531 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.525+0000 7f5485ffb640 1 -- 192.168.123.107:0/1636327860 <== mon.1 v2:192.168.123.107:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f548c010070 con 0x7f548809db70 2026-03-10T10:14:59.675 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.669+0000 7f5495b12640 1 -- 192.168.123.107:0/1636327860 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_command({"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]} v 0) -- 0x7f5488097f40 con 0x7f548809db70 2026-03-10T10:14:59.681 INFO:teuthology.orchestra.run.vm07.stdout:[client.1] 2026-03-10T10:14:59.682 INFO:teuthology.orchestra.run.vm07.stdout: key = AQAj769pOONfKBAACdBnfja4T6ePnbG7tZ6Fow== 2026-03-10T10:14:59.682 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.673+0000 7f5485ffb640 1 -- 192.168.123.107:0/1636327860 <== mon.1 v2:192.168.123.107:3300/0 7 ==== mon_command_ack([{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]=0 v17) ==== 170+0+59 (secure 0 0 0) 0x7f548c0660e0 con 0x7f548809db70 2026-03-10T10:14:59.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 -- 192.168.123.107:0/1636327860 >> v2:192.168.123.104:6800/3326026257 conn(0x7f546c077680 msgr2=0x7f546c079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:59.684 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 --2- 192.168.123.107:0/1636327860 >> v2:192.168.123.104:6800/3326026257 conn(0x7f546c077680 0x7f546c079b40 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f54780097c0 tx=0x7f5478005e50 comp rx=0 tx=0).stop 2026-03-10T10:14:59.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 -- 192.168.123.107:0/1636327860 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f548809db70 msgr2=0x7f548812f680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:14:59.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 --2- 192.168.123.107:0/1636327860 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f548809db70 0x7f548812f680 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7f548c004820 tx=0x7f548c00d4a0 comp rx=0 tx=0).stop 2026-03-10T10:14:59.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 -- 192.168.123.107:0/1636327860 shutdown_connections 2026-03-10T10:14:59.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 --2- 192.168.123.107:0/1636327860 >> v2:192.168.123.104:6800/3326026257 conn(0x7f546c077680 0x7f546c079b40 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:59.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 --2- 192.168.123.107:0/1636327860 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f548809db70 0x7f548812f680 unknown :-1 s=CLOSED pgs=67 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:59.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 --2- 192.168.123.107:0/1636327860 >> 
[v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f5488098480 0x7f5488135af0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:59.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 --2- 192.168.123.107:0/1636327860 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f5488097ae0 0x7f54881355b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:14:59.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 -- 192.168.123.107:0/1636327860 >> 192.168.123.107:0/1636327860 conn(0x7f54880937c0 msgr2=0x7f5488093fa0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:14:59.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 -- 192.168.123.107:0/1636327860 shutdown_connections 2026-03-10T10:14:59.685 INFO:teuthology.orchestra.run.vm07.stderr:2026-03-10T10:14:59.677+0000 7f5495b12640 1 -- 192.168.123.107:0/1636327860 wait complete. 2026-03-10T10:14:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:59 vm04 bash[20742]: cluster 2026-03-10T10:14:58.296110+0000 mgr.y (mgr.24422) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T10:14:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:14:59 vm04 bash[20742]: cluster 2026-03-10T10:14:58.296110+0000 mgr.y (mgr.24422) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T10:14:59.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:59 vm04 bash[28289]: cluster 2026-03-10T10:14:58.296110+0000 mgr.y (mgr.24422) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T10:14:59.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:14:59 vm04 bash[28289]: cluster 2026-03-10T10:14:58.296110+0000 mgr.y (mgr.24422) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T10:14:59.782 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T10:14:59.782 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T10:14:59.782 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-10T10:14:59.802 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-10T10:14:59.802 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T10:14:59.802 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph mgr dump --format=json 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:00 vm04 bash[20742]: audit 2026-03-10T10:14:59.675254+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.107:0/1636327860' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:00 vm04 bash[20742]: audit 2026-03-10T10:14:59.675254+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 
192.168.123.107:0/1636327860' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:00 vm04 bash[20742]: audit 2026-03-10T10:14:59.677235+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:00 vm04 bash[20742]: audit 2026-03-10T10:14:59.677235+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:00 vm04 bash[20742]: audit 2026-03-10T10:14:59.680395+0000 mon.a (mon.0) 785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:00 vm04 bash[20742]: audit 2026-03-10T10:14:59.680395+0000 mon.a (mon.0) 785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:00 vm04 bash[28289]: audit 2026-03-10T10:14:59.675254+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.107:0/1636327860' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:00 vm04 bash[28289]: audit 2026-03-10T10:14:59.675254+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.107:0/1636327860' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:00 vm04 bash[28289]: audit 2026-03-10T10:14:59.677235+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:00 vm04 bash[28289]: audit 2026-03-10T10:14:59.677235+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:00 vm04 bash[28289]: audit 2026-03-10T10:14:59.680395+0000 mon.a (mon.0) 785 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T10:15:00.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:00 vm04 bash[28289]: audit 2026-03-10T10:14:59.680395+0000 mon.a (mon.0) 785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T10:15:01.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:00 vm07 bash[23367]: audit 2026-03-10T10:14:59.675254+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.107:0/1636327860' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:01.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:00 vm07 bash[23367]: audit 2026-03-10T10:14:59.675254+0000 mon.b (mon.1) 30 : audit [INF] from='client.? 192.168.123.107:0/1636327860' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:01.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:00 vm07 bash[23367]: audit 2026-03-10T10:14:59.677235+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:01.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:00 vm07 bash[23367]: audit 2026-03-10T10:14:59.677235+0000 mon.a (mon.0) 784 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T10:15:01.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:00 vm07 bash[23367]: audit 2026-03-10T10:14:59.680395+0000 mon.a (mon.0) 785 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T10:15:01.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:00 vm07 bash[23367]: audit 2026-03-10T10:14:59.680395+0000 mon.a (mon.0) 785 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T10:15:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:01 vm04 bash[20742]: cluster 2026-03-10T10:15:00.296757+0000 mgr.y (mgr.24422) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T10:15:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:01 vm04 bash[20742]: cluster 2026-03-10T10:15:00.296757+0000 mgr.y (mgr.24422) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T10:15:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:01 vm04 bash[28289]: cluster 2026-03-10T10:15:00.296757+0000 mgr.y (mgr.24422) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T10:15:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:01 vm04 bash[28289]: cluster 2026-03-10T10:15:00.296757+0000 mgr.y (mgr.24422) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T10:15:02.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:01 vm07 bash[23367]: cluster 2026-03-10T10:15:00.296757+0000 mgr.y (mgr.24422) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T10:15:02.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:01 vm07 bash[23367]: cluster 2026-03-10T10:15:00.296757+0000 mgr.y (mgr.24422) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T10:15:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:15:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:15:03.972 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.972 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.972 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T10:15:03.972 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.972 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.972 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.972 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.972 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.972 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.972 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.973 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.973 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:03.973 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:04.265 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:04.266 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:04.266 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:04.266 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:04.266 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T10:15:04.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:04.266 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:04.266 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:15:04.266 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 systemd[1]: Started Ceph grafana.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:03 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
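The repeated systemd complaints above all come from the cephadm-generated unit ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service, which sets KillMode=none; the run leaves the unit as generated and the warning is harmless here. On a host one administers directly, a drop-in along the following lines would address it. A minimal sketch, assuming stock systemd tooling; the drop-in filename killmode.conf is illustrative:

    # Create a drop-in that overrides KillMode for every instance of the unit.
    sudo mkdir -p /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee \
        /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service.d/killmode.conf
    # Reload unit files so the override takes effect on the next (re)start.
    sudo systemctl daemon-reload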
2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: cluster 2026-03-10T10:15:02.297044+0000 mgr.y (mgr.24422) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: cluster 2026-03-10T10:15:02.297044+0000 mgr.y (mgr.24422) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: audit 2026-03-10T10:15:04.080864+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: audit 2026-03-10T10:15:04.080864+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: audit 2026-03-10T10:15:04.085351+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: audit 2026-03-10T10:15:04.085351+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: audit 2026-03-10T10:15:04.089126+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: audit 2026-03-10T10:15:04.089126+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: audit 2026-03-10T10:15:04.093318+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: audit 2026-03-10T10:15:04.093318+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: audit 2026-03-10T10:15:04.104816+0000 mon.a (mon.0) 790 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:04 vm07 bash[23367]: audit 2026-03-10T10:15:04.104816+0000 mon.a (mon.0) 790 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:04.432 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:04.511 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: cluster 2026-03-10T10:15:02.297044+0000 mgr.y (mgr.24422) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:04.511 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: cluster 2026-03-10T10:15:02.297044+0000 mgr.y (mgr.24422) 36 : 
cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:04.511 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: audit 2026-03-10T10:15:04.080864+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.511 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: audit 2026-03-10T10:15:04.080864+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.511 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: audit 2026-03-10T10:15:04.085351+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.511 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: audit 2026-03-10T10:15:04.085351+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.511 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: audit 2026-03-10T10:15:04.089126+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.511 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: audit 2026-03-10T10:15:04.089126+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: audit 2026-03-10T10:15:04.093318+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: audit 2026-03-10T10:15:04.093318+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: audit 2026-03-10T10:15:04.104816+0000 mon.a (mon.0) 790 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:04 vm04 bash[28289]: audit 2026-03-10T10:15:04.104816+0000 mon.a (mon.0) 790 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: cluster 2026-03-10T10:15:02.297044+0000 mgr.y (mgr.24422) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: cluster 2026-03-10T10:15:02.297044+0000 mgr.y (mgr.24422) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: audit 2026-03-10T10:15:04.080864+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: audit 2026-03-10T10:15:04.080864+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 
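The "waiting for mgr available" step seen earlier is driven by polling "ceph mgr dump --format=json" through cephadm shell, as in the command the run issues on vm04 above. A minimal sketch of performing the same check by hand, reusing the image and fsid from this run; reading the "available" field is an assumption about what the poll inspects:

    # Dump the mgr map as JSON and pull out the availability flag.
    sudo /home/ubuntu/cephtest/cephadm \
        --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- \
        ceph mgr dump --format=json \
      | python3 -c 'import json, sys; print(json.load(sys.stdin)["available"])'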
2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: audit 2026-03-10T10:15:04.085351+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: audit 2026-03-10T10:15:04.085351+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: audit 2026-03-10T10:15:04.089126+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: audit 2026-03-10T10:15:04.089126+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: audit 2026-03-10T10:15:04.093318+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: audit 2026-03-10T10:15:04.093318+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: audit 2026-03-10T10:15:04.104816+0000 mon.a (mon.0) 790 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:04.512 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[20742]: audit 2026-03-10T10:15:04.104816+0000 mon.a (mon.0) 790 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.309668871Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-10T10:15:04Z 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.310409072Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.310464115Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.310839996Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.311009341Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.311162386Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings 
t=2026-03-10T10:15:04.31131436Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.311486221Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.311647151Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.311798994Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.311954575Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.312114092Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.312275222Z level=info msg=Target target=[all] 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.312429822Z level=info msg="Path Home" path=/usr/share/grafana 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.312594579Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.31274527Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.31289588Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.313049318Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=settings t=2026-03-10T10:15:04.313210378Z level=info msg="App mode production" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=sqlstore t=2026-03-10T10:15:04.313588854Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=sqlstore t=2026-03-10T10:15:04.313775822Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.325937012Z level=info msg="Starting DB migrations" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 
10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.32705131Z level=info msg="Executing migration" id="create migration_log table" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.32771569Z level=info msg="Migration successfully executed" id="create migration_log table" duration=666.103µs 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.329498035Z level=info msg="Executing migration" id="create user table" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.330078629Z level=info msg="Migration successfully executed" id="create user table" duration=579.872µs 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.331566184Z level=info msg="Executing migration" id="add unique index user.login" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.332083699Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=517.355µs 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.333835107Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.334364144Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=529.327µs 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.335649792Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.336131601Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=481.589µs 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.337272029Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.337804492Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=532.354µs 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.339403426Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.340605238Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.201442ms 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.341842015Z level=info msg="Executing migration" id="create user table v2" 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator 
t=2026-03-10T10:15:04.342437936Z level=info msg="Migration successfully executed" id="create user table v2" duration=593.537µs 2026-03-10T10:15:04.562 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.343767027Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.344306614Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=539.477µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.345383652Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.345932688Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=549.005µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.347562417Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.347962774Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=400.337µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.349012482Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.349448156Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=435.653µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.350639529Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.351356155Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=716.446µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.352760776Z level=info msg="Executing migration" id="Update user table charset" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.352958725Z level=info msg="Migration successfully executed" id="Update user table charset" duration=198.179µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.354171828Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.354859371Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=687.644µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: 
logger=migrator t=2026-03-10T10:15:04.355891988Z level=info msg="Executing migration" id="Add missing user data" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.356192689Z level=info msg="Migration successfully executed" id="Add missing user data" duration=300.562µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.357201672Z level=info msg="Executing migration" id="Add is_disabled column to user" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.357868154Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=666.211µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.359312069Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.359839181Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=539.898µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.36079749Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.361452732Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=654.972µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.362661157Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.366620934Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=3.959146ms 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.368113118Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.368778049Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=665.301µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.37010214Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.3703599Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=258.161µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.371301687Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 
vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.371818462Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=516.624µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.373852045Z level=info msg="Executing migration" id="create temp user table v1-7" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.374358991Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=507.617µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.376082618Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.37657746Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=496.235µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.377946875Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.378518341Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=571.396µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.379896653Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.380539813Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=642.92µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.384351905Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.385852895Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.501129ms 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.391159747Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.391174975Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=15.458µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.392063923Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.392379703Z level=info msg="Migration successfully executed" id="drop index 
IDX_temp_user_email - v1" duration=315.9µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.393627611Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.393930797Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=303.137µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.395006773Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.395346918Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=340.205µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.396128626Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.39645237Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=323.624µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.397518619Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.39866055Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.140938ms 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.399615642Z level=info msg="Executing migration" id="create temp_user v2" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.400016891Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=400.317µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.40085794Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.401313109Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=455.319µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.402373708Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.402843114Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=469.296µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 
bash[50688]: logger=migrator t=2026-03-10T10:15:04.404116079Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.404571157Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=455.531µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.405520989Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.405979084Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=457.975µs 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.407220231Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-10T10:15:04.563 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.407542231Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=322.04µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.408483507Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.408893962Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=410.657µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.410235996Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.410567946Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=331.028µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.411553194Z level=info msg="Executing migration" id="create star table" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.411989978Z level=info msg="Migration successfully executed" id="create star table" duration=436.874µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.412937647Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.413455263Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=517.165µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.414894518Z level=info msg="Executing migration" id="create org table v1" 
2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.415416602Z level=info msg="Migration successfully executed" id="create org table v1" duration=522.243µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.41662715Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.417143264Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=516.274µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.418276046Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.418726307Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=449.298µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.419883757Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.420365274Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=481.518µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.42193865Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.422408416Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=469.546µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.423540759Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.424021245Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=479.805µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.42510141Z level=info msg="Executing migration" id="Update org table charset" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.425291174Z level=info msg="Migration successfully executed" id="Update org table charset" duration=167.483µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.426414982Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.426595478Z level=info msg="Migration successfully 
executed" id="Update org_user table charset" duration=179.885µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.427790869Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.428024224Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=234.096µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.42892801Z level=info msg="Executing migration" id="create dashboard table" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.429384912Z level=info msg="Migration successfully executed" id="create dashboard table" duration=456.953µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.430628562Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.431137853Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=509.261µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.43248199Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.432970901Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=489.011µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.434330069Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.434802549Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=473.162µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.436004652Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.436497061Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=492.68µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.437582405Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.438079424Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=496.917µs 2026-03-10T10:15:04.564 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.439461552Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.441226295Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=1.764562ms 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.442444277Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.442978894Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=534.537µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.444013684Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.444540978Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=527.384µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.446075842Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.44661044Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=534.649µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.448769117Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.44910828Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=339.382µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.471355406Z level=info msg="Executing migration" id="drop table dashboard_v1" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.472137525Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=783.331µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.488122517Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.488322721Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=200.554µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.489425028Z level=info msg="Executing migration" id="Add column 
updated_by in dashboard - v2" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.490292525Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=867.427µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.49144706Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.4922348Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=786.097µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.493402067Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.494115218Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=713.29µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.495508737Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.495953318Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=444.859µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.496898441Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.497581124Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=682.665µs 2026-03-10T10:15:04.564 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.499063049Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.499489444Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=425.083µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.500448413Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.500881862Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=433.159µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.5020705Z level=info msg="Executing migration" id="Update dashboard table charset" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: 
logger=migrator t=2026-03-10T10:15:04.502118841Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=49.242µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.50359845Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.50365687Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=58.059µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.50470223Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.505496352Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=793.972µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.506659421Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.507583946Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=924.164µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.508757466Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.509732033Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=974.679µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.510795849Z level=info msg="Executing migration" id="Add column uid in dashboard" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.511505242Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=710.274µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.512360989Z level=info msg="Executing migration" id="Update uid column values in dashboard" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.512532819Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=172.091µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.513502098Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.513995989Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=493.953µs 2026-03-10T10:15:04.565 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.515491539Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.515999638Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=508.309µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.516887353Z level=info msg="Executing migration" id="Update dashboard title length" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.516938839Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=51.878µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.518103592Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.518639231Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=535.318µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.519796622Z level=info msg="Executing migration" id="create dashboard_provisioning" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.520196146Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=399.285µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.521348096Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.523800591Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=2.451954ms 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.525024865Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.525423449Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=398.454µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.526847135Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.527284161Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=437.036µs 2026-03-10T10:15:04.565 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.528380175Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.529559986Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.179671ms 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.531754711Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.531996923Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=242.162µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.533173719Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.533506369Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=332.631µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.534625237Z level=info msg="Executing migration" id="Add check_sum column" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.535391637Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=767.612µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.536371645Z level=info msg="Executing migration" id="Add index for dashboard_title" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.536807979Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=436.224µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.53816482Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.538351488Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=185.506µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.539293546Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.539432666Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=140.122µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.54044274Z 
level=info msg="Executing migration" id="Add index for dashboard_is_folder" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.540991594Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=548.785µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.542125089Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.543041068Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=915.748µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.544201172Z level=info msg="Executing migration" id="create data_source table" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.54470945Z level=info msg="Migration successfully executed" id="create data_source table" duration=508.149µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.545821465Z level=info msg="Executing migration" id="add index data_source.account_id" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.546264812Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=443.397µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.547270078Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.547706763Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=436.805µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.549023639Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.549422754Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=399.155µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.550443038Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.550851069Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=407.981µs 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.551968263Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 2026-03-10T10:15:04.565 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.553835848Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=1.867144ms 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.554969062Z level=info msg="Executing migration" id="create data_source table v2" 2026-03-10T10:15:04.565 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.555451502Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=482.25µs 2026-03-10T10:15:04.566 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.556346982Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 2026-03-10T10:15:04.566 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.556790761Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=443.908µs 2026-03-10T10:15:04.566 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.558131482Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 2026-03-10T10:15:04.566 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.558594025Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=461.12µs 2026-03-10T10:15:04.566 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.559731166Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 2026-03-10T10:15:04.566 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.560069117Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=338.151µs 2026-03-10T10:15:04.609 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 -- 192.168.123.104:0/2713853896 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4b54077620 msgr2=0x7f4b54077a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:04.609 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 --2- 192.168.123.104:0/2713853896 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4b54077620 0x7f4b54077a00 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f4b48009a30 tx=0x7f4b4802f240 comp rx=0 tx=0).stop 2026-03-10T10:15:04.609 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 -- 192.168.123.104:0/2713853896 shutdown_connections 2026-03-10T10:15:04.609 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 --2- 192.168.123.104:0/2713853896 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4b54113b80 0x7f4b54115f70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.609 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 --2- 192.168.123.104:0/2713853896 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4b54077f40 
0x7f4b54113640 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.609 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 --2- 192.168.123.104:0/2713853896 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4b54077620 0x7f4b54077a00 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.609 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 -- 192.168.123.104:0/2713853896 >> 192.168.123.104:0/2713853896 conn(0x7f4b541009e0 msgr2=0x7f4b54102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:04.609 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 -- 192.168.123.104:0/2713853896 shutdown_connections 2026-03-10T10:15:04.609 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 -- 192.168.123.104:0/2713853896 wait complete. 2026-03-10T10:15:04.609 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 Processor -- start 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 -- start start 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4b54077620 0x7f4b541a10a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4b54077f40 0x7f4b541a15e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4b54113b80 0x7f4b541a5970 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f4b54118b00 con 0x7f4b54113b80 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f4b54118980 con 0x7f4b54077f40 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b5ba00640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f4b54118c80 con 0x7f4b54077620 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b59775640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4b54077620 0x7f4b541a10a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b59775640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4b54077620 0x7f4b541a10a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:52818/0 (socket says 
192.168.123.104:52818) 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b59775640 1 -- 192.168.123.104:0/3261953690 learned_addr learned my addr 192.168.123.104:0/3261953690 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b59775640 1 -- 192.168.123.104:0/3261953690 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4b54077f40 msgr2=0x7f4b541a15e0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b58f74640 1 --2- 192.168.123.104:0/3261953690 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4b54077f40 0x7f4b541a15e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b59f76640 1 --2- 192.168.123.104:0/3261953690 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4b54113b80 0x7f4b541a5970 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b59775640 1 --2- 192.168.123.104:0/3261953690 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4b54077f40 0x7f4b541a15e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b59775640 1 -- 192.168.123.104:0/3261953690 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4b54113b80 msgr2=0x7f4b541a5970 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b59775640 1 --2- 192.168.123.104:0/3261953690 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4b54113b80 0x7f4b541a5970 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.605+0000 7f4b59775640 1 -- 192.168.123.104:0/3261953690 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4b541a6050 con 0x7f4b54077620 2026-03-10T10:15:04.610 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.609+0000 7f4b59775640 1 --2- 192.168.123.104:0/3261953690 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4b54077620 0x7f4b541a10a0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f4b48009a00 tx=0x7f4b480028f0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:04.611 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.609+0000 7f4b59f76640 1 --2- 192.168.123.104:0/3261953690 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4b54113b80 0x7f4b541a5970 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:15:04.611 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.609+0000 7f4b427fc640 1 -- 192.168.123.104:0/3261953690 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4b48002a90 con 0x7f4b54077620 2026-03-10T10:15:04.611 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.609+0000 7f4b427fc640 1 -- 192.168.123.104:0/3261953690 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f4b48038770 con 0x7f4b54077620 2026-03-10T10:15:04.611 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.609+0000 7f4b427fc640 1 -- 192.168.123.104:0/3261953690 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4b48042660 con 0x7f4b54077620 2026-03-10T10:15:04.612 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.609+0000 7f4b5ba00640 1 -- 192.168.123.104:0/3261953690 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4b541a62e0 con 0x7f4b54077620 2026-03-10T10:15:04.613 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.609+0000 7f4b5ba00640 1 -- 192.168.123.104:0/3261953690 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f4b541a6740 con 0x7f4b54077620 2026-03-10T10:15:04.613 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.609+0000 7f4b427fc640 1 -- 192.168.123.104:0/3261953690 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f4b48004240 con 0x7f4b54077620 2026-03-10T10:15:04.613 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.609+0000 7f4b5ba00640 1 -- 192.168.123.104:0/3261953690 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4b1c005180 con 0x7f4b54077620 2026-03-10T10:15:04.616 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.613+0000 7f4b427fc640 1 --2- 192.168.123.104:0/3261953690 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4b30077820 0x7f4b30079ce0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:04.616 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.613+0000 7f4b427fc640 1 -- 192.168.123.104:0/3261953690 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f4b480bde60 con 0x7f4b54077620 2026-03-10T10:15:04.616 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.613+0000 7f4b58f74640 1 --2- 192.168.123.104:0/3261953690 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4b30077820 0x7f4b30079ce0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:04.616 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.613+0000 7f4b427fc640 1 -- 192.168.123.104:0/3261953690 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4b48038920 con 0x7f4b54077620 2026-03-10T10:15:04.617 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.613+0000 7f4b58f74640 1 --2- 192.168.123.104:0/3261953690 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4b30077820 0x7f4b30079ce0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f4b44005fd0 tx=0x7f4b44005dc0 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:04.729 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.725+0000 7f4b5ba00640 1 -- 192.168.123.104:0/3261953690 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "mgr dump", "format": "json"} v 0) -- 0x7f4b1c005470 con 0x7f4b54077620 2026-03-10T10:15:04.731 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.729+0000 7f4b427fc640 1 -- 192.168.123.104:0/3261953690 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "mgr dump", "format": "json"}]=0 v21) ==== 74+0+192038 (secure 0 0 0) 0x7f4b4808ad70 con 0x7f4b54077620 2026-03-10T10:15:04.732 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:15:04.734 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 -- 192.168.123.104:0/3261953690 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4b30077820 msgr2=0x7f4b30079ce0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:04.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 --2- 192.168.123.104:0/3261953690 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4b30077820 0x7f4b30079ce0 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f4b44005fd0 tx=0x7f4b44005dc0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 -- 192.168.123.104:0/3261953690 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4b54077620 msgr2=0x7f4b541a10a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:04.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 --2- 192.168.123.104:0/3261953690 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4b54077620 0x7f4b541a10a0 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f4b48009a00 tx=0x7f4b480028f0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 -- 192.168.123.104:0/3261953690 shutdown_connections 2026-03-10T10:15:04.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 --2- 192.168.123.104:0/3261953690 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4b30077820 0x7f4b30079ce0 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 --2- 192.168.123.104:0/3261953690 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4b54113b80 0x7f4b541a5970 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.736 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 --2- 192.168.123.104:0/3261953690 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4b54077f40 0x7f4b541a15e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.736 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 --2- 192.168.123.104:0/3261953690 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4b54077620 0x7f4b541a10a0 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:04.736 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 -- 192.168.123.104:0/3261953690 >> 192.168.123.104:0/3261953690 conn(0x7f4b541009e0 msgr2=0x7f4b54102dd0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-10T10:15:04.736 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 -- 192.168.123.104:0/3261953690 shutdown_connections 2026-03-10T10:15:04.736 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:04.733+0000 7f4b5ba00640 1 -- 192.168.123.104:0/3261953690 wait complete. 2026-03-10T10:15:04.784 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":21,"flags":0,"active_gid":24422,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":3326026257}]},"active_addr":"192.168.123.104:6800/3326026257","active_change":"2026-03-10T10:14:42.257167+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24427,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to 
authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.104:8443/","prometheus":"http://192.168.123.104:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":65,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":1237213738}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":3019082007}]},{"n
ame":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":283979197}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.104:0","nonce":1278760722}]}]} 2026-03-10T10:15:04.786 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T10:15:04.786 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T10:15:04.786 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd dump --format=json 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.562941636Z level=info msg="Executing migration" id="Add column with_credentials" 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.563890336Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=950.142µs 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.565164023Z level=info msg="Executing migration" id="Add secure json data column" 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.566228969Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.064496ms 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.567449897Z level=info msg="Executing migration" id="Update data_source table charset" 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.567608624Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=158.397µs 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.568635569Z level=info msg="Executing migration" id="Update initial version to 1" 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.568895043Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=259.505µs 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.570127513Z level=info msg="Executing migration" id="Add read_only data column" 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.571086101Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=957.166µs 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.572356481Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.572715712Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=360.082µs 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator 
t=2026-03-10T10:15:04.573725154Z level=info msg="Executing migration" id="Update json_data with nulls" 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.5739642Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=239.647µs 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.575141417Z level=info msg="Executing migration" id="Add uid column" 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.576048259Z level=info msg="Migration successfully executed" id="Add uid column" duration=907.013µs 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.577232519Z level=info msg="Executing migration" id="Update uid value" 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.577478107Z level=info msg="Migration successfully executed" id="Update uid value" duration=245.338µs 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.57862182Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 2026-03-10T10:15:04.814 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.579130208Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=508.417µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.580049203Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.580564314Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=514.97µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.581751209Z level=info msg="Executing migration" id="create api_key table" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.582256853Z level=info msg="Migration successfully executed" id="create api_key table" duration=505.684µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.58369754Z level=info msg="Executing migration" id="add index api_key.account_id" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.584203905Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=505.193µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.585323142Z level=info msg="Executing migration" id="add index api_key.key" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.585850536Z level=info 
msg="Migration successfully executed" id="add index api_key.key" duration=527.604µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.587004519Z level=info msg="Executing migration" id="add index api_key.account_id_name" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.587521364Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=518.067µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.589011654Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.589535652Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=523.948µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.590758423Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.591356399Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=597.846µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.592592274Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.59314166Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=529.098µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.594322603Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.596792471Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.469457ms 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.597769153Z level=info msg="Executing migration" id="create api_key table v2" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.598252054Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=482.761µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.599709783Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.600234493Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=524.771µs 2026-03-10T10:15:04.815 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.601331279Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.601853623Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=522.255µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.603342881Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.60381911Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=477.111µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.605013519Z level=info msg="Executing migration" id="copy api_key v1 to v2" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.605314941Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=301.352µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.606379958Z level=info msg="Executing migration" id="Drop old table api_key_v1" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.606782058Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=402.071µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.607923608Z level=info msg="Executing migration" id="Update api_key table charset" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.608082354Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=158.866µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.609312479Z level=info msg="Executing migration" id="Add expires to api_key table" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.610395158Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.081947ms 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.611822612Z level=info msg="Executing migration" id="Add service account foreign key" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.612749832Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=927.36µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.613671281Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 
2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.613905699Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=233.296µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.615241751Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.616192926Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=951.214µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.617158548Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.618175044Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.016346ms 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.619363621Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.619874535Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=511.204µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.621089621Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.621508603Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=419.113µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.622727016Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.623215748Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=488.953µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.624197389Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.624693615Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=495.023µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.62612195Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 
bash[50688]: logger=migrator t=2026-03-10T10:15:04.626617986Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=495.896µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.627769814Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.628278694Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=508.979µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.629435172Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.629599869Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=165.037µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.631160541Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.631317213Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=156.772µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.632324963Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.633310943Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=986.11µs 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.634455839Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 2026-03-10T10:15:04.815 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.63546964Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.013922ms 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.636624194Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.636794882Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=171.429µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.638254055Z level=info msg="Executing migration" id="create quota table v1" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: 
logger=migrator t=2026-03-10T10:15:04.638718782Z level=info msg="Migration successfully executed" id="create quota table v1" duration=464.677µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.639879378Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.640361326Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=481.738µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.641593094Z level=info msg="Executing migration" id="Update quota table charset" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.641774302Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=181.529µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.643241119Z level=info msg="Executing migration" id="create plugin_setting table" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.643775807Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=534.637µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.644889385Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.645442306Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=553.052µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.646588825Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.647699937Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.110861ms 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.648605327Z level=info msg="Executing migration" id="Update plugin_setting table charset" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.648756609Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=151.132µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.650124761Z level=info msg="Executing migration" id="create session table" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.650646986Z level=info msg="Migration successfully executed" id="create session table" 
duration=523.287µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.651812671Z level=info msg="Executing migration" id="Drop old table playlist table" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.652003106Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=190.715µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.652938792Z level=info msg="Executing migration" id="Drop old table playlist_item table" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.653140288Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=201.195µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.654584393Z level=info msg="Executing migration" id="create playlist table v2" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.655066681Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=482.029µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.656197783Z level=info msg="Executing migration" id="create playlist item table v2" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.656638955Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=441.083µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.658205818Z level=info msg="Executing migration" id="Update playlist table charset" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.658266992Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=61.724µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.659731014Z level=info msg="Executing migration" id="Update playlist_item table charset" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.659901252Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=170.117µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.660889104Z level=info msg="Executing migration" id="Add playlist column created_at" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.661952959Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.063744ms 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.663061698Z level=info msg="Executing migration" id="Add 
playlist column updated_at" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.664067304Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.006938ms 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.665073191Z level=info msg="Executing migration" id="drop preferences table v2" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.6652603Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=186.979µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.666709564Z level=info msg="Executing migration" id="drop preferences table v3" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.666897915Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=188.231µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.667863537Z level=info msg="Executing migration" id="create preferences table v3" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.668358069Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=494.612µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.66958611Z level=info msg="Executing migration" id="Update preferences table charset" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.669650631Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=64.881µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.67089943Z level=info msg="Executing migration" id="Add column team_id in preferences" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.6719335Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.0351ms 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.673215642Z level=info msg="Executing migration" id="Update team_id column values in preferences" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.673427767Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=212.095µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.674558046Z level=info msg="Executing migration" id="Add column week_start in preferences" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.675614367Z level=info msg="Migration 
successfully executed" id="Add column week_start in preferences" duration=1.05637ms 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.676565582Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.677592478Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.026976ms 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.67904086Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.679239159Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=199.241µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.680550506Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.681149805Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=599.268µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.682431876Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.683012389Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=580.232µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.684539969Z level=info msg="Executing migration" id="create alert table v1" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.685121424Z level=info msg="Migration successfully executed" id="create alert table v1" duration=582.297µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.686404538Z level=info msg="Executing migration" id="add index alert org_id & id " 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.686963381Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=558.752µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.688470623Z level=info msg="Executing migration" id="add index alert state" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.688984702Z level=info msg="Migration successfully executed" id="add index alert state" duration=514.129µs 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator 
t=2026-03-10T10:15:04.690159003Z level=info msg="Executing migration" id="add index alert dashboard_id" 2026-03-10T10:15:04.816 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.690680637Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=522.556µs 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.691921711Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.69237158Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=449.71µs 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.693559096Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.694132055Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=573.089µs 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.695776393Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.696325437Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=549.244µs 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.69727074Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.700084851Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=2.8139ms 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.70122075Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.7016705Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=449.8µs 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.703181719Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.703706508Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=524.64µs 2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 
vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.705059362Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.705350185Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=291.114µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.706484691Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.706882032Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=397.121µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.708208738Z level=info msg="Executing migration" id="create alert_notification table v1"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.70866033Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=451.683µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.709561642Z level=info msg="Executing migration" id="Add column is_default"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.710939813Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.377981ms
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.712091882Z level=info msg="Executing migration" id="Add column frequency"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.713224906Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.132905ms
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.714666105Z level=info msg="Executing migration" id="Add column send_reminder"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.71583675Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.170695ms
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.716782895Z level=info msg="Executing migration" id="Add column disable_resolve_message"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.717949972Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.167058ms
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.719022503Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.71954079Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=518.237µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.720963895Z level=info msg="Executing migration" id="Update alert table charset"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.72113281Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=169.285µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.722238794Z level=info msg="Executing migration" id="Update alert_notification table charset"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.722402008Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=163.254µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.723402515Z level=info msg="Executing migration" id="create notification_journal table v1"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.723849599Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=447.123µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.725302479Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.725835573Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=532.814µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.72732879Z level=info msg="Executing migration" id="drop alert_notification_journal"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.727803144Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=474.324µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.728755772Z level=info msg="Executing migration" id="create alert_notification_state table v1"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.72926933Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=513.498µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.730910953Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.73152555Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=614.446µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.732550211Z level=info msg="Executing migration" id="Add for to alert table"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.734563948Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.013457ms
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.735820833Z level=info msg="Executing migration" id="Add column uid in alert_notification"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.737108104Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.287451ms
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.738629313Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.738866976Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=238.003µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.739917236Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.740469265Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=552.159µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.741752861Z level=info msg="Executing migration" id="Remove unique index org_id_name"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.742352459Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=599.378µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.744021082Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.745329363Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.308461ms
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.746580167Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.74683907Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=259.052µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.747920577Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.748488147Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=566.587µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.749700057Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.750308593Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=608.655µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.751681404Z level=info msg="Executing migration" id="Drop old annotation table v4"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.751917104Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=237.803µs
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.752944369Z level=info msg="Executing migration" id="create annotation table v5"
2026-03-10T10:15:04.817 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.753538318Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=594.078µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.755175903Z level=info msg="Executing migration" id="add index annotation 0 v3"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.755768068Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=592.315µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.757070358Z level=info msg="Executing migration" id="add index annotation 1 v3"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.757733735Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=663.407µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.75907591Z level=info msg="Executing migration" id="add index annotation 2 v3"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.759696687Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=620.437µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.761010289Z level=info msg="Executing migration" id="add index annotation 3 v3"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.761720263Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=709.113µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.764347956Z level=info msg="Executing migration" id="add index annotation 4 v3"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.765008769Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=659.44µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.766383664Z level=info msg="Executing migration" id="Update annotation table charset"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.766584649Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=201.295µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.767928496Z level=info msg="Executing migration" id="Add column region_id to annotation table"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.769312327Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.383742ms
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.770641166Z level=info msg="Executing migration" id="Drop category_id index"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.77123268Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=591.874µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.772244628Z level=info msg="Executing migration" id="Add column tags to annotation table"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.773535365Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.290727ms
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.775045374Z level=info msg="Executing migration" id="Create annotation_tag table v2"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.775582084Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=536.861µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.776546233Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.777103955Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=557.372µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.778347163Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.778947694Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=600.42µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.780283126Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.783462767Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=3.180283ms
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.784691169Z level=info msg="Executing migration" id="Create annotation_tag table v3"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.785207622Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=516.484µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.786315089Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.786946507Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=630.777µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.788622764Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.788922914Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=300.171µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.790076627Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.790498834Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=422.088µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.791567929Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.791803628Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=235.63µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.793102312Z level=info msg="Executing migration" id="Add created time to annotation table"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.794459404Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=1.355639ms
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.795699658Z level=info msg="Executing migration" id="Add updated time to annotation table"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.797014971Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.315173ms
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.798180506Z level=info msg="Executing migration" id="Add index for created in annotation table"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.79873955Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=558.903µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.800063971Z level=info msg="Executing migration" id="Add index for updated in annotation table"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.800600601Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=536.37µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.801908852Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.802172985Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=263.932µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.803326486Z level=info msg="Executing migration" id="Add epoch_end column"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.804657361Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.330834ms
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.805639142Z level=info msg="Executing migration" id="Add index for epoch_end"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.806175933Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=536.601µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.807725925Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.807958408Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=232.164µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.808911266Z level=info msg="Executing migration" id="Move region to single row"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.809250791Z level=info msg="Migration successfully executed" id="Move region to single row" duration=340.657µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.810412758Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.810999553Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=586.624µs
2026-03-10T10:15:04.818 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.812218607Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
2026-03-10T10:15:05.063 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.812915648Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=697.001µs
2026-03-10T10:15:05.063 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.815410141Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
2026-03-10T10:15:05.063 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.815979693Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=569.402µs
2026-03-10T10:15:05.063 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.816941178Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
2026-03-10T10:15:05.063 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.817466007Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=524.989µs
2026-03-10T10:15:05.063 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.818918507Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
2026-03-10T10:15:05.063 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.81946161Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=542.732µs
2026-03-10T10:15:05.063 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.820460183Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
2026-03-10T10:15:05.063 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.820993919Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=533.575µs
2026-03-10T10:15:05.063 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.82242008Z level=info msg="Executing migration" id="Increase tags column to length 4096"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.822616346Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=196.286µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.823919457Z level=info msg="Executing migration" id="create test_data table"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.824410042Z level=info msg="Migration successfully executed" id="create test_data table" duration=490.404µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.826000028Z level=info msg="Executing migration" id="create dashboard_version table v1"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.826459555Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=459.496µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.827787914Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.828342228Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=554.875µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.829952402Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.830496267Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=543.695µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.831887322Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.832128753Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=241.41µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.83307108Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.833393814Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=320.688µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.834754432Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.834958412Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=204.241µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.835920327Z level=info msg="Executing migration" id="create team table"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.836431931Z level=info msg="Migration successfully executed" id="create team table" duration=510.933µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.837641698Z level=info msg="Executing migration" id="add index team.org_id"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.838283576Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=641.637µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.839450513Z level=info msg="Executing migration" id="add unique index team_org_id_name"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.83998497Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=535.278µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.841081175Z level=info msg="Executing migration" id="Add column uid in team"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.842652667Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.572273ms
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.843708798Z level=info msg="Executing migration" id="Update uid column values in team"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.843980504Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=271.466µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.844912974Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.845490141Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=577.306µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.84701168Z level=info msg="Executing migration" id="create team member table"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.847418339Z level=info msg="Migration successfully executed" id="create team member table" duration=406.467µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.848708154Z level=info msg="Executing migration" id="add index team_member.org_id"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.849158054Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=450.26µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.850448591Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.850924039Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=476.59µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.852247437Z level=info msg="Executing migration" id="add index team_member.team_id"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.852669044Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=421.857µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.85422683Z level=info msg="Executing migration" id="Add column email to team table"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.855563435Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=1.336293ms
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.856512715Z level=info msg="Executing migration" id="Add column external to team_member table"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.857948164Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=1.435308ms
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.859057313Z level=info msg="Executing migration" id="Add column permission to team_member table"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.860396932Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=1.339399ms
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.861619402Z level=info msg="Executing migration" id="create dashboard acl table"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.862077798Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=458.405µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.863353016Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.863891001Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=538.014µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.865138968Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.865755069Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=616.089µs
2026-03-10T10:15:05.064 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.86771182Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.868209328Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=496.816µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.869358151Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.869843016Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=484.985µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.871189357Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.871687006Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=497.93µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.872776008Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.87325955Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=483.663µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.874419354Z level=info msg="Executing migration" id="add index dashboard_permission"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.874946146Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=526.902µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.876109216Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.876477042Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=367.926µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.877716114Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.877929252Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=212.827µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.879035475Z level=info msg="Executing migration" id="create tag table"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.879460769Z level=info msg="Migration successfully executed" id="create tag table" duration=425.384µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.880891449Z level=info msg="Executing migration" id="add index tag.key_value"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.881381684Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=488.852µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.882601278Z level=info msg="Executing migration" id="create login attempt table"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.883005181Z level=info msg="Migration successfully executed" id="create login attempt table" duration=404.424µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.884164956Z level=info msg="Executing migration" id="add index login_attempt.username"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.884631226Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=466.639µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.886179544Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.886682482Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=502.237µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.88761357Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.893057356Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=5.439478ms
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.894464552Z level=info msg="Executing migration" id="create login_attempt v2"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.894987296Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=522.654µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.896290297Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.896766937Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=477.041µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.89801692Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.898262688Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=245.438µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.899257683Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.89961973Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=361.946µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.900774353Z level=info msg="Executing migration" id="create user auth table"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.901174019Z level=info msg="Migration successfully executed" id="create user auth table" duration=399.475µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.902260365Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.902747064Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=486.568µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.903925082Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.904009949Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=84.837µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.905447522Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.907322711Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=1.873066ms
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.90846952Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.910036123Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.566552ms
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.911032021Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.912485443Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=1.453372ms
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.913713012Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.91521237Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=1.499728ms
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.916223046Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.916656495Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=433.108µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.917787485Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.919263227Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=1.476795ms
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.920560247Z level=info msg="Executing migration" id="create server_lock table"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.920960124Z level=info msg="Migration successfully executed" id="create server_lock table" duration=399.677µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.922162127Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.922597639Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=435.533µs
2026-03-10T10:15:05.065 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.923717197Z level=info msg="Executing migration" id="create user auth token table"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.924107977Z level=info msg="Migration successfully executed" id="create user auth token table" duration=390.619µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.925944112Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.926401425Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=457.803µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.927565937Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.927996952Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=432.257µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.92914827Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.929631411Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=482.931µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.931216528Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.932794863Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=1.578185ms
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.933662462Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.93410093Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=438.378µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.935234535Z level=info msg="Executing migration" id="create cache_data table"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.935697198Z level=info msg="Migration successfully executed" id="create cache_data table" duration=462.613µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.937031678Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.937469875Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=436.954µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.938648253Z level=info msg="Executing migration" id="create short_url table v1"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.939072264Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=424.041µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.940313731Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.940740367Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=425.634µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.942216912Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.942276623Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=59.821µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.943149903Z level=info msg="Executing migration" id="delete alert_definition table"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.943220054Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=70.312µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.944062385Z level=info msg="Executing migration" id="recreate alert_definition table"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.944458794Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=396.169µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.945495649Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.946026811Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=531.212µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.947147261Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.947659586Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=512.295µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.948801036Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.948967847Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=166.901µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.9498271Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.950338324Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=511.435µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.951328672Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.951812414Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=484.053µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.952635489Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.953875834Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.238851ms
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.955547462Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.956208666Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=661.605µs
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.957810164Z level=info msg="Executing migration" id="Add column paused in alert_definition"
2026-03-10T10:15:05.066 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.960833515Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=3.02283ms
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.961958463Z level=info msg="Executing migration" id="drop alert_definition table"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.962474386Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=515.712µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.963890738Z level=info msg="Executing migration" id="delete alert_definition_version table"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.964072567Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=181.979µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.965384446Z level=info msg="Executing migration" id="recreate alert_definition_version table"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.966269957Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=885.32µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.967515321Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.96805619Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=540.688µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.96898889Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.969510864Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=521.863µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.970578596Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.970757169Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=178.473µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.971686272Z level=info msg="Executing migration" id="drop alert_definition_version table"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.972200773Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=514.109µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.973358192Z level=info msg="Executing migration" id="create alert_instance table"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.974099234Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=739.629µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.975130669Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.97565126Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=521.533µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.976511024Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.977020865Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=509.45µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.978151965Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.979875431Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=1.723266ms
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.980766933Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.981348058Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=580.814µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.982346591Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.982847997Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=501.466µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.983688705Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.990807135Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=7.118389ms
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.991867674Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:04 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:04.999482251Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=7.614036ms
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.000551055Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.001076295Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=525.111µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.002053067Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.002591963Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=539.006µs
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.003988929Z level=info msg="Executing migration" id="add current_reason column related to current_state"
2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.005703407Z level=info msg="Migration successfully executed" id="add current_reason column related to
current_state" duration=1.713737ms 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.00674469Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.00841677Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=1.673122ms 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.009349279Z level=info msg="Executing migration" id="create alert_rule table" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.009875Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=525.701µs 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.011047046Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.011532322Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=485.326µs 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.012643264Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.013121067Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=477.531µs 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.014266914Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.015662958Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.394673ms 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.016910817Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.017058642Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=147.736µs 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.018065692Z level=info msg="Executing migration" id="add column for to alert_rule" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.02002716Z level=info msg="Migration successfully executed" id="add 
column for to alert_rule" duration=1.962572ms 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.020938081Z level=info msg="Executing migration" id="add column annotations to alert_rule" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.022684719Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=1.746598ms 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.02369835Z level=info msg="Executing migration" id="add column labels to alert_rule" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.025375469Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=1.676978ms 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.026455234Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.026918068Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=462.874µs 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.027805443Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.028293382Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=488.02µs 2026-03-10T10:15:05.067 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.02928395Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.031011835Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=1.726021ms 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.031996612Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.033714788Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=1.718005ms 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.034684446Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.035228312Z level=info msg="Migration successfully executed" id="add index in 
alert_rule on org_id, dashboard_uid and panel_id columns" duration=543.314µs 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.036450011Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.038225765Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=1.775984ms 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.039323873Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.041077104Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=1.753521ms 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.042129116Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.04229194Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=162.724µs 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.043201997Z level=info msg="Executing migration" id="create alert_rule_version table" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.043848755Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=646.696µs 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.045034596Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.045590424Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=555.416µs 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.046765576Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.047399929Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=632.961µs 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.048544295Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-10T10:15:05.068 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.048685228Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=140.793µs 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.049482495Z level=info msg="Executing migration" id="add column for to alert_rule_version" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.051321016Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=1.837849ms 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.052246643Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.054042302Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=1.795711ms 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.055082374Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.056873384Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=1.790951ms 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.057698914Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.05949214Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=1.792975ms 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.060441661Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.062197297Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=1.755656ms 2026-03-10T10:15:05.068 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.063248849Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-10T10:15:05.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:04 vm04 bash[55742]: ts=2026-03-10T10:15:04.884Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003329401s 2026-03-10T10:15:05.314 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:05 vm07 bash[23367]: audit 2026-03-10T10:15:04.730958+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 
192.168.123.104:0/3261953690' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T10:15:05.314 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:05 vm07 bash[23367]: audit 2026-03-10T10:15:04.730958+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.104:0/3261953690' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T10:15:05.314 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.063540663Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=291.825µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.066335878Z level=info msg="Executing migration" id=create_alert_configuration_table 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.066832054Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=496.034µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.067968555Z level=info msg="Executing migration" id="Add column default in alert_configuration" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.069941837Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=1.974354ms 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.07111777Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.071264444Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=70.751µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.072169281Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.074275932Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=2.106501ms 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.075319348Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.075841223Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=521.855µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.077079062Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-10T10:15:05.315 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.078929774Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=1.850672ms 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.079960788Z level=info msg="Executing migration" id=create_ngalert_configuration_table 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.080379739Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=417.449µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.081597351Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.082087886Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=490.484µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.083207005Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.085084808Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=1.877622ms 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.086214645Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.086656209Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=441.164µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.08810916Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.088590078Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=480.436µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.089750032Z level=info msg="Executing migration" id="create alert_image table" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.090169936Z level=info msg="Migration successfully executed" id="create alert_image table" duration=419.633µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.091384402Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-10T10:15:05.315 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.091863404Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=479.254µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.093050419Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.093186323Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=135.923µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.094290163Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.094853122Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=563.01µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.096115248Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.096664392Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=548.222µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.097590339Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.097868969Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.09904321Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.099421485Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=378.175µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.100244131Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.10077472Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=529.158µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.101646817Z level=info msg="Executing migration" id="add 
last_applied column to alert_configuration_history" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.103786019Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=2.139081ms 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.104782698Z level=info msg="Executing migration" id="create library_element table v1" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.105284915Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=502.076µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.106503197Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.106992691Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=489.304µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.108133439Z level=info msg="Executing migration" id="create library_element_connection table v1" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.108540709Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=407.34µs 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.109702757Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 2026-03-10T10:15:05.315 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.110171372Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=468.815µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.111231759Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.111688141Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=456.151µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.112736337Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.112770771Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=34.735µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.113601881Z level=info msg="Executing 
migration" id="alter library_element model to mediumtext" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.113660501Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=57.888µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.114611255Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.114829592Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=218.298µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.115623553Z level=info msg="Executing migration" id="create data_keys table" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.116129517Z level=info msg="Migration successfully executed" id="create data_keys table" duration=505.865µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.117222647Z level=info msg="Executing migration" id="create secrets table" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.11766397Z level=info msg="Migration successfully executed" id="create secrets table" duration=441.965µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.118755147Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.127673235Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=8.917317ms 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.128668511Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.130701534Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.032841ms 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.131736635Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.131890913Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=154.198µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.132728696Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: 
logger=migrator t=2026-03-10T10:15:05.142190558Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=9.461592ms 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.143392912Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.153104111Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=9.710327ms 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.154267942Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.15472306Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=455.169µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.155838562Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.156332303Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=493.72µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.157738777Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.157962665Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=223.848µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.15901029Z level=info msg="Executing migration" id="create permission table" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.159547592Z level=info msg="Migration successfully executed" id="create permission table" duration=537.182µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.16070423Z level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.161232525Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=528.505µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.162350431Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.162850975Z level=info msg="Migration successfully executed" id="add unique index 
role_id_action_scope" duration=500.744µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.164056794Z level=info msg="Executing migration" id="create role table" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.164480264Z level=info msg="Migration successfully executed" id="create role table" duration=420.614µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.165948885Z level=info msg="Executing migration" id="add column display_name" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.168078228Z level=info msg="Migration successfully executed" id="add column display_name" duration=2.129142ms 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.169195301Z level=info msg="Executing migration" id="add column group_name" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.171275954Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.080732ms 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.172416101Z level=info msg="Executing migration" id="add index role.org_id" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.172873334Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=456.131µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.174460174Z level=info msg="Executing migration" id="add unique index role_org_id_name" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.174954305Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=493.961µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.176145118Z level=info msg="Executing migration" id="add index role_org_id_uid" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.176636334Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=491.117µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.177845771Z level=info msg="Executing migration" id="create team role table" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.178268259Z level=info msg="Migration successfully executed" id="create team role table" duration=422.117µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.179763849Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-10T10:15:05.316 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.180260626Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=496.837µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.181465585Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.181992327Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=526.753µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.183475995Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.184001175Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=524.168µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.185146171Z level=info msg="Executing migration" id="create user role table" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.185599356Z level=info msg="Migration successfully executed" id="create user role table" duration=453.154µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.187091039Z level=info msg="Executing migration" id="add index user_role.org_id" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.187678124Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=587.055µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.188883233Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.189350304Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=467.012µs 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.19104212Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-10T10:15:05.316 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.191516737Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=474.555µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.192645632Z level=info msg="Executing migration" id="create builtin role table" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.193069303Z level=info msg="Migration successfully executed" id="create 
builtin role table" duration=423.44µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.194258461Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.194744739Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=487.19µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.196236652Z level=info msg="Executing migration" id="add index builtin_role.name" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.196707551Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=470.808µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.197862926Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.200091996Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.229339ms 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.201200543Z level=info msg="Executing migration" id="add index builtin_role.org_id" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.201680748Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=480.365µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.2032567Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.203742916Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=486.216µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.20500438Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.205672435Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=668.267µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.206723848Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.207194586Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=472.261µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator 
t=2026-03-10T10:15:05.208450129Z level=info msg="Executing migration" id="create seed assignment table" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.208882565Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=432.316µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.21009122Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.210677233Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=585.982µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.211914091Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.21443304Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.518748ms 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.21577304Z level=info msg="Executing migration" id="permission kind migration" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.218128355Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.355144ms 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.219175639Z level=info msg="Executing migration" id="permission attribute migration" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.221307837Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.131899ms 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.222750228Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.22502959Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.279042ms 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.226881766Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.227419139Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=537.633µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.228724443Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 
bash[50688]: logger=migrator t=2026-03-10T10:15:05.229268569Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=543.754µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.231099314Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.231620888Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=521.665µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.233252101Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.233758326Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=506.114µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.235186571Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.235809322Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=622.692µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.237058643Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.237221998Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=163.215µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.238278659Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.23842375Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=145.16µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.239630241Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.239950689Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=320.448µs 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.240826573Z level=info msg="Executing migration" id="dashboard permissions" 2026-03-10T10:15:05.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.241150657Z level=info msg="Migration 
successfully executed" id="dashboard permissions" duration=324.385µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.24221339Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.242612665Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=399.385µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.243800681Z level=info msg="Executing migration" id="drop managed folder create actions" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.244017927Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=216.213µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.245136082Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.245466018Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=329.865µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.24653369Z level=info msg="Executing migration" id="create query_history_star table v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.246970275Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=443.738µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.248120582Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.248629019Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=508.418µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.250085988Z level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.252377423Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.291435ms 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.253544019Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.253592009Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=48.01µs 
2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.254781298Z level=info msg="Executing migration" id="create correlation table v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.255363804Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=582.737µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.257159595Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.257701256Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=541.67µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.258951508Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.259459205Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=507.887µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.26066266Z level=info msg="Executing migration" id="add correlation config column" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.263065393Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.402633ms 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.264236769Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.264799288Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=562.51µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.266272687Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.26685845Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=585.743µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.267852304Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.274459571Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=6.606977ms 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator 
t=2026-03-10T10:15:05.276028368Z level=info msg="Executing migration" id="create correlation v2" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.276596568Z level=info msg="Migration successfully executed" id="create correlation v2" duration=567.739µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.277587116Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.278113518Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=526.211µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.279302196Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.27983998Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=537.794µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.281326703Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.281899571Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=572.849µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.283026004Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.283239692Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=213.519µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.284130003Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.284562911Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=432.797µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.285701364Z level=info msg="Executing migration" id="add provisioning column" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.287953437Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.251932ms 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.289036547Z level=info msg="Executing migration" id="create entity_events table" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 
bash[50688]: logger=migrator t=2026-03-10T10:15:05.289466048Z level=info msg="Migration successfully executed" id="create entity_events table" duration=430.474µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.290859728Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.291407831Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=548.223µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.292681177Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.292980144Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.294051443Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.29434925Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.295321934Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.295800366Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=478.362µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.297043566Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.297588893Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=544.385µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.298828496Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.299356331Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=527.765µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.300855297Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T10:15:05.318 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.301373795Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=518.297µs 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.302692504Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-10T10:15:05.318 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.303214298Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=521.674µs 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.304081556Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.30457646Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=494.884µs 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.305844496Z level=info msg="Executing migration" id="Drop public config table" 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.306282813Z level=info msg="Migration successfully executed" id="Drop public config table" duration=438.237µs 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.307482882Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.308041535Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=558.623µs 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.309327534Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.309846403Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=518.858µs 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.311138322Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.311666397Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=529.818µs 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.312632219Z level=info msg="Executing migration" id="create index 
UQE_dashboard_public_config_access_token - v2" 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.313136681Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=505.443µs 2026-03-10T10:15:05.319 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.31460518Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.322850023Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=8.244433ms 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.3240384Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.326543884Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.505304ms 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.327823832Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.330320881Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.496958ms 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.331747483Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.331997278Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=249.956µs 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.332983268Z level=info msg="Executing migration" id="add share column" 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.335704755Z level=info msg="Migration successfully executed" id="add share column" duration=2.721196ms 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.336857425Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.33707403Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=216.504µs 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.338182138Z level=info msg="Executing migration" id="create file table" 2026-03-10T10:15:05.572 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.338698451Z level=info msg="Migration successfully executed" id="create file table" duration=516.724µs 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.340312683Z level=info msg="Executing migration" id="file table idx: path natural pk" 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.340855816Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=543.243µs 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.342054301Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.342622601Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=568.08µs 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.343907459Z level=info msg="Executing migration" id="create file_meta table" 2026-03-10T10:15:05.572 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.344350736Z level=info msg="Migration successfully executed" id="create file_meta table" duration=444.259µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.345887624Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.346409337Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=521.924µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.348032876Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.348174931Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=142.085µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.349132377Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.349274031Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=141.806µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.350451528Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.350809816Z level=info msg="Migration 
successfully executed" id="managed permissions migration" duration=358.278µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.352132263Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.352339149Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=206.455µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.353315421Z level=info msg="Executing migration" id="RBAC action name migrator" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.354015658Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=700.227µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.355195298Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.357813473Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=2.618044ms 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.359098531Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.359297101Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=198.741µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.360634095Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.361296391Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=662.316µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.362750113Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.363067756Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=318.082µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.364065898Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.36427604Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=209.992µs 2026-03-10T10:15:05.573 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.365511876Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.365847723Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=334.634µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.366981838Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.369874355Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.892097ms 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.371063283Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.373712295Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=2.648831ms 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.375122155Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.375647406Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=525.08µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.376530854Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.404726839Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=28.192309ms 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.406422924Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.407104745Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=682.121µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.407980118Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.408580078Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=599.959µs 2026-03-10T10:15:05.573 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.409762283Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.417940953Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=8.175584ms 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.419463322Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.422004554Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.543275ms 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.423157274Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.423395498Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=238.265µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.424271092Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.424467799Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=196.707µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.425273822Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.425480007Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=204.542µs 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.426571694Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-10T10:15:05.573 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.426781776Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=211.784µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.427650757Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.427857974Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=207.216µs 2026-03-10T10:15:05.574 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.428741472Z level=info msg="Executing migration" id="create folder table" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.429230455Z level=info msg="Migration successfully executed" id="create folder table" duration=488.832µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.430274011Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.430906081Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=633.573µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.432054563Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.432676945Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=622.341µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.433865983Z level=info msg="Executing migration" id="Update folder title length" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.433896921Z level=info msg="Migration successfully executed" id="Update folder title length" duration=30.949µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.434944926Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.435827393Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=882.356µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.436982347Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.437511995Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=529.638µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.438515828Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.43907945Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=563.681µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 
bash[50688]: logger=migrator t=2026-03-10T10:15:05.440203266Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.440519697Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=316.391µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.44134183Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.44156698Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=224.97µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.442647707Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.44316375Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=516.152µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.444039504Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.444586945Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=546.21µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.445440878Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.446002847Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=563.04µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.447012751Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.447573987Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=561.047µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.448363711Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.448854307Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=490.726µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator 
t=2026-03-10T10:15:05.449690728Z level=info msg="Executing migration" id="create anon_device table" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.450151186Z level=info msg="Migration successfully executed" id="create anon_device table" duration=460.509µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.451117609Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.451697352Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=579.712µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.452848619Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.453382635Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=534.047µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.454627568Z level=info msg="Executing migration" id="create signing_key table" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.455131888Z level=info msg="Migration successfully executed" id="create signing_key table" duration=504.23µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.456190604Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.456711976Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=521.403µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.45778583Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.458402281Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=617.942µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.45939937Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.459657923Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=258.892µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.460525411Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 
2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.463518686Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.993204ms 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.464677909Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.465113041Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=435.502µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.466132222Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.466685143Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=551.519µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.467807167Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.468335441Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=528.266µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.469151615Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.469714725Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=563.23µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.470694564Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.471248457Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=553.712µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.472367624Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.47290689Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=538.856µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.473755073Z level=info 
msg="Executing migration" id="create sso_setting table" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.474282488Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=527.284µs 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.475688461Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-10T10:15:05.574 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.476166282Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=478.372µs 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.477005157Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.477244835Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=238.625µs 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.478462387Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.478632513Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=169.966µs 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.479537492Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.48224341Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.705708ms 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.483377657Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.486041536Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.663769ms 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.487226787Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.487547666Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=320.869µs 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=migrator t=2026-03-10T10:15:05.488520632Z level=info msg="migrations 
completed" performed=547 skipped=0 duration=1.161494568s 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=sqlstore t=2026-03-10T10:15:05.489287131Z level=info msg="Created default organization" 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=secrets t=2026-03-10T10:15:05.490466201Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=plugin.store t=2026-03-10T10:15:05.499549317Z level=info msg="Loading plugins..." 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=local.finder t=2026-03-10T10:15:05.548992793Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=plugin.store t=2026-03-10T10:15:05.549055851Z level=info msg="Plugins loaded" count=55 duration=49.506715ms 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=query_data t=2026-03-10T10:15:05.550837996Z level=info msg="Query Service initialization" 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=live.push_http t=2026-03-10T10:15:05.552511528Z level=info msg="Live Push Gateway initialization" 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ngalert.migration t=2026-03-10T10:15:05.561316826Z level=info msg=Starting 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ngalert.migration t=2026-03-10T10:15:05.561708517Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ngalert.migration orgID=1 t=2026-03-10T10:15:05.562053831Z level=info msg="Migrating alerts for organisation" 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ngalert.migration orgID=1 t=2026-03-10T10:15:05.562484454Z level=info msg="Alerts found to migrate" alerts=0 2026-03-10T10:15:05.575 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ngalert.migration t=2026-03-10T10:15:05.563435479Z level=info msg="Completed alerting migration" 2026-03-10T10:15:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:05 vm04 bash[28289]: audit 2026-03-10T10:15:04.730958+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.104:0/3261953690' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T10:15:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:05 vm04 bash[28289]: audit 2026-03-10T10:15:04.730958+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.104:0/3261953690' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T10:15:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:05 vm04 bash[20742]: audit 2026-03-10T10:15:04.730958+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 
192.168.123.104:0/3261953690' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T10:15:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:05 vm04 bash[20742]: audit 2026-03-10T10:15:04.730958+0000 mon.c (mon.2) 25 : audit [DBG] from='client.? 192.168.123.104:0/3261953690' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ngalert.state.manager t=2026-03-10T10:15:05.572073275Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=infra.usagestats.collector t=2026-03-10T10:15:05.573435336Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=provisioning.datasources t=2026-03-10T10:15:05.57471938Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=provisioning.datasources t=2026-03-10T10:15:05.579853419Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=provisioning.alerting t=2026-03-10T10:15:05.585237384Z level=info msg="starting to provision alerting" 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=provisioning.alerting t=2026-03-10T10:15:05.585267751Z level=info msg="finished to provision alerting" 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ngalert.state.manager t=2026-03-10T10:15:05.58537984Z level=info msg="Warming state cache for startup" 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ngalert.state.manager t=2026-03-10T10:15:05.585582988Z level=info msg="State cache has been initialized" states=0 duration=202.899µs 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=http.server t=2026-03-10T10:15:05.587437499Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=http.server t=2026-03-10T10:15:05.587661457Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ngalert.multiorg.alertmanager t=2026-03-10T10:15:05.587702343Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ngalert.scheduler t=2026-03-10T10:15:05.587721369Z level=info msg="Starting scheduler" tickInterval=10s 
maxAttempts=1 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=ticker t=2026-03-10T10:15:05.587741066Z level=info msg=starting first_tick=2026-03-10T10:15:10Z 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=provisioning.dashboard t=2026-03-10T10:15:05.591595025Z level=info msg="starting to provision dashboards" 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=grafanaStorageLogger t=2026-03-10T10:15:05.600359267Z level=info msg="Storage starting" 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=sqlstore.transactions t=2026-03-10T10:15:05.620233666Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=plugins.update.checker t=2026-03-10T10:15:05.679215468Z level=info msg="Update check succeeded" duration=79.534988ms 2026-03-10T10:15:05.824 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=provisioning.dashboard t=2026-03-10T10:15:05.728320763Z level=info msg="finished to provision dashboards" 2026-03-10T10:15:06.232 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=grafana-apiserver t=2026-03-10T10:15:05.821160452Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-10T10:15:06.232 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:05 vm07 bash[50688]: logger=grafana-apiserver t=2026-03-10T10:15:05.821448119Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-10T10:15:06.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:06 vm07 bash[23367]: cluster 2026-03-10T10:15:04.297598+0000 mgr.y (mgr.24422) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:06.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:06 vm07 bash[23367]: cluster 2026-03-10T10:15:04.297598+0000 mgr.y (mgr.24422) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:06 vm04 bash[20742]: cluster 2026-03-10T10:15:04.297598+0000 mgr.y (mgr.24422) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:06 vm04 bash[20742]: cluster 2026-03-10T10:15:04.297598+0000 mgr.y (mgr.24422) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:06 vm04 bash[28289]: cluster 2026-03-10T10:15:04.297598+0000 mgr.y (mgr.24422) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:06 vm04 bash[28289]: cluster 2026-03-10T10:15:04.297598+0000 mgr.y (mgr.24422) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
op/s 2026-03-10T10:15:08.395 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:15:08 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:15:08.458 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.617+0000 7f4912d06640 1 -- 192.168.123.104:0/2137504981 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f490c10a470 msgr2=0x7f490c1114d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.617+0000 7f4912d06640 1 --2- 192.168.123.104:0/2137504981 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f490c10a470 0x7f490c1114d0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f48fc009a30 tx=0x7f48fc02f260 comp rx=0 tx=0).stop 2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.617+0000 7f4912d06640 1 -- 192.168.123.104:0/2137504981 shutdown_connections 2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.617+0000 7f4912d06640 1 --2- 192.168.123.104:0/2137504981 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f490c11c780 0x7f490c11eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.617+0000 7f4912d06640 1 --2- 192.168.123.104:0/2137504981 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f490c10a850 0x7f490c10acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.617+0000 7f4912d06640 1 --2- 192.168.123.104:0/2137504981 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f490c10a470 0x7f490c1114d0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.617+0000 7f4912d06640 1 -- 192.168.123.104:0/2137504981 >> 192.168.123.104:0/2137504981 conn(0x7f490c06ed00 msgr2=0x7f490c06f110 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.617+0000 7f4912d06640 1 -- 192.168.123.104:0/2137504981 shutdown_connections 2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.617+0000 7f4912d06640 1 -- 192.168.123.104:0/2137504981 wait complete. 
2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.617+0000 7f4912d06640 1 Processor -- start 2026-03-10T10:15:08.622 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4912d06640 1 -- start start 2026-03-10T10:15:08.626 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4912d06640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f490c10a470 0x7f490c1af660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4912d06640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f490c10a850 0x7f490c1afba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4912d06640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f490c11c780 0x7f490c1a97e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4912d06640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f490c121440 con 0x7f490c10a470 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4912d06640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f490c1212c0 con 0x7f490c11c780 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4912d06640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f490c1215c0 con 0x7f490c10a850 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911503640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f490c10a850 0x7f490c1afba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911503640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f490c10a850 0x7f490c1afba0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:52826/0 (socket says 192.168.123.104:52826) 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911503640 1 -- 192.168.123.104:0/3190125261 learned_addr learned my addr 192.168.123.104:0/3190125261 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911d04640 1 --2- 192.168.123.104:0/3190125261 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f490c10a470 0x7f490c1af660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911503640 1 -- 192.168.123.104:0/3190125261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f490c11c780 msgr2=0x7f490c1a97e0 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:15:08.627 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911503640 1 --2- 192.168.123.104:0/3190125261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f490c11c780 0x7f490c1a97e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911503640 1 -- 192.168.123.104:0/3190125261 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f490c10a470 msgr2=0x7f490c1af660 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911503640 1 --2- 192.168.123.104:0/3190125261 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f490c10a470 0x7f490c1af660 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911503640 1 -- 192.168.123.104:0/3190125261 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f490c1aa010 con 0x7f490c10a850 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911503640 1 --2- 192.168.123.104:0/3190125261 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f490c10a850 0x7f490c1afba0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f490800c990 tx=0x7f490800ce60 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f48faffd640 1 -- 192.168.123.104:0/3190125261 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4908007c70 con 0x7f490c10a850 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4912d06640 1 -- 192.168.123.104:0/3190125261 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f490c1aa300 con 0x7f490c10a850 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4912d06640 1 -- 192.168.123.104:0/3190125261 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f490c1b6690 con 0x7f490c10a850 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f48faffd640 1 -- 192.168.123.104:0/3190125261 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f4908007e10 con 0x7f490c10a850 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f48faffd640 1 -- 192.168.123.104:0/3190125261 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f490800f450 con 0x7f490c10a850 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4912d06640 1 -- 192.168.123.104:0/3190125261 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f48d4005180 con 0x7f490c10a850 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f48faffd640 1 -- 192.168.123.104:0/3190125261 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f4908020020 con 0x7f490c10a850 2026-03-10T10:15:08.627 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f48faffd640 1 --2- 192.168.123.104:0/3190125261 >> v2:192.168.123.104:6800/3326026257 conn(0x7f48dc0776c0 0x7f48dc079b80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f48faffd640 1 -- 192.168.123.104:0/3190125261 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f490809a9b0 con 0x7f490c10a850 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911d04640 1 --2- 192.168.123.104:0/3190125261 >> v2:192.168.123.104:6800/3326026257 conn(0x7f48dc0776c0 0x7f48dc079b80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:08.627 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.621+0000 7f4911d04640 1 --2- 192.168.123.104:0/3190125261 >> v2:192.168.123.104:6800/3326026257 conn(0x7f48dc0776c0 0x7f48dc079b80 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f48fc005d20 tx=0x7f48fc03a040 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:08.635 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.629+0000 7f48faffd640 1 -- 192.168.123.104:0/3190125261 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4908014030 con 0x7f490c10a850 2026-03-10T10:15:08.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:08 vm04 bash[28289]: cluster 2026-03-10T10:15:06.297889+0000 mgr.y (mgr.24422) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:08.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:08 vm04 bash[28289]: cluster 2026-03-10T10:15:06.297889+0000 mgr.y (mgr.24422) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:08.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:08 vm04 bash[28289]: audit 2026-03-10T10:15:07.346596+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:08.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:08 vm04 bash[28289]: audit 2026-03-10T10:15:07.346596+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:08.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:08 vm04 bash[20742]: cluster 2026-03-10T10:15:06.297889+0000 mgr.y (mgr.24422) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:08.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:08 vm04 bash[20742]: cluster 2026-03-10T10:15:06.297889+0000 mgr.y (mgr.24422) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:08.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:08 vm04 bash[20742]: audit 2026-03-10T10:15:07.346596+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:08.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:08 vm04 bash[20742]: audit 
2026-03-10T10:15:07.346596+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:08.750 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.745+0000 7f4912d06640 1 -- 192.168.123.104:0/3190125261 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f48d4005740 con 0x7f490c10a850 2026-03-10T10:15:08.751 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.749+0000 7f48faffd640 1 -- 192.168.123.104:0/3190125261 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v65) ==== 74+0+21299 (secure 0 0 0) 0x7f4908067810 con 0x7f490c10a850 2026-03-10T10:15:08.752 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:15:08.752 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":65,"fsid":"e4c1c9d6-1c68-11f1-a9bd-116050875839","created":"2026-03-10T10:08:09.663961+0000","modified":"2026-03-10T10:14:42.257055+0000","last_up_change":"2026-03-10T10:13:49.733359+0000","last_in_change":"2026-03-10T10:13:34.199840+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"luminous","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T10:11:03.648152+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-10T10:14:11.014942+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"54","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-10T10:14:12.999504+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"56","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-10T10:14:14.815391+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"62","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":62,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-10T10:14:14.962359+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"58","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-10T10:14:17.023637+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"60","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"8e28c717-cfeb-4d7d-8ed7-9136d22aff5c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6801","nonce":3431285778}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":3431285778}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":3431285778}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6803","nonce":3431285778}]},"public_addr":"192.168.123.104:6801/3431285778","cluster_addr":"192.168.123.104:6802/3431285778","heartbeat_back_addr":"192.168.123.104:6804/3431285778","heartbeat_front_addr":"192.168.123.104:6803/3431285778","state":["exists","up"]},{"osd":1,"uuid":"58ba2152-7e52-4560-a001-e96617e30de1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6805","nonce":2746381987}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":2746381987}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":2746381987}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6807","nonce":2746381987}]},"public_addr":"192.168.123.104:6805/2746381987","cluster_addr":"192.168.123.104:6806/2746381987","heartbeat_back_addr":"192.168.123.104:6808/2746381987","heartbeat_front_addr":"192.168.123.104:6807/2746381987","state":["exists","up"]},{"osd":2,"uuid":"17bb098c-8eff-4065-b511-7925247ef4a5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6809","nonce":1668196037}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":1668196037}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":1668196037}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6811","nonce":1668196037}]},"public_addr":"192.168.123.104:6809/1668196037","cluster_addr":"192.168.123.104:6810/1668196037","heartbeat_back_addr":"192.168.123.104:6812/1668196037","heartbeat_front_addr":"192.168.123.104:6811/1668196037","state":["exists","up"]},{"osd":3,"uuid":"f9a0e546-c40a-4fcc-aaca-082199e602f3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6813","nonce":2182249853}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":2182249853}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":2182249853}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6815","nonce":2182249853}]},"public_addr":"192.168.123.104:6813/2182249853","cluster_addr":"192.168.123.104:6814/2182249853","heartbeat_back_addr":"192.168.123.104:6816/2182249853","heartbeat_front_addr":"192.168.123.104:6815/2182249853","state":["exists","up"]},{"osd":4,"uuid":"456a615c-f863-4970-b4a1-90e964abfec7","up":1,"in":1,"weight":1,
"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":2162643433}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6801","nonce":2162643433}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6803","nonce":2162643433}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":2162643433}]},"public_addr":"192.168.123.107:6800/2162643433","cluster_addr":"192.168.123.107:6801/2162643433","heartbeat_back_addr":"192.168.123.107:6803/2162643433","heartbeat_front_addr":"192.168.123.107:6802/2162643433","state":["exists","up"]},{"osd":5,"uuid":"c651c78e-882b-47c6-84ff-5a4b54b94531","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":37,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":1022745989}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6805","nonce":1022745989}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6807","nonce":1022745989}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":1022745989}]},"public_addr":"192.168.123.107:6804/1022745989","cluster_addr":"192.168.123.107:6805/1022745989","heartbeat_back_addr":"192.168.123.107:6807/1022745989","heartbeat_front_addr":"192.168.123.107:6806/1022745989","state":["exists","up"]},{"osd":6,"uuid":"69498577-1b7a-40bf-acac-5912f8ff7cfc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":56,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":719340092}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6809","nonce":719340092}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6811","nonce":719340092}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":719340092}]},"public_addr":"192.168.123.107:6808/719340092","cluster_addr":"192.168.123.107:6809/719340092","heartbeat_back_addr":"192.168.123.107:6811/719340092","heartbeat_front_addr":"192.168.123.107:6810/719340092","state":["exists","up"]},{"osd":7,"uuid":"a27eb8fa-556b-467c-bdba-9d899e37064a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":49,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":4141831103}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6813","nonce":4141831103}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6815","nonce":4141831103}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":4141831103}]},"public_addr":"192.168.123.107:6812/4141831103","cluster_addr":"192.168.123.107:6813/4141831103","heartbeat_back_addr":"192.168.123.107:6815/4141831103","heartbeat_front_addr":"192.168.123.107:6814/4141831103","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:09:53.659046+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:10:26.943063+0000","dead
_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:10:59.644519+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:11:34.491448+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:12:07.569310+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:12:41.508315+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:13:14.258577+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:13:48.010550+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[{"pgid":"2.4","mappings":[{"from":7,"to":2}]}],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/2082123263":"2026-03-11T10:14:42.257033+0000","192.168.123.104:6800/2318507328":"2026-03-11T10:08:29.595143+0000","192.168.123.104:0/2315333744":"2026-03-11T10:08:20.274742+0000","192.168.123.104:6800/887024688":"2026-03-11T10:08:20.274742+0000","192.168.123.104:0/4084406241":"2026-03-11T10:08:20.274742+0000","192.168.123.104:0/1555346406":"2026-03-11T10:08:20.274742+0000","192.168.123.104:0/1397792734":"2026-03-11T10:08:29.595143+0000","192.168.123.104:6800/632047608":"2026-03-11T10:14:42.257033+0000","192.168.123.104:0/75332172":"2026-03-11T10:08:29.595143+0000","192.168.123.104:0/3286460520":"2026-03-11T10:14:42.257033+0000","192.168.123.104:0/1036710518":"2026-03-11T10:14:42.257033+0000","192.168.123.104:0/4228752384":"2026-03-11T10:08:29.595143+0000","192.168.123.104:0/2502458287":"2026-03-11T10:14:42.257033+0000","192.168.123.104:0/176697585":"2026-03-11T10:14:42.257033+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 -- 192.168.123.104:0/3190125261 >> v2:192.168.123.104:6800/3326026257 conn(0x7f48dc0776c0 msgr2=0x7f48dc079b80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 --2- 192.168.123.104:0/3190125261 >> v2:192.168.123.104:6800/3326026257 conn(0x7f48dc0776c0 0x7f48dc079b80 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f48fc005d20 tx=0x7f48fc03a040 comp rx=0 tx=0).stop 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 -- 192.168.123.104:0/3190125261 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f490c10a850 msgr2=0x7f490c1afba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 --2- 192.168.123.104:0/3190125261 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f490c10a850 0x7f490c1afba0 secure :-1 s=READY pgs=62 cs=0 l=1 rev1=1 crypto rx=0x7f490800c990 tx=0x7f490800ce60 comp rx=0 tx=0).stop 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 -- 192.168.123.104:0/3190125261 shutdown_connections 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 --2- 192.168.123.104:0/3190125261 >> v2:192.168.123.104:6800/3326026257 conn(0x7f48dc0776c0 0x7f48dc079b80 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 --2- 192.168.123.104:0/3190125261 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f490c11c780 0x7f490c1a97e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 --2- 192.168.123.104:0/3190125261 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f490c10a850 0x7f490c1afba0 unknown :-1 s=CLOSED pgs=62 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 --2- 192.168.123.104:0/3190125261 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f490c10a470 0x7f490c1af660 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 -- 192.168.123.104:0/3190125261 >> 192.168.123.104:0/3190125261 conn(0x7f490c06ed00 msgr2=0x7f490c11e030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 -- 192.168.123.104:0/3190125261 shutdown_connections 2026-03-10T10:15:08.756 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:08.753+0000 7f48f8ff9640 1 -- 192.168.123.104:0/3190125261 wait complete. 2026-03-10T10:15:08.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:08 vm07 bash[23367]: cluster 2026-03-10T10:15:06.297889+0000 mgr.y (mgr.24422) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:08.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:08 vm07 bash[23367]: cluster 2026-03-10T10:15:06.297889+0000 mgr.y (mgr.24422) 38 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:08.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:08 vm07 bash[23367]: audit 2026-03-10T10:15:07.346596+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:08.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:08 vm07 bash[23367]: audit 2026-03-10T10:15:07.346596+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:08.818 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-10T10:15:08.818 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd dump --format=json 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[20742]: audit 2026-03-10T10:15:08.125954+0000 mgr.y (mgr.24422) 39 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[20742]: audit 2026-03-10T10:15:08.125954+0000 mgr.y (mgr.24422) 39 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[20742]: cluster 2026-03-10T10:15:08.298147+0000 mgr.y (mgr.24422) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[20742]: cluster 2026-03-10T10:15:08.298147+0000 mgr.y (mgr.24422) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[20742]: audit 2026-03-10T10:15:08.751327+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 192.168.123.104:0/3190125261' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[20742]: audit 2026-03-10T10:15:08.751327+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 
192.168.123.104:0/3190125261' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[20742]: audit 2026-03-10T10:15:08.970168+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[20742]: audit 2026-03-10T10:15:08.970168+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[20742]: audit 2026-03-10T10:15:08.976510+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[20742]: audit 2026-03-10T10:15:08.976510+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:09 vm04 bash[28289]: audit 2026-03-10T10:15:08.125954+0000 mgr.y (mgr.24422) 39 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:09 vm04 bash[28289]: audit 2026-03-10T10:15:08.125954+0000 mgr.y (mgr.24422) 39 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:09 vm04 bash[28289]: cluster 2026-03-10T10:15:08.298147+0000 mgr.y (mgr.24422) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:09 vm04 bash[28289]: cluster 2026-03-10T10:15:08.298147+0000 mgr.y (mgr.24422) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:09 vm04 bash[28289]: audit 2026-03-10T10:15:08.751327+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 192.168.123.104:0/3190125261' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:09 vm04 bash[28289]: audit 2026-03-10T10:15:08.751327+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 
192.168.123.104:0/3190125261' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:09 vm04 bash[28289]: audit 2026-03-10T10:15:08.970168+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:09 vm04 bash[28289]: audit 2026-03-10T10:15:08.970168+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:09 vm04 bash[28289]: audit 2026-03-10T10:15:08.976510+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.675 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:09 vm04 bash[28289]: audit 2026-03-10T10:15:08.976510+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:09 vm07 bash[23367]: audit 2026-03-10T10:15:08.125954+0000 mgr.y (mgr.24422) 39 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:09.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:09 vm07 bash[23367]: audit 2026-03-10T10:15:08.125954+0000 mgr.y (mgr.24422) 39 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:09.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:09 vm07 bash[23367]: cluster 2026-03-10T10:15:08.298147+0000 mgr.y (mgr.24422) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:09.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:09 vm07 bash[23367]: cluster 2026-03-10T10:15:08.298147+0000 mgr.y (mgr.24422) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:09.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:09 vm07 bash[23367]: audit 2026-03-10T10:15:08.751327+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 192.168.123.104:0/3190125261' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:09.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:09 vm07 bash[23367]: audit 2026-03-10T10:15:08.751327+0000 mon.c (mon.2) 26 : audit [DBG] from='client.? 
192.168.123.104:0/3190125261' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:09.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:09 vm07 bash[23367]: audit 2026-03-10T10:15:08.970168+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:09 vm07 bash[23367]: audit 2026-03-10T10:15:08.970168+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:09 vm07 bash[23367]: audit 2026-03-10T10:15:08.976510+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:09 vm07 bash[23367]: audit 2026-03-10T10:15:08.976510+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:09.935 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:09 vm04 systemd[1]: Stopping Ceph alertmanager.a for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:15:09.935 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[55742]: ts=2026-03-10T10:15:09.902Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:09 vm04 bash[56485]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-alertmanager-a 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:09 vm04 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@alertmanager.a.service: Deactivated successfully. 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:09 vm04 systemd[1]: Stopped Ceph alertmanager.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:09 vm04 systemd[1]: Started Ceph alertmanager.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[56561]: ts=2026-03-10T10:15:10.097Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[56561]: ts=2026-03-10T10:15:10.097Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[56561]: ts=2026-03-10T10:15:10.098Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.104 port=9094 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[56561]: ts=2026-03-10T10:15:10.112Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[56561]: ts=2026-03-10T10:15:10.132Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[56561]: ts=2026-03-10T10:15:10.133Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[56561]: ts=2026-03-10T10:15:10.134Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-10T10:15:10.203 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[56561]: ts=2026-03-10T10:15:10.134Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093 2026-03-10T10:15:10.663 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 systemd[1]: Stopping Ceph prometheus.a for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:15:10.663 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.658Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-10T10:15:10.663 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.658Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-10T10:15:10.663 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.658Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-10T10:15:10.663 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.658Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-10T10:15:10.663 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.658Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-10T10:15:10.663 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.658Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-10T10:15:10.663 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.658Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-10T10:15:10.663 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.658Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 
2026-03-10T10:15:10.663 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.658Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-10T10:15:10.663 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.418724+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.418724+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.423404+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.423404+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.424399+0000 mon.a (mon.0) 796 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.424399+0000 mon.a (mon.0) 796 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.424839+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.424839+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.428281+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.428281+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: cephadm 2026-03-10T10:15:09.439501+0000 mgr.y (mgr.24422) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: cephadm 2026-03-10T10:15:09.439501+0000 mgr.y (mgr.24422) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: cephadm 2026-03-10T10:15:09.442244+0000 mgr.y (mgr.24422) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm04 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: cephadm 2026-03-10T10:15:09.442244+0000 mgr.y (mgr.24422) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm04 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.992800+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.992800+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.997814+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: audit 2026-03-10T10:15:09.997814+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: cephadm 2026-03-10T10:15:09.999178+0000 mgr.y (mgr.24422) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: cephadm 2026-03-10T10:15:09.999178+0000 mgr.y (mgr.24422) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: cephadm 2026-03-10T10:15:10.171037+0000 mgr.y (mgr.24422) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07 2026-03-10T10:15:10.664 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:10 vm07 bash[23367]: cephadm 2026-03-10T10:15:10.171037+0000 mgr.y (mgr.24422) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.418724+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.418724+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.423404+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.423404+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.424399+0000 mon.a (mon.0) 796 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.424399+0000 mon.a (mon.0) 796 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.424839+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.424839+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.428281+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.428281+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: cephadm 2026-03-10T10:15:09.439501+0000 mgr.y (mgr.24422) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: cephadm 2026-03-10T10:15:09.439501+0000 mgr.y (mgr.24422) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: cephadm 2026-03-10T10:15:09.442244+0000 mgr.y (mgr.24422) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm04 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: cephadm 2026-03-10T10:15:09.442244+0000 mgr.y (mgr.24422) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm04 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.992800+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.992800+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.997814+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: audit 2026-03-10T10:15:09.997814+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: cephadm 2026-03-10T10:15:09.999178+0000 mgr.y (mgr.24422) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: cephadm 2026-03-10T10:15:09.999178+0000 mgr.y (mgr.24422) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: cephadm 2026-03-10T10:15:10.171037+0000 mgr.y (mgr.24422) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:10 vm04 bash[28289]: cephadm 2026-03-10T10:15:10.171037+0000 mgr.y (mgr.24422) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.418724+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.418724+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.423404+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.423404+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.424399+0000 mon.a (mon.0) 796 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.424399+0000 mon.a (mon.0) 796 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.424839+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.424839+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.428281+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.428281+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: cephadm 2026-03-10T10:15:09.439501+0000 mgr.y (mgr.24422) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: cephadm 2026-03-10T10:15:09.439501+0000 mgr.y (mgr.24422) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: cephadm 2026-03-10T10:15:09.442244+0000 mgr.y (mgr.24422) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm04 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: cephadm 2026-03-10T10:15:09.442244+0000 mgr.y (mgr.24422) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm04 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.992800+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.992800+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.997814+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: audit 2026-03-10T10:15:09.997814+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: cephadm 2026-03-10T10:15:09.999178+0000 mgr.y (mgr.24422) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: cephadm 2026-03-10T10:15:09.999178+0000 mgr.y (mgr.24422) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: cephadm 2026-03-10T10:15:10.171037+0000 mgr.y (mgr.24422) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07 2026-03-10T10:15:10.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:10 vm04 bash[20742]: cephadm 2026-03-10T10:15:10.171037+0000 mgr.y (mgr.24422) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.659Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.660Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[49439]: ts=2026-03-10T10:15:10.660Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51248]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-prometheus-a 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@prometheus.a.service: Deactivated successfully. 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 systemd[1]: Stopped Ceph prometheus.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 systemd[1]: Started Ceph prometheus.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 
2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.841Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.841Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.841Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm07 (none))" 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.841Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.841Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.851Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.853Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.855Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.856Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.883µs 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.856Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.857Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.857Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.857Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-10T10:15:11.015 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.857Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1 2026-03-10T10:15:11.016 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.858Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=25.407µs wal_replay_duration=1.553398ms wbl_replay_duration=141ns total_replay_duration=1.928859ms 2026-03-10T10:15:11.016 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.862Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-10T10:15:11.016 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.863Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-10T10:15:11.016 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.863Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T10:15:11.016 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.884Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=20.837016ms db_storage=1.082µs remote_storage=1.193µs web_handler=351ns query_engine=842ns scrape=10.052935ms scrape_sd=126.385µs notify=9.427µs notify_sd=5.55µs rules=10.174573ms tracing=5.339µs 2026-03-10T10:15:11.016 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.884Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-10T10:15:11.016 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:15:10 vm07 bash[51324]: ts=2026-03-10T10:15:10.885Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
2026-03-10T10:15:11.162 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:10 vm04 bash[20997]: [10/Mar/2026:10:15:10] ENGINE Bus STOPPING 2026-03-10T10:15:11.162 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T10:15:11.162 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE Bus STOPPED 2026-03-10T10:15:11.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE Bus STARTING 2026-03-10T10:15:11.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE Serving on http://:::9283 2026-03-10T10:15:11.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE Bus STARTED 2026-03-10T10:15:11.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE Bus STOPPING 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: cluster 2026-03-10T10:15:10.298617+0000 mgr.y (mgr.24422) 45 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: cluster 2026-03-10T10:15:10.298617+0000 mgr.y (mgr.24422) 45 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.755625+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.755625+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.761756+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.761756+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.764448+0000 mon.a (mon.0) 803 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.764448+0000 mon.a (mon.0) 803 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.764710+0000 mgr.y (mgr.24422) 46 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.764710+0000 mgr.y (mgr.24422) 46 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.765337+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.765337+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.765497+0000 mgr.y (mgr.24422) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.765497+0000 mgr.y (mgr.24422) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.769003+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.769003+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.779147+0000 mon.a (mon.0) 806 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.779147+0000 mon.a (mon.0) 806 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.779481+0000 mgr.y (mgr.24422) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.779481+0000 mgr.y (mgr.24422) 48 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.780095+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.780095+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.780329+0000 mgr.y (mgr.24422) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.780329+0000 mgr.y (mgr.24422) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.784324+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.784324+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.792333+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.792333+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.792715+0000 mgr.y (mgr.24422) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.792715+0000 mgr.y (mgr.24422) 50 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.793276+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.793276+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.793504+0000 mgr.y (mgr.24422) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.793504+0000 mgr.y (mgr.24422) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.798092+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.798092+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.831890+0000 mon.a (mon.0) 812 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:11 vm07 bash[23367]: audit 2026-03-10T10:15:10.831890+0000 mon.a (mon.0) 812 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: cluster 2026-03-10T10:15:10.298617+0000 mgr.y (mgr.24422) 45 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: cluster 2026-03-10T10:15:10.298617+0000 mgr.y (mgr.24422) 45 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.755625+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.755625+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.761756+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24422 
192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.761756+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.764448+0000 mon.a (mon.0) 803 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.764448+0000 mon.a (mon.0) 803 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.764710+0000 mgr.y (mgr.24422) 46 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.764710+0000 mgr.y (mgr.24422) 46 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.765337+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.765337+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.765497+0000 mgr.y (mgr.24422) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.765497+0000 mgr.y (mgr.24422) 47 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.769003+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.769003+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.779147+0000 mon.a (mon.0) 806 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.779147+0000 mon.a (mon.0) 806 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.779481+0000 mgr.y (mgr.24422) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.779481+0000 mgr.y (mgr.24422) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.780095+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.780095+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.780329+0000 mgr.y (mgr.24422) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.780329+0000 mgr.y (mgr.24422) 49 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.784324+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.784324+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.792333+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.792333+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.792715+0000 mgr.y (mgr.24422) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.112 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.792715+0000 mgr.y (mgr.24422) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.793276+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.793276+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.793504+0000 mgr.y (mgr.24422) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.793504+0000 mgr.y (mgr.24422) 51 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.798092+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.798092+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.831890+0000 mon.a (mon.0) 812 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:11 vm04 bash[28289]: audit 2026-03-10T10:15:10.831890+0000 mon.a (mon.0) 812 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE Bus STOPPED 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE Bus STARTING 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE Serving on http://:::9283 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE Bus STARTED 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:11 vm04 bash[20997]: [10/Mar/2026:10:15:11] ENGINE Bus STOPPING 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: cluster 2026-03-10T10:15:10.298617+0000 mgr.y (mgr.24422) 45 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: cluster 2026-03-10T10:15:10.298617+0000 mgr.y (mgr.24422) 45 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.755625+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.755625+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.761756+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.761756+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.113 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.764448+0000 mon.a (mon.0) 803 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.764448+0000 mon.a (mon.0) 803 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.764710+0000 mgr.y (mgr.24422) 46 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.764710+0000 mgr.y (mgr.24422) 46 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.765337+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.765337+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.765497+0000 mgr.y (mgr.24422) 47 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.765497+0000 mgr.y (mgr.24422) 47 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm04.local:9093"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.769003+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.769003+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.779147+0000 mon.a (mon.0) 806 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.779147+0000 mon.a (mon.0) 806 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.779481+0000 mgr.y (mgr.24422) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.779481+0000 mgr.y (mgr.24422) 48 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.780095+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.780095+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.780329+0000 mgr.y (mgr.24422) 49 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.780329+0000 mgr.y (mgr.24422) 49 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.784324+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.784324+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.792333+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.792333+0000 mon.a (mon.0) 809 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.792715+0000 mgr.y (mgr.24422) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.792715+0000 mgr.y (mgr.24422) 50 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.793276+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.793276+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.793504+0000 mgr.y (mgr.24422) 51 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.113 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.793504+0000 mgr.y (mgr.24422) 51 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T10:15:12.114 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.798092+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.114 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.798092+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:12.114 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.831890+0000 mon.a (mon.0) 812 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:12.114 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:11 vm04 bash[20742]: audit 2026-03-10T10:15:10.831890+0000 mon.a (mon.0) 812 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:15:12.379 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:12 vm04 bash[56561]: ts=2026-03-10T10:15:12.112Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000642648s 2026-03-10T10:15:12.703 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:12 vm04 bash[20997]: [10/Mar/2026:10:15:12] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T10:15:12.703 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:12 vm04 bash[20997]: [10/Mar/2026:10:15:12] ENGINE Bus STOPPED 2026-03-10T10:15:12.703 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:12 vm04 bash[20997]: [10/Mar/2026:10:15:12] ENGINE Bus STARTING 2026-03-10T10:15:12.703 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:12 vm04 bash[20997]: [10/Mar/2026:10:15:12] ENGINE Serving on http://:::9283 2026-03-10T10:15:12.703 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:12 vm04 bash[20997]: [10/Mar/2026:10:15:12] ENGINE Bus STARTED 2026-03-10T10:15:12.764 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:12 vm04 bash[20742]: audit 2026-03-10T10:15:12.369974+0000 mon.a (mon.0) 813 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:12 vm07 bash[23367]: audit 2026-03-10T10:15:12.369974+0000 mon.a (mon.0) 813 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:13.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:12 vm07 bash[23367]: audit 2026-03-10T10:15:12.369974+0000 mon.a (mon.0) 813 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:12 vm04 bash[28289]: audit 2026-03-10T10:15:12.369974+0000 mon.a (mon.0) 813 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:12 vm04 bash[28289]: audit 2026-03-10T10:15:12.369974+0000 mon.a (mon.0) 813 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:13.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:12 vm04 bash[20742]: audit 2026-03-10T10:15:12.369974+0000 mon.a (mon.0) 813 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:13 vm04 bash[28289]: cluster 2026-03-10T10:15:12.298922+0000 mgr.y (mgr.24422) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:13 vm04 bash[28289]: cluster 2026-03-10T10:15:12.298922+0000 mgr.y (mgr.24422) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:13 vm04 bash[20742]: cluster 2026-03-10T10:15:12.298922+0000 mgr.y (mgr.24422) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:13 vm04 bash[20742]: cluster 2026-03-10T10:15:12.298922+0000 mgr.y (mgr.24422) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:14.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:13 vm07 bash[23367]: cluster 2026-03-10T10:15:12.298922+0000 mgr.y (mgr.24422) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:14.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:13 vm07 bash[23367]: cluster 2026-03-10T10:15:12.298922+0000 mgr.y (mgr.24422) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:14.494 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:14.634 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.629+0000 7f8cdffff640 1 --2- 192.168.123.104:0/3815326170 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8ce8108260 0x7f8ce8106560 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:15:14.634 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.629+0000 7f8ceeec0640 1 -- 192.168.123.104:0/3815326170 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8ce81078b0 msgr2=0x7f8ce8107c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:14.634 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.629+0000 7f8ceeec0640 1 --2- 192.168.123.104:0/3815326170 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8ce81078b0 0x7f8ce8107c90 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f8cd800dec0 tx=0x7f8cd80332e0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.634 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.629+0000 7f8ceeec0640 1 -- 192.168.123.104:0/3815326170 shutdown_connections 2026-03-10T10:15:14.634 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.629+0000 7f8ceeec0640 1 --2- 192.168.123.104:0/3815326170 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce8106c90 0x7f8ce810e810 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.634 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.629+0000 7f8ceeec0640 1 --2- 192.168.123.104:0/3815326170 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8ce8108260 0x7f8ce8106560 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.634 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.629+0000 7f8ceeec0640 1 --2- 192.168.123.104:0/3815326170 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8ce81078b0 0x7f8ce8107c90 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.634 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.629+0000 7f8ceeec0640 1 -- 192.168.123.104:0/3815326170 >> 192.168.123.104:0/3815326170 conn(0x7f8ce80fc910 msgr2=0x7f8ce80fed30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:14.634 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.629+0000 7f8ceeec0640 1 -- 192.168.123.104:0/3815326170 shutdown_connections 2026-03-10T10:15:14.634 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.629+0000 7f8ceeec0640 1 -- 192.168.123.104:0/3815326170 wait complete. 
2026-03-10T10:15:14.635 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 Processor -- start 2026-03-10T10:15:14.635 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 -- start start 2026-03-10T10:15:14.635 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8ce8106c90 0x7f8ce8198580 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:14.635 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cecc35640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8ce8106c90 0x7f8ce8198580 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:14.635 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cecc35640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8ce8106c90 0x7f8ce8198580 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:49682/0 (socket says 192.168.123.104:49682) 2026-03-10T10:15:14.635 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8ce81078b0 0x7f8ce8198ac0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:14.635 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce8108260 0x7f8ce819ce50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:14.635 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cecc35640 1 -- 192.168.123.104:0/1444730067 learned_addr learned my addr 192.168.123.104:0/1444730067 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:14.635 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f8ce81100e0 con 0x7f8ce81078b0 2026-03-10T10:15:14.636 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f8ce810ff60 con 0x7f8ce8108260 2026-03-10T10:15:14.636 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f8ce8110260 con 0x7f8ce8106c90 2026-03-10T10:15:14.636 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cdffff640 1 --2- 192.168.123.104:0/1444730067 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8ce81078b0 0x7f8ce8198ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:14.636 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ced436640 1 --2- 192.168.123.104:0/1444730067 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce8108260 0x7f8ce819ce50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 
rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:14.636 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cdffff640 1 -- 192.168.123.104:0/1444730067 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8ce8106c90 msgr2=0x7f8ce8198580 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:14.636 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cdffff640 1 --2- 192.168.123.104:0/1444730067 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8ce8106c90 0x7f8ce8198580 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.636 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cdffff640 1 -- 192.168.123.104:0/1444730067 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce8108260 msgr2=0x7f8ce819ce50 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:14.636 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cdffff640 1 --2- 192.168.123.104:0/1444730067 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce8108260 0x7f8ce819ce50 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.636 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cdffff640 1 -- 192.168.123.104:0/1444730067 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8ce819d530 con 0x7f8ce81078b0 2026-03-10T10:15:14.636 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cdffff640 1 --2- 192.168.123.104:0/1444730067 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8ce81078b0 0x7f8ce8198ac0 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7f8cd000cce0 tx=0x7f8cd0007590 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:14.637 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cddffb640 1 -- 192.168.123.104:0/1444730067 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8cd0013070 con 0x7f8ce81078b0 2026-03-10T10:15:14.637 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cddffb640 1 -- 192.168.123.104:0/1444730067 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f8cd00044e0 con 0x7f8ce81078b0 2026-03-10T10:15:14.637 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cddffb640 1 -- 192.168.123.104:0/1444730067 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f8cd0002e20 con 0x7f8ce81078b0 2026-03-10T10:15:14.637 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8cecc35640 1 --2- 192.168.123.104:0/1444730067 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8ce8106c90 0x7f8ce8198580 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:15:14.637 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f8ce819d820 con 0x7f8ce81078b0 2026-03-10T10:15:14.637 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f8ce81a5100 con 0x7f8ce81078b0 2026-03-10T10:15:14.638 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.633+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f8cb0005180 con 0x7f8ce81078b0 2026-03-10T10:15:14.641 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.637+0000 7f8cddffb640 1 -- 192.168.123.104:0/1444730067 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f8cd00040a0 con 0x7f8ce81078b0 2026-03-10T10:15:14.641 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.637+0000 7f8cddffb640 1 --2- 192.168.123.104:0/1444730067 >> v2:192.168.123.104:6800/3326026257 conn(0x7f8cc4077700 0x7f8cc4079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:14.642 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.637+0000 7f8cddffb640 1 -- 192.168.123.104:0/1444730067 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f8cd0099a40 con 0x7f8ce81078b0 2026-03-10T10:15:14.642 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.637+0000 7f8cecc35640 1 --2- 192.168.123.104:0/1444730067 >> v2:192.168.123.104:6800/3326026257 conn(0x7f8cc4077700 0x7f8cc4079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:14.642 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.637+0000 7f8cddffb640 1 -- 192.168.123.104:0/1444730067 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f8cd009be20 con 0x7f8ce81078b0 2026-03-10T10:15:14.642 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.637+0000 7f8cecc35640 1 --2- 192.168.123.104:0/1444730067 >> v2:192.168.123.104:6800/3326026257 conn(0x7f8cc4077700 0x7f8cc4079bc0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f8cd8002410 tx=0x7f8cd800dc50 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:14.730 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.725+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f8cb0005470 con 0x7f8ce81078b0 2026-03-10T10:15:14.731 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.729+0000 7f8cddffb640 1 -- 192.168.123.104:0/1444730067 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v65) ==== 74+0+21299 (secure 0 0 0) 0x7f8cd00668a0 con 0x7f8ce81078b0 2026-03-10T10:15:14.732 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:15:14.732 
INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":65,"fsid":"e4c1c9d6-1c68-11f1-a9bd-116050875839","created":"2026-03-10T10:08:09.663961+0000","modified":"2026-03-10T10:14:42.257055+0000","last_up_change":"2026-03-10T10:13:49.733359+0000","last_in_change":"2026-03-10T10:13:34.199840+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"luminous","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T10:11:03.648152+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-10T10:14:11.014942+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"54","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-10T10:14:12.999504+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"56","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-10T10:14:14.815391+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"62","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":62,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-10T10:14:14.962359+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"58","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-10T10:14:17.023637+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"60","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"8e28c717-cfeb-4d7d-8ed7-9136d22aff5c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6801","nonce":3431285778}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":3431285778}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":3431285778}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6803","nonce":3431285778}]},"public_addr":"192.168.123.104:6801/3431285778","cluster_addr":"192.168.123.104:6802/3431285778","heartbeat_back_addr":"192.168.123.104:6804/3431285778","heartbeat_front_addr":"192.168.123.104:6803/3431285778","state":["exists","up"]},{"osd":1,"uuid":"58ba2152-7e52-4560-a001-e96617e30de1","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":63,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6805","nonce":2746381987}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":2746381987}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":2746381987}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6807","nonce":2746381987}]},"public_addr":"192.168.123.104:6805/2746381987","cluster_addr":"192.168.123.104:6806/2746381987","heartbeat_back_addr":"192.168.123.104:6808/2746381987","heartbeat_front_addr":"192.168.123.104:6807/2746381987","state":["exists","up"]},{"osd":2,"uuid":"17bb098c-8eff-4065-b511-7925247ef4a5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6809","nonce":1668196037}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":1668196037}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":1668196037}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6811","nonce":1668196037}]},"public_addr":"192.168.123.104:6809/1668196037","cluster_addr":"192.168.123.104:6810/1668196037","heartbeat_back_addr":"192.168.123.104:6812/1668196037","heartbeat_front_addr":"192.168.123.104:6811/1668196037","state":["exists","up"]},{"osd":3,"uuid":"f9a0e546-c40a-4fcc-aaca-082199e602f3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6813","nonce":2182249853}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":2182249853}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6816","nonce":2182249853}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6815","nonce":2182249853}]},"public_addr":"192.168.123.104:6813/2182249853","cluster_addr":"192.168.123.104:6814/2182249853","heartbeat_back_addr":"192.168.123.104:6816/2182249853","heartbeat_front_addr":"192.168.123.104:6815/2182249853","state":["exists","up"]},{"osd":4,"uuid":"456a615c-f863-4970-b4a1-90e964abfec7","up":1,"in":1,"weight":1,
"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":31,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":2162643433}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6801","nonce":2162643433}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6803","nonce":2162643433}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":2162643433}]},"public_addr":"192.168.123.107:6800/2162643433","cluster_addr":"192.168.123.107:6801/2162643433","heartbeat_back_addr":"192.168.123.107:6803/2162643433","heartbeat_front_addr":"192.168.123.107:6802/2162643433","state":["exists","up"]},{"osd":5,"uuid":"c651c78e-882b-47c6-84ff-5a4b54b94531","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":37,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":1022745989}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6805","nonce":1022745989}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6807","nonce":1022745989}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":1022745989}]},"public_addr":"192.168.123.107:6804/1022745989","cluster_addr":"192.168.123.107:6805/1022745989","heartbeat_back_addr":"192.168.123.107:6807/1022745989","heartbeat_front_addr":"192.168.123.107:6806/1022745989","state":["exists","up"]},{"osd":6,"uuid":"69498577-1b7a-40bf-acac-5912f8ff7cfc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":56,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":719340092}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6809","nonce":719340092}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6811","nonce":719340092}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":719340092}]},"public_addr":"192.168.123.107:6808/719340092","cluster_addr":"192.168.123.107:6809/719340092","heartbeat_back_addr":"192.168.123.107:6811/719340092","heartbeat_front_addr":"192.168.123.107:6810/719340092","state":["exists","up"]},{"osd":7,"uuid":"a27eb8fa-556b-467c-bdba-9d899e37064a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":49,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":4141831103}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6813","nonce":4141831103}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6815","nonce":4141831103}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":4141831103}]},"public_addr":"192.168.123.107:6812/4141831103","cluster_addr":"192.168.123.107:6813/4141831103","heartbeat_back_addr":"192.168.123.107:6815/4141831103","heartbeat_front_addr":"192.168.123.107:6814/4141831103","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:09:53.659046+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:10:26.943063+0000","dead
_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:10:59.644519+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:11:34.491448+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:12:07.569310+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:12:41.508315+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:13:14.258577+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T10:13:48.010550+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[{"pgid":"2.4","mappings":[{"from":7,"to":2}]}],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.104:0/2082123263":"2026-03-11T10:14:42.257033+0000","192.168.123.104:6800/2318507328":"2026-03-11T10:08:29.595143+0000","192.168.123.104:0/2315333744":"2026-03-11T10:08:20.274742+0000","192.168.123.104:6800/887024688":"2026-03-11T10:08:20.274742+0000","192.168.123.104:0/4084406241":"2026-03-11T10:08:20.274742+0000","192.168.123.104:0/1555346406":"2026-03-11T10:08:20.274742+0000","192.168.123.104:0/1397792734":"2026-03-11T10:08:29.595143+0000","192.168.123.104:6800/632047608":"2026-03-11T10:14:42.257033+0000","192.168.123.104:0/75332172":"2026-03-11T10:08:29.595143+0000","192.168.123.104:0/3286460520":"2026-03-11T10:14:42.257033+0000","192.168.123.104:0/1036710518":"2026-03-11T10:14:42.257033+0000","192.168.123.104:0/4228752384":"2026-03-11T10:08:29.595143+0000","192.168.123.104:0/2502458287":"2026-03-11T10:14:42.257033+0000","192.168.123.104:0/176697585":"2026-03-11T10:14:42.257033+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T10:15:14.734 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.729+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 >> v2:192.168.123.104:6800/3326026257 conn(0x7f8cc4077700 msgr2=0x7f8cc4079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:14.734 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.729+0000 7f8ceeec0640 1 --2- 192.168.123.104:0/1444730067 >> v2:192.168.123.104:6800/3326026257 conn(0x7f8cc4077700 0x7f8cc4079bc0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f8cd8002410 tx=0x7f8cd800dc50 comp rx=0 tx=0).stop 2026-03-10T10:15:14.734 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.729+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8ce81078b0 msgr2=0x7f8ce8198ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED 
l=1).mark_down 2026-03-10T10:15:14.734 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.729+0000 7f8ceeec0640 1 --2- 192.168.123.104:0/1444730067 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8ce81078b0 0x7f8ce8198ac0 secure :-1 s=READY pgs=149 cs=0 l=1 rev1=1 crypto rx=0x7f8cd000cce0 tx=0x7f8cd0007590 comp rx=0 tx=0).stop 2026-03-10T10:15:14.734 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.729+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 shutdown_connections 2026-03-10T10:15:14.734 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.729+0000 7f8ceeec0640 1 --2- 192.168.123.104:0/1444730067 >> v2:192.168.123.104:6800/3326026257 conn(0x7f8cc4077700 0x7f8cc4079bc0 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.734 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.733+0000 7f8ceeec0640 1 --2- 192.168.123.104:0/1444730067 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f8ce8108260 0x7f8ce819ce50 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.733+0000 7f8ceeec0640 1 --2- 192.168.123.104:0/1444730067 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f8ce81078b0 0x7f8ce8198ac0 unknown :-1 s=CLOSED pgs=149 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.733+0000 7f8ceeec0640 1 --2- 192.168.123.104:0/1444730067 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f8ce8106c90 0x7f8ce8198580 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:14.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.733+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 >> 192.168.123.104:0/1444730067 conn(0x7f8ce80fc910 msgr2=0x7f8ce8104730 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:14.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.733+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 shutdown_connections 2026-03-10T10:15:14.735 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:14.733+0000 7f8ceeec0640 1 -- 192.168.123.104:0/1444730067 wait complete. 
2026-03-10T10:15:14.782 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph tell osd.0 flush_pg_stats 2026-03-10T10:15:14.782 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph tell osd.1 flush_pg_stats 2026-03-10T10:15:14.783 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph tell osd.2 flush_pg_stats 2026-03-10T10:15:14.783 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph tell osd.3 flush_pg_stats 2026-03-10T10:15:14.783 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph tell osd.4 flush_pg_stats 2026-03-10T10:15:14.783 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph tell osd.5 flush_pg_stats 2026-03-10T10:15:14.783 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph tell osd.6 flush_pg_stats 2026-03-10T10:15:14.783 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph tell osd.7 flush_pg_stats 2026-03-10T10:15:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:14 vm04 bash[20742]: audit 2026-03-10T10:15:14.731853+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.104:0/1444730067' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:14 vm04 bash[20742]: audit 2026-03-10T10:15:14.731853+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.104:0/1444730067' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:14 vm04 bash[28289]: audit 2026-03-10T10:15:14.731853+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.104:0/1444730067' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:14 vm04 bash[28289]: audit 2026-03-10T10:15:14.731853+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.104:0/1444730067' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:14 vm07 bash[23367]: audit 2026-03-10T10:15:14.731853+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 
192.168.123.104:0/1444730067' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:15.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:14 vm07 bash[23367]: audit 2026-03-10T10:15:14.731853+0000 mon.a (mon.0) 814 : audit [DBG] from='client.? 192.168.123.104:0/1444730067' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T10:15:15.784 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:15 vm04 bash[20742]: cluster 2026-03-10T10:15:14.299350+0000 mgr.y (mgr.24422) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:15.785 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:15 vm04 bash[20742]: cluster 2026-03-10T10:15:14.299350+0000 mgr.y (mgr.24422) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:15 vm04 bash[28289]: cluster 2026-03-10T10:15:14.299350+0000 mgr.y (mgr.24422) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:15 vm04 bash[28289]: cluster 2026-03-10T10:15:14.299350+0000 mgr.y (mgr.24422) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:15 vm07 bash[23367]: cluster 2026-03-10T10:15:14.299350+0000 mgr.y (mgr.24422) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:15 vm07 bash[23367]: cluster 2026-03-10T10:15:14.299350+0000 mgr.y (mgr.24422) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:15.877443+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:15.877443+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:15.882102+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:15.882102+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:16.143265+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:16.143265+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 
10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:16.148807+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:16.148807+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:16.149709+0000 mon.a (mon.0) 819 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:16.149709+0000 mon.a (mon.0) 819 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:16.150094+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:16.150094+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:16.153886+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:16 vm04 bash[28289]: audit 2026-03-10T10:15:16.153886+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:15.877443+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:15.877443+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:15.882102+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:15.882102+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:16.143265+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:16.143265+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:16.148807+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.24422 
192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:16.148807+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:16.149709+0000 mon.a (mon.0) 819 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:16.149709+0000 mon.a (mon.0) 819 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:16.150094+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:16.150094+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:16.153886+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:16 vm04 bash[20742]: audit 2026-03-10T10:15:16.153886+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:15.877443+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:15.877443+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:15.882102+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:15.882102+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:16.143265+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:16.143265+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:16.148807+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 
bash[23367]: audit 2026-03-10T10:15:16.148807+0000 mon.a (mon.0) 818 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:16.149709+0000 mon.a (mon.0) 819 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:17.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:16.149709+0000 mon.a (mon.0) 819 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:15:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:16.150094+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:16.150094+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:15:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:16.153886+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:16 vm07 bash[23367]: audit 2026-03-10T10:15:16.153886+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:15:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:18 vm04 bash[28289]: cluster 2026-03-10T10:15:16.299674+0000 mgr.y (mgr.24422) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:18 vm04 bash[28289]: cluster 2026-03-10T10:15:16.299674+0000 mgr.y (mgr.24422) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:18 vm04 bash[20742]: cluster 2026-03-10T10:15:16.299674+0000 mgr.y (mgr.24422) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:18 vm04 bash[20742]: cluster 2026-03-10T10:15:16.299674+0000 mgr.y (mgr.24422) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:18.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:15:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:15:18.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:18 vm07 bash[23367]: cluster 2026-03-10T10:15:16.299674+0000 mgr.y (mgr.24422) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:18.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:18 vm07 bash[23367]: cluster 2026-03-10T10:15:16.299674+0000 mgr.y (mgr.24422) 54 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 
MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:15:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:19 vm04 bash[28289]: audit 2026-03-10T10:15:18.135489+0000 mgr.y (mgr.24422) 55 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:15:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:19 vm04 bash[28289]: audit 2026-03-10T10:15:18.135489+0000 mgr.y (mgr.24422) 55 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:15:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:19 vm04 bash[20742]: audit 2026-03-10T10:15:18.135489+0000 mgr.y (mgr.24422) 55 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:15:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:19 vm04 bash[20742]: audit 2026-03-10T10:15:18.135489+0000 mgr.y (mgr.24422) 55 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:15:19.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:19 vm07 bash[23367]: audit 2026-03-10T10:15:18.135489+0000 mgr.y (mgr.24422) 55 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:15:19.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:19 vm07 bash[23367]: audit 2026-03-10T10:15:18.135489+0000 mgr.y (mgr.24422) 55 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:15:19.552 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:15:19.553 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:15:19.554 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:15:19.554 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:15:19.557 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:15:19.558 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:15:19.560 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:15:19.564 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 -- 192.168.123.104:0/93354637 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f650810b080 msgr2=0x7f6508074d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 --2- 192.168.123.104:0/93354637 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f650810b080 0x7f6508074d30 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7f650000b0a0 tx=0x7f650002f450 comp rx=0 tx=0).stop
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 -- 192.168.123.104:0/93354637 shutdown_connections
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 --2- 192.168.123.104:0/93354637 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6508075470 0x7f650807be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 --2- 192.168.123.104:0/93354637 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f650810b080 0x7f6508074d30 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 --2- 192.168.123.104:0/93354637 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f650810a6d0 0x7f650810aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 -- 192.168.123.104:0/93354637 >> 192.168.123.104:0/93354637 conn(0x7f650806d9f0 msgr2=0x7f650806de00 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 -- 192.168.123.104:0/93354637 shutdown_connections
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 -- 192.168.123.104:0/93354637 wait complete.
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 Processor -- start
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 -- start start
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6508075470 0x7f650807b740 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f650810a6d0 0x7f650807bc80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6508075d40 0x7f65080761f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f650807e170 con 0x7f6508075d40
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f650807dff0 con 0x7f650810a6d0
2026-03-10T10:15:19.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650f0c6640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f650807e2f0 con 0x7f6508075470
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650ce3b640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6508075470 0x7f650807b740 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650ce3b640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6508075470 0x7f650807b740 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:49686/0 (socket says 192.168.123.104:49686)
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650ce3b640 1 -- 192.168.123.104:0/27342082 learned_addr learned my addr 192.168.123.104:0/27342082 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f6507fff640 1 --2- 192.168.123.104:0/27342082 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f650810a6d0 0x7f650807bc80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650ce3b640 1 -- 192.168.123.104:0/27342082 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f650810a6d0 msgr2=0x7f650807bc80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650ce3b640 1 --2- 192.168.123.104:0/27342082 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f650810a6d0 0x7f650807bc80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650ce3b640 1 -- 192.168.123.104:0/27342082 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6508075d40 msgr2=0x7f65080761f0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650ce3b640 1 --2- 192.168.123.104:0/27342082 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6508075d40 0x7f65080761f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.829+0000 7f650ce3b640 1 -- 192.168.123.104:0/27342082 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6508076ab0 con 0x7f6508075470
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.833+0000 7f650ce3b640 1 --2- 192.168.123.104:0/27342082 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6508075470 0x7f650807b740 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f64f8002a10 tx=0x7f64f8002ee0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.833+0000 7f6505ffb640 1 -- 192.168.123.104:0/27342082 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f64f8053cd0 con 0x7f6508075470
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.833+0000 7f650f0c6640 1 -- 192.168.123.104:0/27342082 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6508137d00 con 0x7f6508075470
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.833+0000 7f650f0c6640 1 -- 192.168.123.104:0/27342082 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f6508138240 con 0x7f6508075470
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.833+0000 7f6505ffb640 1 -- 192.168.123.104:0/27342082 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f64f8053e70 con 0x7f6508075470
2026-03-10T10:15:19.839 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.833+0000 7f6505ffb640 1 -- 192.168.123.104:0/27342082 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f64f805d720 con 0x7f6508075470
2026-03-10T10:15:19.840 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.837+0000 7f6505ffb640 1 -- 192.168.123.104:0/27342082 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f64f805d8c0 con 0x7f6508075470
2026-03-10T10:15:19.840 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.837+0000 7f6505ffb640 1 --2- 192.168.123.104:0/27342082 >> v2:192.168.123.104:6800/3326026257 conn(0x7f64e0077790 0x7f64e0079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.842 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.837+0000 7f6507fff640 1 --2- 192.168.123.104:0/27342082 >> v2:192.168.123.104:6800/3326026257 conn(0x7f64e0077790 0x7f64e0079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.850 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.841+0000 7f6505ffb640 1 -- 192.168.123.104:0/27342082 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f64f80e0230 con 0x7f6508075470
2026-03-10T10:15:19.850 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.841+0000 7f6507fff640 1 --2- 192.168.123.104:0/27342082 >> v2:192.168.123.104:6800/3326026257 conn(0x7f64e0077790 0x7f64e0079c50 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f650000b070 tx=0x7f650003a040 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:19.850 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.841+0000 7f650f0c6640 1 --2- 192.168.123.104:0/27342082 >> v2:192.168.123.104:6805/2746381987 conn(0x7f64d0001630 0x7f64d0003af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.850 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.841+0000 7f650d63c640 1 --2- 192.168.123.104:0/27342082 >> v2:192.168.123.104:6805/2746381987 conn(0x7f64d0001630 0x7f64d0003af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.850 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.841+0000 7f650f0c6640 1 -- 192.168.123.104:0/27342082 --> v2:192.168.123.104:6805/2746381987 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f64d0006ba0 con 0x7f64d0001630
2026-03-10T10:15:19.850 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.841+0000 7f650d63c640 1 --2- 192.168.123.104:0/27342082 >> v2:192.168.123.104:6805/2746381987 conn(0x7f64d0001630 0x7f64d0003af0 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:19.850 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.841+0000 7f6505ffb640 1 -- 192.168.123.104:0/27342082 <== osd.1 v2:192.168.123.104:6805/2746381987 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f64d0006ba0 con 0x7f64d0001630
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.853+0000 7f650f0c6640 1 -- 192.168.123.104:0/27342082 --> v2:192.168.123.104:6805/2746381987 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f64d0005ca0 con 0x7f64d0001630
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f6505ffb640 1 -- 192.168.123.104:0/27342082 <== osd.1 v2:192.168.123.104:6805/2746381987 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f64d0005ca0 con 0x7f64d0001630
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f64df7fe640 1 -- 192.168.123.104:0/27342082 >> v2:192.168.123.104:6805/2746381987 conn(0x7f64d0001630 msgr2=0x7f64d0003af0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f64df7fe640 1 --2- 192.168.123.104:0/27342082 >> v2:192.168.123.104:6805/2746381987 conn(0x7f64d0001630 0x7f64d0003af0 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f64df7fe640 1 -- 192.168.123.104:0/27342082 >> v2:192.168.123.104:6800/3326026257 conn(0x7f64e0077790 msgr2=0x7f64e0079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f64df7fe640 1 --2- 192.168.123.104:0/27342082 >> v2:192.168.123.104:6800/3326026257 conn(0x7f64e0077790 0x7f64e0079c50 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f650000b070 tx=0x7f650003a040 comp rx=0 tx=0).stop
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f64df7fe640 1 -- 192.168.123.104:0/27342082 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6508075470 msgr2=0x7f650807b740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f64df7fe640 1 --2- 192.168.123.104:0/27342082 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6508075470 0x7f650807b740 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f64f8002a10 tx=0x7f64f8002ee0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f650d63c640 1 -- 192.168.123.104:0/27342082 reap_dead start
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f64df7fe640 1 -- 192.168.123.104:0/27342082 shutdown_connections
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f64df7fe640 1 -- 192.168.123.104:0/27342082 >> 192.168.123.104:0/27342082 conn(0x7f650806d9f0 msgr2=0x7f6508073cd0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f64df7fe640 1 -- 192.168.123.104:0/27342082 shutdown_connections
2026-03-10T10:15:19.863 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.857+0000 7f64df7fe640 1 -- 192.168.123.104:0/27342082 wait complete.
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.909+0000 7f97f02c6640 1 -- 192.168.123.104:0/1827628917 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f97e00b7eb0 msgr2=0x7f97e00ba2a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.909+0000 7f97f02c6640 1 --2- 192.168.123.104:0/1827628917 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f97e00b7eb0 0x7f97e00ba2a0 secure :-1 s=READY pgs=65 cs=0 l=1 rev1=1 crypto rx=0x7f97e400b0a0 tx=0x7f97e402f450 comp rx=0 tx=0).stop
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.909+0000 7f97f02c6640 1 -- 192.168.123.104:0/1827628917 shutdown_connections
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.909+0000 7f97f02c6640 1 --2- 192.168.123.104:0/1827628917 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f97e00b7eb0 0x7f97e00ba2a0 unknown :-1 s=CLOSED pgs=65 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.909+0000 7f97f02c6640 1 --2- 192.168.123.104:0/1827628917 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f97e00a56d0 0x7f97e00b7970 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.909+0000 7f97f02c6640 1 --2- 192.168.123.104:0/1827628917 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f97e00a4db0 0x7f97e00a5190 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.909+0000 7f97f02c6640 1 -- 192.168.123.104:0/1827628917 >> 192.168.123.104:0/1827628917 conn(0x7f97e001a740 msgr2=0x7f97e001ab50 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.909+0000 7f97f02c6640 1 -- 192.168.123.104:0/1827628917 shutdown_connections
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 -- 192.168.123.104:0/1827628917 wait complete.
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 Processor -- start
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 -- start start
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f97e00a4db0 0x7f97e00aeb70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f97e00a56d0 0x7f97e00af0b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f97e00b7eb0 0x7f97e0159b60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f97e00bcd30 con 0x7f97e00b7eb0
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f97e00bcbb0 con 0x7f97e00a4db0
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f97e00bceb0 con 0x7f97e00a56d0
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ed83a640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f97e00a56d0 0x7f97e00af0b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ed83a640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f97e00a56d0 0x7f97e00af0b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:49716/0 (socket says 192.168.123.104:49716)
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ed83a640 1 -- 192.168.123.104:0/2467700756 learned_addr learned my addr 192.168.123.104:0/2467700756 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ee83c640 1 --2- 192.168.123.104:0/2467700756 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f97e00b7eb0 0x7f97e0159b60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ee03b640 1 --2- 192.168.123.104:0/2467700756 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f97e00a4db0 0x7f97e00aeb70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ed83a640 1 -- 192.168.123.104:0/2467700756 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f97e00a4db0 msgr2=0x7f97e00aeb70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ed83a640 1 --2- 192.168.123.104:0/2467700756 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f97e00a4db0 0x7f97e00aeb70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ed83a640 1 -- 192.168.123.104:0/2467700756 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f97e00b7eb0 msgr2=0x7f97e0159b60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ed83a640 1 --2- 192.168.123.104:0/2467700756 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f97e00b7eb0 0x7f97e0159b60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ed83a640 1 -- 192.168.123.104:0/2467700756 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f97e015a0a0 con 0x7f97e00a56d0
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ee03b640 1 --2- 192.168.123.104:0/2467700756 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f97e00a4db0 0x7f97e00aeb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:15:19.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97ed83a640 1 --2- 192.168.123.104:0/2467700756 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f97e00a56d0 0x7f97e00af0b0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f97d800ea10 tx=0x7f97d800eee0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:19.927 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97d77fe640 1 -- 192.168.123.104:0/2467700756 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f97d800ce50 con 0x7f97e00a56d0
2026-03-10T10:15:19.927 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f97e015a390 con 0x7f97e00a56d0
2026-03-10T10:15:19.927 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f97e015a8a0 con 0x7f97e00a56d0
2026-03-10T10:15:19.927 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97d77fe640 1 -- 192.168.123.104:0/2467700756 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f97d8004540 con 0x7f97e00a56d0
2026-03-10T10:15:19.927 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97d77fe640 1 -- 192.168.123.104:0/2467700756 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f97d8010690 con 0x7f97e00a56d0
2026-03-10T10:15:19.927 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f97b0000f80 con 0x7f97e00a56d0
2026-03-10T10:15:19.927 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97d77fe640 1 -- 192.168.123.104:0/2467700756 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f97d80040d0 con 0x7f97e00a56d0
2026-03-10T10:15:19.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97d77fe640 1 --2- 192.168.123.104:0/2467700756 >> v2:192.168.123.104:6800/3326026257 conn(0x7f97d0077930 0x7f97d0079df0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97d77fe640 1 -- 192.168.123.104:0/2467700756 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f97d809a1f0 con 0x7f97e00a56d0
2026-03-10T10:15:19.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97d77fe640 1 --2- 192.168.123.104:0/2467700756 >> v2:192.168.123.107:6800/2162643433 conn(0x7f97d00812d0 0x7f97d0083730 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97d77fe640 1 -- 192.168.123.104:0/2467700756 --> v2:192.168.123.107:6800/2162643433 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f97d800ce50 con 0x7f97d00812d0
2026-03-10T10:15:19.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.913+0000 7f97d77fe640 1 -- 192.168.123.104:0/2467700756 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_get_version_reply(handle=1 version=65) ==== 24+0+0 (secure 0 0 0) 0x7f97d80a1050 con 0x7f97e00a56d0
2026-03-10T10:15:19.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.917+0000 7f97ee83c640 1 --2- 192.168.123.104:0/2467700756 >> v2:192.168.123.107:6800/2162643433 conn(0x7f97d00812d0 0x7f97d0083730 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.917+0000 7f97ee03b640 1 --2- 192.168.123.104:0/2467700756 >> v2:192.168.123.104:6800/3326026257 conn(0x7f97d0077930 0x7f97d0079df0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.917+0000 7f97ee03b640 1 --2- 192.168.123.104:0/2467700756 >> v2:192.168.123.104:6800/3326026257 conn(0x7f97d0077930 0x7f97d0079df0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f97dc007970 tx=0x7f97dc008040 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:19.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.917+0000 7f97ee83c640 1 --2- 192.168.123.104:0/2467700756 >> v2:192.168.123.107:6800/2162643433 conn(0x7f97d00812d0 0x7f97d0083730 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.4 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:19.928 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.917+0000 7f97d77fe640 1 -- 192.168.123.104:0/2467700756 <== osd.4 v2:192.168.123.107:6800/2162643433 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f97d800ce50 con 0x7f97d00812d0
2026-03-10T10:15:19.959 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.953+0000 7ff429a98640 1 -- 192.168.123.104:0/4273927910 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff42410a850 msgr2=0x7ff42410acb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.959 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.953+0000 7ff429a98640 1 --2- 192.168.123.104:0/4273927910 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff42410a850 0x7ff42410acb0 secure :-1 s=READY pgs=150 cs=0 l=1 rev1=1 crypto rx=0x7ff418009a60 tx=0x7ff41802f280 comp rx=0 tx=0).stop
2026-03-10T10:15:19.959 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.953+0000 7ff429a98640 1 -- 192.168.123.104:0/4273927910 shutdown_connections
2026-03-10T10:15:19.959 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.953+0000 7ff429a98640 1 --2- 192.168.123.104:0/4273927910 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff42411c780 0x7ff42411eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.959 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.953+0000 7ff429a98640 1 --2- 192.168.123.104:0/4273927910 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff42410a850 0x7ff42410acb0 unknown :-1 s=CLOSED pgs=150 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.959 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.953+0000 7ff429a98640 1 --2- 192.168.123.104:0/4273927910 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff42410a470 0x7ff4241114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.959 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.953+0000 7ff429a98640 1 -- 192.168.123.104:0/4273927910 >> 192.168.123.104:0/4273927910 conn(0x7ff42406d9c0 msgr2=0x7ff42406ddd0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:19.959 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.953+0000 7ff429a98640 1 -- 192.168.123.104:0/4273927910 shutdown_connections
2026-03-10T10:15:19.960 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 -- 192.168.123.104:0/4273927910 wait complete.
2026-03-10T10:15:19.960 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 Processor -- start
2026-03-10T10:15:19.960 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 -- start start
2026-03-10T10:15:19.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff42410a470 0x7ff4241af640 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff42410a850 0x7ff4241afb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff42411c780 0x7ff4241a9710 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7ff424121400 con 0x7ff42410a850
2026-03-10T10:15:19.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7ff424121280 con 0x7ff42410a470
2026-03-10T10:15:19.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7ff424121580 con 0x7ff42411c780
2026-03-10T10:15:19.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429297640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff42411c780 0x7ff4241a9710 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429297640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff42411c780 0x7ff4241a9710 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:49730/0 (socket says 192.168.123.104:49730)
2026-03-10T10:15:19.961 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429297640 1 -- 192.168.123.104:0/969532364 learned_addr learned my addr 192.168.123.104:0/969532364 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:15:19.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429297640 1 -- 192.168.123.104:0/969532364 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff42410a470 msgr2=0x7ff4241af640 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff423fff640 1 --2- 192.168.123.104:0/969532364 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff42410a850 0x7ff4241afb80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff428a96640 1 --2- 192.168.123.104:0/969532364 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff42410a470 0x7ff4241af640 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429297640 1 --2- 192.168.123.104:0/969532364 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff42410a470 0x7ff4241af640 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429297640 1 -- 192.168.123.104:0/969532364 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff42410a850 msgr2=0x7ff4241afb80 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:19.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429297640 1 --2- 192.168.123.104:0/969532364 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff42410a850 0x7ff4241afb80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:19.962 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429297640 1 -- 192.168.123.104:0/969532364 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff4241a9f70 con 0x7ff42411c780
2026-03-10T10:15:19.963 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff423fff640 1 --2- 192.168.123.104:0/969532364 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff42410a850 0x7ff4241afb80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:15:19.963 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff428a96640 1 --2- 192.168.123.104:0/969532364 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff42410a470 0x7ff4241af640 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429297640 1 --2- 192.168.123.104:0/969532364 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff42411c780 0x7ff4241a9710 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7ff41400ef90 tx=0x7ff41400c560 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff421ffb640 1 -- 192.168.123.104:0/969532364 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff414019070 con 0x7ff42411c780
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff4241aa200 con 0x7ff42411c780
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7ff424111d60 con 0x7ff42411c780
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff421ffb640 1 -- 192.168.123.104:0/969532364 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7ff4140092d0 con 0x7ff42411c780
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.957+0000 7ff421ffb640 1 -- 192.168.123.104:0/969532364 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff414007500 con 0x7ff42411c780
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.961+0000 7ff421ffb640 1 -- 192.168.123.104:0/969532364 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7ff4140076a0 con 0x7ff42411c780
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.961+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_get_version(what=osdmap handle=1) -- 0x7ff42411e9e0 con 0x7ff42411c780
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.961+0000 7ff421ffb640 1 --2- 192.168.123.104:0/969532364 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff400077790 0x7ff400079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.961+0000 7ff421ffb640 1 -- 192.168.123.104:0/969532364 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7ff41409a830 con 0x7ff42411c780
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.961+0000 7ff421ffb640 1 --2- 192.168.123.104:0/969532364 >> v2:192.168.123.107:6804/1022745989 conn(0x7ff400081130 0x7ff400083590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.961+0000 7ff421ffb640 1 -- 192.168.123.104:0/969532364 --> v2:192.168.123.107:6804/1022745989 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7ff414021ea0 con 0x7ff400081130
2026-03-10T10:15:19.964 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.961+0000 7ff421ffb640 1 -- 192.168.123.104:0/969532364 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_get_version_reply(handle=1 version=65) ==== 24+0+0 (secure 0 0 0) 0x7ff4140a1050 con 0x7ff42411c780
2026-03-10T10:15:19.966 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.961+0000 7ff428a96640 1 --2- 192.168.123.104:0/969532364 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff400077790 0x7ff400079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.967 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.961+0000 7ff428a96640 1 --2- 192.168.123.104:0/969532364 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff400077790 0x7ff400079c50 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7ff40c007970 tx=0x7ff40c008040 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:19.967 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.965+0000 7ff423fff640 1 --2- 192.168.123.104:0/969532364 >> v2:192.168.123.107:6804/1022745989 conn(0x7ff400081130 0x7ff400083590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:19.983 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.965+0000 7ff423fff640 1 --2- 192.168.123.104:0/969532364 >> v2:192.168.123.107:6804/1022745989 conn(0x7ff400081130 0x7ff400083590 crc :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.5 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:19.983 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:19.973+0000 7ff421ffb640 1 -- 192.168.123.104:0/969532364 <== osd.5 v2:192.168.123.107:6804/1022745989 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7ff414021ea0 con 0x7ff400081130
2026-03-10T10:15:20.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.009+0000 7febaa13c640 1 -- 192.168.123.104:0/4155702694 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feba410a850 msgr2=0x7feba410acd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.009+0000 7febaa13c640 1 --2- 192.168.123.104:0/4155702694 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feba410a850 0x7feba410acd0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7feb8c009960 tx=0x7feb8c02f120 comp rx=0 tx=0).stop
2026-03-10T10:15:20.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 -- 192.168.123.104:0/4155702694 shutdown_connections
2026-03-10T10:15:20.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 --2- 192.168.123.104:0/4155702694 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7feba411c780 0x7feba411eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:20.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 --2- 192.168.123.104:0/4155702694 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feba410a850 0x7feba410acd0 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:20.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 --2- 192.168.123.104:0/4155702694 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7feba410a470 0x7feba41114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:20.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 -- 192.168.123.104:0/4155702694 >> 192.168.123.104:0/4155702694 conn(0x7feba406db00 msgr2=0x7feba406df10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:20.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 -- 192.168.123.104:0/4155702694 shutdown_connections
2026-03-10T10:15:20.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 -- 192.168.123.104:0/4155702694 wait complete.
2026-03-10T10:15:20.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 Processor -- start
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 -- start start
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7feba410a470 0x7feba4112720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba37fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7feba410a470 0x7feba4112720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba37fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7feba410a470 0x7feba4112720 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:54214/0 (socket says 192.168.123.104:54214)
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feba410a850 0x7feba4112c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7feba411c780 0x7feba41be2e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7feba4121530 con 0x7feba410a470
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7feba41213b0 con 0x7feba410a850
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7feba41216b0 con 0x7feba411c780
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba37fe640 1 -- 192.168.123.104:0/52004086 learned_addr learned my addr 192.168.123.104:0/52004086 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:15:20.016 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba2ffd640 1 --2- 192.168.123.104:0/52004086 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feba410a850 0x7feba4112c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba3fff640 1 --2- 192.168.123.104:0/52004086 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7feba411c780 0x7feba41be2e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba37fe640 1 -- 192.168.123.104:0/52004086 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7feba411c780 msgr2=0x7feba41be2e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba37fe640 1 --2- 192.168.123.104:0/52004086 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7feba411c780 0x7feba41be2e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba37fe640 1 -- 192.168.123.104:0/52004086 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feba410a850 msgr2=0x7feba4112c60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba37fe640 1 --2- 192.168.123.104:0/52004086 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7feba410a850 0x7feba4112c60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba37fe640 1 -- 192.168.123.104:0/52004086 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7feba41be820 con 0x7feba410a470
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba3fff640 1 --2- 192.168.123.104:0/52004086 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7feba411c780 0x7feba41be2e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba37fe640 1 --2- 192.168.123.104:0/52004086 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7feba410a470 0x7feba4112720 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7feb980027e0 tx=0x7feb98002cb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba0ff9640 1 -- 192.168.123.104:0/52004086 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7feb9800ed40 con 0x7feba410a470
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba0ff9640 1 -- 192.168.123.104:0/52004086 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7feb980108a0 con 0x7feba410a470
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feba0ff9640 1 -- 192.168.123.104:0/52004086 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7feb9800f660 con 0x7feba410a470
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 -- 192.168.123.104:0/52004086 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7feba41bea50 con 0x7feba410a470
2026-03-10T10:15:20.017 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7febaa13c640 1 -- 192.168.123.104:0/52004086 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7feba41bef10 con 0x7feba410a470
2026-03-10T10:15:20.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.013+0000 7feb827fc640 1 -- 192.168.123.104:0/52004086 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7feb68000f80 con 0x7feba410a470
2026-03-10T10:15:20.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba0ff9640 1 -- 192.168.123.104:0/52004086 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7feb98010430 con 0x7feba410a470
2026-03-10T10:15:20.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba0ff9640 1 --2- 192.168.123.104:0/52004086 >> v2:192.168.123.104:6800/3326026257 conn(0x7feb880777c0 0x7feb88079c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:20.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba2ffd640 1 --2- 192.168.123.104:0/52004086 >> v2:192.168.123.104:6800/3326026257 conn(0x7feb880777c0 0x7feb88079c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:20.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba2ffd640 1 --2- 192.168.123.104:0/52004086 >> v2:192.168.123.104:6800/3326026257 conn(0x7feb880777c0 0x7feb88079c80 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7feb8c002fd0 tx=0x7feb8c03a040 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:20.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba0ff9640 1 -- 192.168.123.104:0/52004086 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7feb9809a5c0 con 0x7feba410a470
2026-03-10T10:15:20.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba0ff9640 1 --2- 192.168.123.104:0/52004086 >> v2:192.168.123.104:6813/2182249853 conn(0x7feb88081160 0x7feb880835c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:15:20.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba0ff9640 1 -- 192.168.123.104:0/52004086 --> v2:192.168.123.104:6813/2182249853 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7feb98023e50 con 0x7feb88081160
2026-03-10T10:15:20.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba0ff9640 1 -- 192.168.123.104:0/52004086 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_get_version_reply(handle=1 version=65) ==== 24+0+0 (secure 0 0 0) 0x7feb9809a9b0 con 0x7feba410a470
2026-03-10T10:15:20.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba3fff640 1 --2- 192.168.123.104:0/52004086 >> v2:192.168.123.104:6813/2182249853 conn(0x7feb88081160 0x7feb880835c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:15:20.021 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba3fff640 1 --2- 192.168.123.104:0/52004086 >> v2:192.168.123.104:6813/2182249853 conn(0x7feb88081160 0x7feb880835c0 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.3 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:15:20.021 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.017+0000 7feba0ff9640 1 -- 192.168.123.104:0/52004086 <== osd.3 v2:192.168.123.104:6813/2182249853 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7feb98023e50 con 0x7feb88081160
2026-03-10T10:15:20.024 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.021+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 --> v2:192.168.123.107:6800/2162643433 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f97b0002d70 con 0x7f97d00812d0
2026-03-10T10:15:20.024 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.021+0000 7f97d77fe640 1 -- 192.168.123.104:0/2467700756 <== osd.4 v2:192.168.123.107:6800/2162643433 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7f97b0002d70 con 0x7f97d00812d0
2026-03-10T10:15:20.041 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 >> v2:192.168.123.107:6800/2162643433 conn(0x7f97d00812d0 msgr2=0x7f97d0083730 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.041 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97f02c6640 1 --2- 192.168.123.104:0/2467700756 >> v2:192.168.123.107:6800/2162643433 conn(0x7f97d00812d0 0x7f97d0083730 crc :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:20.041 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 >> v2:192.168.123.104:6800/3326026257 conn(0x7f97d0077930 msgr2=0x7f97d0079df0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.041 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97f02c6640 1 --2- 192.168.123.104:0/2467700756 >> v2:192.168.123.104:6800/3326026257 conn(0x7f97d0077930 0x7f97d0079df0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f97dc007970 tx=0x7f97dc008040 comp rx=0 tx=0).stop
2026-03-10T10:15:20.041 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f97e00a56d0 msgr2=0x7f97e00af0b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.041 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97f02c6640 1 --2- 192.168.123.104:0/2467700756 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f97e00a56d0 0x7f97e00af0b0 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7f97d800ea10 tx=0x7f97d800eee0 comp rx=0 tx=0).stop
2026-03-10T10:15:20.041 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97ee83c640 1 -- 192.168.123.104:0/2467700756 reap_dead start
2026-03-10T10:15:20.042 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 shutdown_connections
2026-03-10T10:15:20.042 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 >> 192.168.123.104:0/2467700756 conn(0x7f97e001a740 msgr2=0x7f97e00b9b10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:20.042 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 shutdown_connections
2026-03-10T10:15:20.042 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.037+0000 7f97f02c6640 1 -- 192.168.123.104:0/2467700756 wait complete.
2026-03-10T10:15:20.061 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.053+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 --> v2:192.168.123.107:6804/1022745989 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7ff424074650 con 0x7ff400081130
2026-03-10T10:15:20.061 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.057+0000 7ff421ffb640 1 -- 192.168.123.104:0/969532364 <== osd.5 v2:192.168.123.107:6804/1022745989 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7ff424074650 con 0x7ff400081130
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 >> v2:192.168.123.107:6804/1022745989 conn(0x7ff400081130 msgr2=0x7ff400083590 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429a98640 1 --2- 192.168.123.104:0/969532364 >> v2:192.168.123.107:6804/1022745989 conn(0x7ff400081130 0x7ff400083590 crc :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff400077790 msgr2=0x7ff400079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429a98640 1 --2- 192.168.123.104:0/969532364 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff400077790 0x7ff400079c50 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7ff40c007970 tx=0x7ff40c008040 comp rx=0 tx=0).stop
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff42411c780 msgr2=0x7ff4241a9710 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429a98640 1 --2- 192.168.123.104:0/969532364 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff42411c780 0x7ff4241a9710 secure :-1 s=READY pgs=67 cs=0 l=1 rev1=1 crypto rx=0x7ff41400ef90 tx=0x7ff41400c560 comp rx=0 tx=0).stop
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429297640 1 -- 192.168.123.104:0/969532364 reap_dead start
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 shutdown_connections
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 >> 192.168.123.104:0/969532364 conn(0x7ff42406d9c0 msgr2=0x7ff42411e040 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 shutdown_connections
2026-03-10T10:15:20.071 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.065+0000 7ff429a98640 1 -- 192.168.123.104:0/969532364 wait complete.
2026-03-10T10:15:20.077 INFO:teuthology.orchestra.run.vm04.stdout:55834574907
2026-03-10T10:15:20.077 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd last-stat-seq osd.1
2026-03-10T10:15:20.094 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.081+0000 7feb827fc640 1 -- 192.168.123.104:0/52004086 --> v2:192.168.123.104:6813/2182249853 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7feb68002cf0 con 0x7feb88081160
2026-03-10T10:15:20.094 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.085+0000 7feba0ff9640 1 -- 192.168.123.104:0/52004086 <== osd.3 v2:192.168.123.104:6813/2182249853 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7feb68002cf0 con 0x7feb88081160
2026-03-10T10:15:20.094 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.089+0000 7feb827fc640 1 -- 192.168.123.104:0/52004086 >> v2:192.168.123.104:6813/2182249853 conn(0x7feb88081160 msgr2=0x7feb880835c0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.094 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.089+0000 7feb827fc640 1 --2- 192.168.123.104:0/52004086 >> v2:192.168.123.104:6813/2182249853 conn(0x7feb88081160 0x7feb880835c0 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:20.094 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.089+0000 7feb827fc640 1 -- 192.168.123.104:0/52004086 >> v2:192.168.123.104:6800/3326026257 conn(0x7feb880777c0 msgr2=0x7feb88079c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:15:20.094 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.089+0000 7feb827fc640 1 --2- 192.168.123.104:0/52004086 >> v2:192.168.123.104:6800/3326026257 conn(0x7feb880777c0 0x7feb88079c80 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7feb8c002fd0 tx=0x7feb8c03a040 comp rx=0 tx=0).stop
2026-03-10T10:15:20.094 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.089+0000 7feb827fc640 1 -- 192.168.123.104:0/52004086 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7feba410a470 msgr2=0x7feba4112720 secure
:-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.094 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.089+0000 7feb827fc640 1 --2- 192.168.123.104:0/52004086 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7feba410a470 0x7feba4112720 secure :-1 s=READY pgs=151 cs=0 l=1 rev1=1 crypto rx=0x7feb980027e0 tx=0x7feb98002cb0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.095 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.089+0000 7feba3fff640 1 -- 192.168.123.104:0/52004086 reap_dead start 2026-03-10T10:15:20.099 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.093+0000 7feb827fc640 1 -- 192.168.123.104:0/52004086 shutdown_connections 2026-03-10T10:15:20.099 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.093+0000 7feb827fc640 1 -- 192.168.123.104:0/52004086 >> 192.168.123.104:0/52004086 conn(0x7feba406db00 msgr2=0x7feba411cb60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:20.099 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.097+0000 7feb827fc640 1 -- 192.168.123.104:0/52004086 shutdown_connections 2026-03-10T10:15:20.100 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.097+0000 7feb827fc640 1 -- 192.168.123.104:0/52004086 wait complete. 2026-03-10T10:15:20.111 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.105+0000 7efe8de29640 1 -- 192.168.123.104:0/131276464 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7efe800a5700 msgr2=0x7efe800b79a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.105+0000 7efe8de29640 1 --2- 192.168.123.104:0/131276464 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7efe800a5700 0x7efe800b79a0 secure :-1 s=READY pgs=69 cs=0 l=1 rev1=1 crypto rx=0x7efe78009960 tx=0x7efe7802f140 comp rx=0 tx=0).stop 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.101+0000 7fc83cdf6640 1 -- 192.168.123.104:0/1963892357 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc83810a6d0 msgr2=0x7fc83810aab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.101+0000 7fc83cdf6640 1 --2- 192.168.123.104:0/1963892357 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc83810a6d0 0x7fc83810aab0 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7fc828009a30 tx=0x7fc82802f240 comp rx=0 tx=0).stop 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.101+0000 7fc83cdf6640 1 -- 192.168.123.104:0/1963892357 shutdown_connections 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.101+0000 7fc83cdf6640 1 --2- 192.168.123.104:0/1963892357 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc838075470 0x7fc83807be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.101+0000 7fc83cdf6640 1 --2- 192.168.123.104:0/1963892357 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc83810b080 0x7fc838074d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.101+0000 7fc83cdf6640 1 --2- 192.168.123.104:0/1963892357 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] 
conn(0x7fc83810a6d0 0x7fc83810aab0 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.101+0000 7fc83cdf6640 1 -- 192.168.123.104:0/1963892357 >> 192.168.123.104:0/1963892357 conn(0x7fc83806d9f0 msgr2=0x7fc83806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7efe8de29640 1 -- 192.168.123.104:0/131276464 shutdown_connections 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7efe8de29640 1 --2- 192.168.123.104:0/131276464 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe800b7ee0 0x7efe800ba2d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7efe8de29640 1 --2- 192.168.123.104:0/131276464 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7efe800a5700 0x7efe800b79a0 unknown :-1 s=CLOSED pgs=69 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7efe8de29640 1 --2- 192.168.123.104:0/131276464 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efe800a4de0 0x7efe800a51c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7efe8de29640 1 -- 192.168.123.104:0/131276464 >> 192.168.123.104:0/131276464 conn(0x7efe8001a730 msgr2=0x7efe8001ab40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:20.112 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7efe8de29640 1 -- 192.168.123.104:0/131276464 shutdown_connections 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.101+0000 7fc83cdf6640 1 -- 192.168.123.104:0/1963892357 shutdown_connections 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.101+0000 7fc83cdf6640 1 -- 192.168.123.104:0/1963892357 wait complete. 
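[annotation] The exchanges above repeat one pattern per OSD: a short-lived client sends `command(tid 2: {"prefix": "flush_pg_stats"})` and gets back a stat sequence number in the `command_reply` payload, while a companion `ceph osd last-stat-seq osd.N` (the `cephadm ... shell` invocation above, stdout `55834574907`) polls until the monitor has ingested stats at least that new. A minimal sketch of that flush-then-wait loop, roughly what the teuthology helper driving these commands does — the wrapper, paths, and function names here are illustrative, not the exact implementation:

```python
import subprocess
import time

# As in this log, commands run inside the cephadm shell (image/fsid flags omitted).
CEPHADM = ['sudo', '/home/ubuntu/cephtest/cephadm', 'shell', '--']

def ceph(*args):
    """Run a ceph CLI command inside the cephadm shell and return stdout."""
    return subprocess.check_output(CEPHADM + ['ceph', *args], text=True).strip()

def flush_pg_stats(osd_ids, timeout=90):
    # 'tell osd.N flush_pg_stats' replies with the stat sequence number the
    # flush produced (the 11-12 byte command_reply payloads above);
    # 'osd last-stat-seq osd.N' reports the newest sequence the cluster has
    # seen for that OSD (e.g. 55834574907 above).
    want = {o: int(ceph('tell', f'osd.{o}', 'flush_pg_stats')) for o in osd_ids}
    deadline = time.time() + timeout
    for o, seq in want.items():
        while int(ceph('osd', 'last-stat-seq', f'osd.{o}')) < seq:
            if time.time() > deadline:
                raise RuntimeError(f'osd.{o} stats did not catch up to {seq}')
            time.sleep(1)
```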
2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc83cdf6640 1 Processor -- start 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc83cdf6640 1 -- start start 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc83cdf6640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc838075470 0x7fc838085a70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc83cdf6640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc83810b080 0x7fc838085fb0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc83cdf6640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc83807faf0 0x7fc83807ff60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc83cdf6640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fc83807e7d0 con 0x7fc83810b080 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc83cdf6640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fc83807e650 con 0x7fc83807faf0 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc83cdf6640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fc83807e950 con 0x7fc838075470 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc835d74640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc83810b080 0x7fc838085fb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc835d74640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc83810b080 0x7fc838085fb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:54248/0 (socket says 192.168.123.104:54248) 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc835d74640 1 -- 192.168.123.104:0/4033212592 learned_addr learned my addr 192.168.123.104:0/4033212592 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc836d76640 1 --2- 192.168.123.104:0/4033212592 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc83807faf0 0x7fc83807ff60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc836575640 1 --2- 192.168.123.104:0/4033212592 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc838075470 0x7fc838085a70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc836d76640 1 -- 192.168.123.104:0/4033212592 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc838075470 msgr2=0x7fc838085a70 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc836d76640 1 --2- 192.168.123.104:0/4033212592 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc838075470 0x7fc838085a70 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc836d76640 1 -- 192.168.123.104:0/4033212592 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc83810b080 msgr2=0x7fc838085fb0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc836d76640 1 --2- 192.168.123.104:0/4033212592 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fc83810b080 0x7fc838085fb0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc836d76640 1 -- 192.168.123.104:0/4033212592 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc838080820 con 0x7fc83807faf0 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc836575640 1 --2- 192.168.123.104:0/4033212592 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fc838075470 0x7fc838085a70 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
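[annotation] The `--2-` lines trace the msgr2 client handshake state machine visible throughout this excerpt: `connect` starts at `s=NONE`; the banner exchange (`BANNER_CONNECTING`, `_handle_peer_banner_payload supported=3 required=0`) negotiates the protocol revision; `HELLO_CONNECTING`/`handle_hello` tells each side how it appears on the wire; authentication follows (`AUTH_CONNECTING`); and the connection lands in `s=READY` with the negotiated integrity mode (`crc` for the OSD sessions here, `secure` for mon/mgr sessions) and the peer identity (`entity=...`). Because many handshakes interleave, one way to read them is to group transitions by connection pointer; a small illustrative parser, with the regex tailored only to the lines in this log:

```python
import re
from collections import defaultdict

# Pulls the "s=<STATE>" field out of "conn(0x... ...)" fragments and groups
# transitions by the first pointer, so interleaved handshakes read as one
# sequence per connection. Illustrative, not a general msgr log parser.
CONN = re.compile(r'conn\((0x[0-9a-f]+).*?s=([A-Z_]+)')

def transitions(lines):
    states = defaultdict(list)
    for line in lines:
        m = CONN.search(line)
        if m:
            ptr, state = m.groups()
            if not states[ptr] or states[ptr][-1] != state:
                states[ptr].append(state)
    return dict(states)
    # e.g. {'0x7fc83810b080': ['NONE', 'BANNER_CONNECTING',
    #                          'HELLO_CONNECTING', 'AUTH_CONNECTING', ...]}
```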
2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc836d76640 1 --2- 192.168.123.104:0/4033212592 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc83807faf0 0x7fc83807ff60 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fc82c00d6a0 tx=0x7fc82c00db70 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.113 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc8277fe640 1 -- 192.168.123.104:0/4033212592 <== mon.1 v2:192.168.123.107:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fc82c014070 con 0x7fc83807faf0 2026-03-10T10:15:20.114 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc83cdf6640 1 -- 192.168.123.104:0/4033212592 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fc8381be9b0 con 0x7fc83807faf0 2026-03-10T10:15:20.114 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc83cdf6640 1 -- 192.168.123.104:0/4033212592 --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fc8381beec0 con 0x7fc83807faf0 2026-03-10T10:15:20.114 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc8277fe640 1 -- 192.168.123.104:0/4033212592 <== mon.1 v2:192.168.123.107:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fc82c004d10 con 0x7fc83807faf0 2026-03-10T10:15:20.114 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.109+0000 7fc8277fe640 1 -- 192.168.123.104:0/4033212592 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fc82c005020 con 0x7fc83807faf0 2026-03-10T10:15:20.114 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8de29640 1 -- 192.168.123.104:0/131276464 wait complete. 
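[annotation] Each fresh client above "hunts" the quorum: it opens connections to all three monitors at once (`mon_getmap` to 192.168.123.104:3300, 192.168.123.104:3301, and 192.168.123.107:3300), keeps whichever finishes authentication first, and `mark_down`s the losers (the `AUTH_CONNECTING ... .stop` lines). It then sends `mon_subscribe({config=0+,monmap=0+})` and, once bootstrapped, subscribes to `mgrmap` and `osdmap`. Every `ceph` CLI invocation in this log is one such short-lived librados client; the same bootstrap and teardown can be reproduced with the python-rados binding — a sketch, with conffile/keyring paths illustrative for this run:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()   # mon hunt + config/monmap/mgrmap/osdmap subscriptions
ret, outbuf, errs = cluster.mon_command(
    json.dumps({'prefix': 'status', 'format': 'json'}), b'')
if ret == 0:
    print(json.loads(outbuf)['quorum_names'])
cluster.shutdown()  # the mark_down / shutdown_connections / "wait complete." phase
```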
2026-03-10T10:15:20.115 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8de29640 1 Processor -- start 2026-03-10T10:15:20.115 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8de29640 1 -- start start 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8de29640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7efe800a4de0 0x7efe801452c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8de29640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe800a5700 0x7efe80145800 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8de29640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efe800b7ee0 0x7efe80149b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8de29640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7efe800bccf0 con 0x7efe800b7ee0 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8de29640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7efe800bcb70 con 0x7efe800a5700 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8de29640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7efe800bce70 con 0x7efe800a4de0 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8d628640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efe800b7ee0 0x7efe80149b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8d628640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efe800b7ee0 0x7efe80149b90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:54258/0 (socket says 192.168.123.104:54258) 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8d628640 1 -- 192.168.123.104:0/2312618229 learned_addr learned my addr 192.168.123.104:0/2312618229 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe7ffff640 1 --2- 192.168.123.104:0/2312618229 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe800a5700 0x7efe80145800 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8ce27640 1 --2- 192.168.123.104:0/2312618229 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7efe800a4de0 0x7efe801452c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 
required=0 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8d628640 1 -- 192.168.123.104:0/2312618229 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7efe800a4de0 msgr2=0x7efe801452c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8d628640 1 --2- 192.168.123.104:0/2312618229 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7efe800a4de0 0x7efe801452c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8d628640 1 -- 192.168.123.104:0/2312618229 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe800a5700 msgr2=0x7efe80145800 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8d628640 1 --2- 192.168.123.104:0/2312618229 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe800a5700 0x7efe80145800 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8d628640 1 -- 192.168.123.104:0/2312618229 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efe8014a310 con 0x7efe800b7ee0 2026-03-10T10:15:20.116 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe7ffff640 1 --2- 192.168.123.104:0/2312618229 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7efe800a5700 0x7efe80145800 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
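[annotation] Received-message lines (`<== ...`) carry the wire framing: sender entity and address, the per-connection sequence number, a message description, then `front+middle+data` payload sizes in bytes and the integrity mode, `(crc ...)` or `(secure ...)`, matching whatever the handshake negotiated — so the 100060-byte `mgrmap(e 21)` arrives on a secure mon session while OSD command replies use crc. A throwaway parser for exactly this shape (the field names are my own labels, not Ceph's):

```python
import re

# Matches e.g. "<== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys)
# ==== 1029+0+0 (secure 0 0 0) 0x7efe840092d0 con 0x7efe800b7ee0"
MSG = re.compile(
    r'<== (?P<src>\S+) (?P<addr>\S+) (?P<seq>\d+) ==== (?P<desc>.*?) ===='
    r' (?P<front>\d+)\+(?P<middle>\d+)\+(?P<data>\d+) \((?P<mode>crc|secure)')

def parse_rx(line):
    """Return the framing fields of a '<==' log line, or None."""
    m = MSG.search(line)
    return m.groupdict() if m else None
# parse_rx(...)['front'] -> '1029', parse_rx(...)['mode'] -> 'secure'
```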
2026-03-10T10:15:20.117 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe8d628640 1 --2- 192.168.123.104:0/2312618229 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efe800b7ee0 0x7efe80149b90 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7efe84009fd0 tx=0x7efe8400ef90 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.122 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe7dffb640 1 -- 192.168.123.104:0/2312618229 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efe84019070 con 0x7efe800b7ee0 2026-03-10T10:15:20.122 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe7dffb640 1 -- 192.168.123.104:0/2312618229 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7efe840092d0 con 0x7efe800b7ee0 2026-03-10T10:15:20.122 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.113+0000 7efe7dffb640 1 -- 192.168.123.104:0/2312618229 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7efe84004850 con 0x7efe800b7ee0 2026-03-10T10:15:20.123 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.117+0000 7fc8277fe640 1 -- 192.168.123.104:0/4033212592 <== mon.1 v2:192.168.123.107:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fc82c020020 con 0x7fc83807faf0 2026-03-10T10:15:20.123 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.117+0000 7fc8277fe640 1 --2- 192.168.123.104:0/4033212592 >> v2:192.168.123.104:6800/3326026257 conn(0x7fc80c077790 0x7fc80c079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.123 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.117+0000 7fc8277fe640 1 -- 192.168.123.104:0/4033212592 <== mon.1 v2:192.168.123.107:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7fc82c09a200 con 0x7fc83807faf0 2026-03-10T10:15:20.123 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.117+0000 7fc8257fa640 1 --2- 192.168.123.104:0/4033212592 >> v2:192.168.123.107:6812/4141831103 conn(0x7fc804001630 0x7fc804003af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.123 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.117+0000 7fc8257fa640 1 -- 192.168.123.104:0/4033212592 --> v2:192.168.123.107:6812/4141831103 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fc804006ba0 con 0x7fc804001630 2026-03-10T10:15:20.123 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.117+0000 7fc835d74640 1 --2- 192.168.123.104:0/4033212592 >> v2:192.168.123.107:6812/4141831103 conn(0x7fc804001630 0x7fc804003af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.123 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.121+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7efe8014a5a0 con 0x7efe800b7ee0 2026-03-10T10:15:20.123 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.121+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7efe80151e40 con 0x7efe800b7ee0 2026-03-10T10:15:20.123 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.121+0000 7fc836575640 1 --2- 192.168.123.104:0/4033212592 >> v2:192.168.123.104:6800/3326026257 conn(0x7fc80c077790 0x7fc80c079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.124 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.121+0000 7efe7dffb640 1 -- 192.168.123.104:0/2312618229 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7efe840075c0 con 0x7efe800b7ee0 2026-03-10T10:15:20.124 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.121+0000 7efe7dffb640 1 --2- 192.168.123.104:0/2312618229 >> v2:192.168.123.104:6800/3326026257 conn(0x7efe60077790 0x7efe60079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.124 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.121+0000 7efe8ce27640 1 --2- 192.168.123.104:0/2312618229 >> v2:192.168.123.104:6800/3326026257 conn(0x7efe60077790 0x7efe60079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.125 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.121+0000 7efe7dffb640 1 -- 192.168.123.104:0/2312618229 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7efe8409a6f0 con 0x7efe800b7ee0 2026-03-10T10:15:20.125 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.121+0000 7efe8de29640 1 --2- 192.168.123.104:0/2312618229 >> v2:192.168.123.107:6808/719340092 conn(0x7efe54001630 0x7efe54003af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.125 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.121+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 --> v2:192.168.123.107:6808/719340092 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7efe54006ba0 con 0x7efe54001630 2026-03-10T10:15:20.125 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:15:20 vm04 bash[56561]: ts=2026-03-10T10:15:20.116Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003936118s 2026-03-10T10:15:20.136 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7efe7ffff640 1 --2- 192.168.123.104:0/2312618229 >> v2:192.168.123.107:6808/719340092 conn(0x7efe54001630 0x7efe54003af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.136 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7efe8ce27640 1 --2- 192.168.123.104:0/2312618229 >> v2:192.168.123.104:6800/3326026257 conn(0x7efe60077790 0x7efe60079c50 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7efe70002930 tx=0x7efe70005f50 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.136 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7efe7ffff640 1 --2- 192.168.123.104:0/2312618229 >> v2:192.168.123.107:6808/719340092 conn(0x7efe54001630 0x7efe54003af0 crc :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.6 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.136 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7fc835d74640 1 
--2- 192.168.123.104:0/4033212592 >> v2:192.168.123.107:6812/4141831103 conn(0x7fc804001630 0x7fc804003af0 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.7 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.136 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7fc836575640 1 --2- 192.168.123.104:0/4033212592 >> v2:192.168.123.104:6800/3326026257 conn(0x7fc80c077790 0x7fc80c079c50 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fc828002410 tx=0x7fc82803a040 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.137 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7f21c25c1640 1 -- 192.168.123.104:0/1982519559 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f21bc10b080 msgr2=0x7f21bc074d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.137 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7f21c25c1640 1 --2- 192.168.123.104:0/1982519559 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f21bc10b080 0x7f21bc074d30 secure :-1 s=READY pgs=152 cs=0 l=1 rev1=1 crypto rx=0x7f21ac0099b0 tx=0x7f21ac02f1d0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.137 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7f21c25c1640 1 -- 192.168.123.104:0/1982519559 shutdown_connections 2026-03-10T10:15:20.137 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7f21c25c1640 1 --2- 192.168.123.104:0/1982519559 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21bc075470 0x7f21bc07be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.137 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7f21c25c1640 1 --2- 192.168.123.104:0/1982519559 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f21bc10b080 0x7f21bc074d30 unknown :-1 s=CLOSED pgs=152 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.137 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7f21c25c1640 1 --2- 192.168.123.104:0/1982519559 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21bc10a6d0 0x7f21bc10aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.137 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7f21c25c1640 1 -- 192.168.123.104:0/1982519559 >> 192.168.123.104:0/1982519559 conn(0x7f21bc06d9f0 msgr2=0x7f21bc06de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:20.137 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7efe7dffb640 1 -- 192.168.123.104:0/2312618229 <== osd.6 v2:192.168.123.107:6808/719340092 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7efe54006ba0 con 0x7efe54001630 2026-03-10T10:15:20.139 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7fc8277fe640 1 -- 192.168.123.104:0/4033212592 <== osd.7 v2:192.168.123.107:6812/4141831103 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7fc804006ba0 con 0x7fc804001630 2026-03-10T10:15:20.153 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.149+0000 7fc8257fa640 1 -- 192.168.123.104:0/4033212592 --> v2:192.168.123.107:6812/4141831103 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fc804005ca0 con 0x7fc804001630 2026-03-10T10:15:20.153 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.149+0000 7fc8277fe640 1 -- 192.168.123.104:0/4033212592 <== osd.7 v2:192.168.123.107:6812/4141831103 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7fc804005ca0 con 0x7fc804001630 2026-03-10T10:15:20.153 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.149+0000 7fc83cdf6640 1 -- 192.168.123.104:0/4033212592 >> v2:192.168.123.107:6812/4141831103 conn(0x7fc804001630 msgr2=0x7fc804003af0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.154 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.149+0000 7fc83cdf6640 1 --2- 192.168.123.104:0/4033212592 >> v2:192.168.123.107:6812/4141831103 conn(0x7fc804001630 0x7fc804003af0 crc :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.154 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.149+0000 7fc83cdf6640 1 -- 192.168.123.104:0/4033212592 >> v2:192.168.123.104:6800/3326026257 conn(0x7fc80c077790 msgr2=0x7fc80c079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.154 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.149+0000 7fc83cdf6640 1 --2- 192.168.123.104:0/4033212592 >> v2:192.168.123.104:6800/3326026257 conn(0x7fc80c077790 0x7fc80c079c50 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fc828002410 tx=0x7fc82803a040 comp rx=0 tx=0).stop 2026-03-10T10:15:20.154 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.149+0000 7fc83cdf6640 1 -- 192.168.123.104:0/4033212592 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc83807faf0 msgr2=0x7fc83807ff60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.154 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.149+0000 7fc83cdf6640 1 --2- 192.168.123.104:0/4033212592 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fc83807faf0 0x7fc83807ff60 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7fc82c00d6a0 tx=0x7fc82c00db70 comp rx=0 tx=0).stop 2026-03-10T10:15:20.154 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.153+0000 7fc836d76640 1 -- 192.168.123.104:0/4033212592 reap_dead start 2026-03-10T10:15:20.155 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.153+0000 7fc83cdf6640 1 -- 192.168.123.104:0/4033212592 shutdown_connections 2026-03-10T10:15:20.155 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.153+0000 7fc83cdf6640 1 -- 192.168.123.104:0/4033212592 >> 192.168.123.104:0/4033212592 conn(0x7fc83806d9f0 msgr2=0x7fc838072cb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:20.155 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.153+0000 7fc83cdf6640 1 -- 192.168.123.104:0/4033212592 shutdown_connections 2026-03-10T10:15:20.155 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.153+0000 7fc83cdf6640 1 -- 192.168.123.104:0/4033212592 wait complete. 2026-03-10T10:15:20.162 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7f21c25c1640 1 -- 192.168.123.104:0/1982519559 shutdown_connections 2026-03-10T10:15:20.162 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7f21c25c1640 1 -- 192.168.123.104:0/1982519559 wait complete. 
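[annotation] Every `tell` in this log is two commands on one connection: tid 1 is `get_command_descriptions`, whose ~27.5 KB reply (the `8+0+27504` frames) is the target daemon's command table, which the client uses to validate and encode the real command before sending it as tid 2. The python-rados binding exposes the same path; a sketch assuming osd.3 is reachable as in this run (paths illustrative):

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
# tid 1: fetch the OSD's command table (the 27504-byte reply above)
ret, outbuf, errs = cluster.osd_command(
    3, json.dumps({'prefix': 'get_command_descriptions'}), b'')
table = json.loads(outbuf)
# tid 2: the actual command, which the CLI validates against that table
ret, outbuf, errs = cluster.osd_command(
    3, json.dumps({'prefix': 'flush_pg_stats'}), b'')
print(ret, outbuf)  # outbuf carries the new stat sequence number
cluster.shutdown()
```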
2026-03-10T10:15:20.162 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.133+0000 7f21c25c1640 1 Processor -- start 2026-03-10T10:15:20.162 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.141+0000 7f21c25c1640 1 -- start start 2026-03-10T10:15:20.162 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.141+0000 7f21c25c1640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f21bc075470 0x7f21bc085db0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.162 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.141+0000 7f21c25c1640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21bc10a6d0 0x7f21bc07fe30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.162 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.141+0000 7f21c25c1640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21bc10b080 0x7f21bc080370 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.162 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.141+0000 7f21c25c1640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f21bc07e7d0 con 0x7f21bc075470 2026-03-10T10:15:20.162 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.141+0000 7f21c25c1640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f21bc07e650 con 0x7f21bc10a6d0 2026-03-10T10:15:20.162 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.141+0000 7f21c25c1640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f21bc07e950 con 0x7f21bc10b080 2026-03-10T10:15:20.166 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.161+0000 7f21c0b37640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21bc10b080 0x7f21bc080370 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.166 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.161+0000 7f21c0b37640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21bc10b080 0x7f21bc080370 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:49802/0 (socket says 192.168.123.104:49802) 2026-03-10T10:15:20.166 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.161+0000 7f21c0b37640 1 -- 192.168.123.104:0/3489531103 learned_addr learned my addr 192.168.123.104:0/3489531103 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:20.167 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.161+0000 7f21bbfff640 1 --2- 192.168.123.104:0/3489531103 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f21bc075470 0x7f21bc085db0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.167 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.161+0000 7f21bbfff640 1 -- 192.168.123.104:0/3489531103 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21bc10b080 msgr2=0x7f21bc080370 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.167 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.161+0000 7f21bbfff640 1 --2- 192.168.123.104:0/3489531103 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f21bc10b080 0x7f21bc080370 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.167 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.161+0000 7f21bbfff640 1 -- 192.168.123.104:0/3489531103 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21bc10a6d0 msgr2=0x7f21bc07fe30 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:20.167 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.161+0000 7f21bbfff640 1 --2- 192.168.123.104:0/3489531103 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f21bc10a6d0 0x7f21bc07fe30 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.167 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.161+0000 7f21bbfff640 1 -- 192.168.123.104:0/3489531103 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f21bc080af0 con 0x7f21bc075470 2026-03-10T10:15:20.179 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.165+0000 7f21bbfff640 1 --2- 192.168.123.104:0/3489531103 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f21bc075470 0x7f21bc085db0 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7f21b400c370 tx=0x7f21b400c840 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.179 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.169+0000 7f21b97fa640 1 -- 192.168.123.104:0/3489531103 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f21b4019070 con 0x7f21bc075470 2026-03-10T10:15:20.180 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.169+0000 7f21c25c1640 1 -- 192.168.123.104:0/3489531103 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f21bc1b8920 con 0x7f21bc075470 2026-03-10T10:15:20.180 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.169+0000 7f21c25c1640 1 -- 192.168.123.104:0/3489531103 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f21bc1b8e60 con 0x7f21bc075470 2026-03-10T10:15:20.184 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.177+0000 7f21b97fa640 1 -- 192.168.123.104:0/3489531103 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f21b40092d0 con 0x7f21bc075470 2026-03-10T10:15:20.184 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.177+0000 7f21b97fa640 1 -- 192.168.123.104:0/3489531103 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f21b40075c0 con 0x7f21bc075470 2026-03-10T10:15:20.185 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.177+0000 7f21b97fa640 1 -- 192.168.123.104:0/3489531103 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f21b40077e0 con 0x7f21bc075470 2026-03-10T10:15:20.185 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.177+0000 7f21b97fa640 1 --2- 192.168.123.104:0/3489531103 >> v2:192.168.123.104:6800/3326026257 conn(0x7f218c0777c0 0x7f218c079c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.191 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.185+0000 7f21bb7fe640 1 --2- 192.168.123.104:0/3489531103 >> v2:192.168.123.104:6800/3326026257 conn(0x7f218c0777c0 0x7f218c079c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.191 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.185+0000 7f21b97fa640 1 -- 192.168.123.104:0/3489531103 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f21b409a7f0 con 0x7f21bc075470 2026-03-10T10:15:20.191 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.185+0000 7f21bb7fe640 1 --2- 192.168.123.104:0/3489531103 >> v2:192.168.123.104:6800/3326026257 conn(0x7f218c0777c0 0x7f218c079c80 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f21ac0098d0 tx=0x7f21ac0023d0 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.191 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f219affd640 1 --2- 192.168.123.104:0/3489531103 >> v2:192.168.123.104:6809/1668196037 conn(0x7f2188001630 0x7f2188003af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.191 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f219affd640 1 -- 192.168.123.104:0/3489531103 --> v2:192.168.123.104:6809/1668196037 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f2188006ba0 con 0x7f2188001630 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f21c0b37640 1 --2- 192.168.123.104:0/3489531103 >> v2:192.168.123.104:6809/1668196037 conn(0x7f2188001630 0x7f2188003af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 -- 192.168.123.104:0/309902721 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4c1810b080 msgr2=0x7f4c18074d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 --2- 192.168.123.104:0/309902721 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4c1810b080 0x7f4c18074d30 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f4c1000b3e0 tx=0x7f4c1002f6c0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 -- 192.168.123.104:0/309902721 shutdown_connections 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 --2- 192.168.123.104:0/309902721 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c18075470 0x7f4c1807be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 --2- 192.168.123.104:0/309902721 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4c1810b080 0x7f4c18074d30 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 --2- 192.168.123.104:0/309902721 >> 
[v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4c1810a6d0 0x7f4c1810aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 -- 192.168.123.104:0/309902721 >> 192.168.123.104:0/309902721 conn(0x7f4c1806d9f0 msgr2=0x7f4c1806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 -- 192.168.123.104:0/309902721 shutdown_connections 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 -- 192.168.123.104:0/309902721 wait complete. 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 Processor -- start 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 -- start start 2026-03-10T10:15:20.200 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c18075470 0x7f4c18085c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4c1810a6d0 0x7f4c1807fd30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4c18080270 0x7f4c18080720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f4c1807e850 con 0x7f4c18080270 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f4c1807e6d0 con 0x7f4c18075470 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c1e4f3640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f4c1807e9d0 con 0x7f4c1810a6d0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c177fe640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4c1810a6d0 0x7f4c1807fd30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c177fe640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4c1810a6d0 0x7f4c1807fd30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:49808/0 (socket says 192.168.123.104:49808) 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c177fe640 1 -- 192.168.123.104:0/179328903 learned_addr learned my addr 192.168.123.104:0/179328903 (peer_addr_for_me v2:192.168.123.104:0/0) 
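[annotation] The `learned_addr` lines show how a client discovers its externally visible address: the server's HELLO echoes back the socket it sees (`says I am v2:...:49808/0`), and the client keeps a random nonce as the `/NNNN` suffix on `192.168.123.104:0/NNNN`. That nonce is the simplest handle for telling the many overlapping one-shot CLI clients in this excerpt apart; a quick triage helper (host hard-coded to this run's vm04, purely illustrative):

```python
import re

# Distinct one-shot clients in a chunk of this log, keyed by messenger nonce.
NONCE = re.compile(r'192\.168\.123\.104:0/(\d+)')

def client_nonces(lines):
    return sorted({m.group(1) for line in lines for m in NONCE.finditer(line)})
```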
2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c17fff640 1 --2- 192.168.123.104:0/179328903 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c18075470 0x7f4c18085c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c177fe640 1 -- 192.168.123.104:0/179328903 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c18075470 msgr2=0x7f4c18085c60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c177fe640 1 --2- 192.168.123.104:0/179328903 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4c18075470 0x7f4c18085c60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c177fe640 1 -- 192.168.123.104:0/179328903 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4c18080270 msgr2=0x7f4c18080720 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c177fe640 1 --2- 192.168.123.104:0/179328903 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4c18080270 0x7f4c18080720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c177fe640 1 -- 192.168.123.104:0/179328903 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4c181be950 con 0x7f4c1810a6d0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.189+0000 7f4c177fe640 1 --2- 192.168.123.104:0/179328903 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4c1810a6d0 0x7f4c1807fd30 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f4c10030010 tx=0x7f4c10002d00 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.197+0000 7f4c157fa640 1 -- 192.168.123.104:0/179328903 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4c10004d40 con 0x7f4c1810a6d0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.197+0000 7f4c1e4f3640 1 -- 192.168.123.104:0/179328903 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f4c181beb80 con 0x7f4c1810a6d0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.197+0000 7f4c1e4f3640 1 -- 192.168.123.104:0/179328903 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f4c181befe0 con 0x7f4c1810a6d0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.197+0000 7f4c157fa640 1 -- 192.168.123.104:0/179328903 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f4c100044e0 con 0x7f4c1810a6d0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.197+0000 7f4c157fa640 1 -- 192.168.123.104:0/179328903 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map 
magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4c100436e0 con 0x7f4c1810a6d0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.197+0000 7f4c1e4f3640 1 -- 192.168.123.104:0/179328903 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f4c1810ee80 con 0x7f4c1810a6d0 2026-03-10T10:15:20.201 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.197+0000 7f21c0b37640 1 --2- 192.168.123.104:0/3489531103 >> v2:192.168.123.104:6809/1668196037 conn(0x7f2188001630 0x7f2188003af0 crc :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.211 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c157fa640 1 -- 192.168.123.104:0/179328903 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f4c10007820 con 0x7f4c1810a6d0 2026-03-10T10:15:20.211 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c157fa640 1 --2- 192.168.123.104:0/179328903 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4c04077790 0x7f4c04079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.211 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.197+0000 7f21b97fa640 1 -- 192.168.123.104:0/3489531103 <== osd.2 v2:192.168.123.104:6809/1668196037 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f2188006ba0 con 0x7f2188001630 2026-03-10T10:15:20.212 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c157fa640 1 -- 192.168.123.104:0/179328903 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f4c100beb00 con 0x7f4c1810a6d0 2026-03-10T10:15:20.212 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c157fa640 1 --2- 192.168.123.104:0/179328903 >> v2:192.168.123.104:6801/3431285778 conn(0x7f4c04081130 0x7f4c04083590 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:20.212 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c157fa640 1 -- 192.168.123.104:0/179328903 --> v2:192.168.123.104:6801/3431285778 -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f4c10049e70 con 0x7f4c04081130 2026-03-10T10:15:20.212 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c157fa640 1 -- 192.168.123.104:0/179328903 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_get_version_reply(handle=1 version=65) ==== 24+0+0 (secure 0 0 0) 0x7f4c100bee70 con 0x7f4c1810a6d0 2026-03-10T10:15:20.212 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c1ca69640 1 --2- 192.168.123.104:0/179328903 >> v2:192.168.123.104:6801/3431285778 conn(0x7f4c04081130 0x7f4c04083590 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.212 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c17fff640 1 --2- 192.168.123.104:0/179328903 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4c04077790 0x7f4c04079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:20.212 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c1ca69640 1 --2- 192.168.123.104:0/179328903 >> 
v2:192.168.123.104:6801/3431285778 conn(0x7f4c04081130 0x7f4c04083590 crc :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.212 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c17fff640 1 --2- 192.168.123.104:0/179328903 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4c04077790 0x7f4c04079c50 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f4c08006fd0 tx=0x7f4c08008040 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:20.212 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.205+0000 7f4c157fa640 1 -- 192.168.123.104:0/179328903 <== osd.0 v2:192.168.123.104:6801/3431285778 1 ==== command_reply(tid 1: 0 ) ==== 8+0+27504 (crc 0 0 0) 0x7f4c10049e70 con 0x7f4c04081130 2026-03-10T10:15:20.216 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.209+0000 7f219affd640 1 -- 192.168.123.104:0/3489531103 --> v2:192.168.123.104:6809/1668196037 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f2188005ca0 con 0x7f2188001630 2026-03-10T10:15:20.216 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.209+0000 7f21b97fa640 1 -- 192.168.123.104:0/3489531103 <== osd.2 v2:192.168.123.104:6809/1668196037 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f2188005ca0 con 0x7f2188001630 2026-03-10T10:15:20.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c25c1640 1 -- 192.168.123.104:0/3489531103 >> v2:192.168.123.104:6809/1668196037 conn(0x7f2188001630 msgr2=0x7f2188003af0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.218 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c25c1640 1 --2- 192.168.123.104:0/3489531103 >> v2:192.168.123.104:6809/1668196037 conn(0x7f2188001630 0x7f2188003af0 crc :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.225 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c25c1640 1 -- 192.168.123.104:0/3489531103 >> v2:192.168.123.104:6800/3326026257 conn(0x7f218c0777c0 msgr2=0x7f218c079c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.225 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c25c1640 1 --2- 192.168.123.104:0/3489531103 >> v2:192.168.123.104:6800/3326026257 conn(0x7f218c0777c0 0x7f218c079c80 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7f21ac0098d0 tx=0x7f21ac0023d0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.225 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c25c1640 1 -- 192.168.123.104:0/3489531103 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f21bc075470 msgr2=0x7f21bc085db0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.225 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c25c1640 1 --2- 192.168.123.104:0/3489531103 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f21bc075470 0x7f21bc085db0 secure :-1 s=READY pgs=154 cs=0 l=1 rev1=1 crypto rx=0x7f21b400c370 tx=0x7f21b400c840 comp rx=0 tx=0).stop 2026-03-10T10:15:20.225 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c0b37640 1 -- 192.168.123.104:0/3489531103 reap_dead start 2026-03-10T10:15:20.225 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c25c1640 1 -- 192.168.123.104:0/3489531103 
shutdown_connections 2026-03-10T10:15:20.225 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c25c1640 1 -- 192.168.123.104:0/3489531103 >> 192.168.123.104:0/3489531103 conn(0x7f21bc06d9f0 msgr2=0x7f21bc10d490 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:20.225 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c25c1640 1 -- 192.168.123.104:0/3489531103 shutdown_connections 2026-03-10T10:15:20.225 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.213+0000 7f21c25c1640 1 -- 192.168.123.104:0/3489531103 wait complete. 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4c1e4f3640 1 -- 192.168.123.104:0/179328903 --> v2:192.168.123.104:6801/3431285778 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f4c1810f090 con 0x7f4c04081130 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4c157fa640 1 -- 192.168.123.104:0/179328903 <== osd.0 v2:192.168.123.104:6801/3431285778 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f4c1810f090 con 0x7f4c04081130 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4bf6ffd640 1 -- 192.168.123.104:0/179328903 >> v2:192.168.123.104:6801/3431285778 conn(0x7f4c04081130 msgr2=0x7f4c04083590 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4bf6ffd640 1 --2- 192.168.123.104:0/179328903 >> v2:192.168.123.104:6801/3431285778 conn(0x7f4c04081130 0x7f4c04083590 crc :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4bf6ffd640 1 -- 192.168.123.104:0/179328903 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4c04077790 msgr2=0x7f4c04079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4bf6ffd640 1 --2- 192.168.123.104:0/179328903 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4c04077790 0x7f4c04079c50 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f4c08006fd0 tx=0x7f4c08008040 comp rx=0 tx=0).stop 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4bf6ffd640 1 -- 192.168.123.104:0/179328903 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4c1810a6d0 msgr2=0x7f4c1807fd30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4bf6ffd640 1 --2- 192.168.123.104:0/179328903 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4c1810a6d0 0x7f4c1807fd30 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f4c10030010 tx=0x7f4c10002d00 comp rx=0 tx=0).stop 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4c1ca69640 1 -- 192.168.123.104:0/179328903 reap_dead start 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4bf6ffd640 1 -- 192.168.123.104:0/179328903 shutdown_connections 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4bf6ffd640 1 -- 192.168.123.104:0/179328903 >> 192.168.123.104:0/179328903 conn(0x7f4c1806d9f0 msgr2=0x7f4c1810d490 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:20.231 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4bf6ffd640 1 -- 192.168.123.104:0/179328903 shutdown_connections 2026-03-10T10:15:20.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.225+0000 7f4bf6ffd640 1 -- 192.168.123.104:0/179328903 wait complete. 2026-03-10T10:15:20.236 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.229+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 --> v2:192.168.123.107:6808/719340092 -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7efe54005c70 con 0x7efe54001630 2026-03-10T10:15:20.236 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.229+0000 7efe7dffb640 1 -- 192.168.123.104:0/2312618229 <== osd.6 v2:192.168.123.107:6808/719340092 2 ==== command_reply(tid 2: 0 ) ==== 8+0+12 (crc 0 0 0) 0x7efe54005c70 con 0x7efe54001630 2026-03-10T10:15:20.236 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 >> v2:192.168.123.107:6808/719340092 conn(0x7efe54001630 msgr2=0x7efe54003af0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.236 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8de29640 1 --2- 192.168.123.104:0/2312618229 >> v2:192.168.123.107:6808/719340092 conn(0x7efe54001630 0x7efe54003af0 crc :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:20.236 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 >> v2:192.168.123.104:6800/3326026257 conn(0x7efe60077790 msgr2=0x7efe60079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.236 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8de29640 1 --2- 192.168.123.104:0/2312618229 >> v2:192.168.123.104:6800/3326026257 conn(0x7efe60077790 0x7efe60079c50 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7efe70002930 tx=0x7efe70005f50 comp rx=0 tx=0).stop 2026-03-10T10:15:20.236 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efe800b7ee0 msgr2=0x7efe80149b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:20.236 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8de29640 1 --2- 192.168.123.104:0/2312618229 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7efe800b7ee0 0x7efe80149b90 secure :-1 s=READY pgs=153 cs=0 l=1 rev1=1 crypto rx=0x7efe84009fd0 tx=0x7efe8400ef90 comp rx=0 tx=0).stop 2026-03-10T10:15:20.236 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8d628640 1 -- 192.168.123.104:0/2312618229 reap_dead start 2026-03-10T10:15:20.237 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 shutdown_connections 2026-03-10T10:15:20.237 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 >> 192.168.123.104:0/2312618229 conn(0x7efe8001a730 msgr2=0x7efe800b76c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:20.237 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 shutdown_connections 2026-03-10T10:15:20.237 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:20.233+0000 7efe8de29640 1 -- 192.168.123.104:0/2312618229 wait complete. 
2026-03-10T10:15:20.298 INFO:teuthology.orchestra.run.vm04.stdout:133143986215 2026-03-10T10:15:20.298 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd last-stat-seq osd.4 2026-03-10T10:15:20.333 INFO:teuthology.orchestra.run.vm04.stdout:158913789985 2026-03-10T10:15:20.333 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd last-stat-seq osd.5 2026-03-10T10:15:20.334 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:20 vm04 bash[28289]: cluster 2026-03-10T10:15:18.299963+0000 mgr.y (mgr.24422) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:20.334 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:20 vm04 bash[28289]: cluster 2026-03-10T10:15:18.299963+0000 mgr.y (mgr.24422) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:20.335 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:20 vm04 bash[20742]: cluster 2026-03-10T10:15:18.299963+0000 mgr.y (mgr.24422) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:20.335 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:20 vm04 bash[20742]: cluster 2026-03-10T10:15:18.299963+0000 mgr.y (mgr.24422) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:20.343 INFO:teuthology.orchestra.run.vm04.stdout:210453397524 2026-03-10T10:15:20.343 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd last-stat-seq osd.7 2026-03-10T10:15:20.362 INFO:teuthology.orchestra.run.vm04.stdout:111669149742 2026-03-10T10:15:20.362 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd last-stat-seq osd.3 2026-03-10T10:15:20.372 INFO:teuthology.orchestra.run.vm04.stdout:34359738434 2026-03-10T10:15:20.373 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd last-stat-seq osd.0 2026-03-10T10:15:20.459 INFO:teuthology.orchestra.run.vm04.stdout:77309411381 2026-03-10T10:15:20.460 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd last-stat-seq osd.2 2026-03-10T10:15:20.460 INFO:teuthology.orchestra.run.vm04.stdout:184683593754 2026-03-10T10:15:20.460 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph osd last-stat-seq osd.6 2026-03-10T10:15:20.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:20 vm07 
bash[23367]: cluster 2026-03-10T10:15:18.299963+0000 mgr.y (mgr.24422) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:20.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:20 vm07 bash[23367]: cluster 2026-03-10T10:15:18.299963+0000 mgr.y (mgr.24422) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:22 vm04 bash[28289]: cluster 2026-03-10T10:15:20.300569+0000 mgr.y (mgr.24422) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:15:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:22 vm04 bash[28289]: cluster 2026-03-10T10:15:20.300569+0000 mgr.y (mgr.24422) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:15:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:22 vm04 bash[20742]: cluster 2026-03-10T10:15:20.300569+0000 mgr.y (mgr.24422) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:15:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:22 vm04 bash[20742]: cluster 2026-03-10T10:15:20.300569+0000 mgr.y (mgr.24422) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:15:22.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:22 vm07 bash[23367]: cluster 2026-03-10T10:15:20.300569+0000 mgr.y (mgr.24422) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:15:22.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:22 vm07 bash[23367]: cluster 2026-03-10T10:15:20.300569+0000 mgr.y (mgr.24422) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:15:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:15:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:15:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:24 vm04 bash[28289]: cluster 2026-03-10T10:15:22.300795+0000 mgr.y (mgr.24422) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T10:15:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:24 vm04 bash[28289]: cluster 2026-03-10T10:15:22.300795+0000 mgr.y (mgr.24422) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T10:15:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:24 vm04 bash[20742]: cluster 2026-03-10T10:15:22.300795+0000 mgr.y (mgr.24422) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T10:15:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:24 vm04 bash[20742]: cluster 2026-03-10T10:15:22.300795+0000 mgr.y (mgr.24422) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 
2026-03-10T10:15:24.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:24 vm07 bash[23367]: cluster 2026-03-10T10:15:22.300795+0000 mgr.y (mgr.24422) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T10:15:24.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:24 vm07 bash[23367]: cluster 2026-03-10T10:15:22.300795+0000 mgr.y (mgr.24422) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 0 op/s 2026-03-10T10:15:24.774 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:24.774 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:24.776 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:24.779 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:24.781 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:24.781 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:24.783 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:24.785 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:25.014 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- 192.168.123.104:0/3301153503 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49d810b080 msgr2=0x7f49d8074d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.014 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 --2- 192.168.123.104:0/3301153503 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49d810b080 0x7f49d8074d30 secure :-1 s=READY pgs=155 cs=0 l=1 rev1=1 crypto rx=0x7f49d000b3e0 tx=0x7f49d002f6c0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.014 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- 192.168.123.104:0/3301153503 shutdown_connections 2026-03-10T10:15:25.014 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 --2- 192.168.123.104:0/3301153503 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49d8075470 0x7f49d807be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.014 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 --2- 192.168.123.104:0/3301153503 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49d810b080 0x7f49d8074d30 unknown :-1 s=CLOSED pgs=155 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.014 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 --2- 192.168.123.104:0/3301153503 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f49d810a6d0 0x7f49d810aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.014 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- 192.168.123.104:0/3301153503 >> 192.168.123.104:0/3301153503 conn(0x7f49d806d9f0 msgr2=0x7f49d806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.014 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- 192.168.123.104:0/3301153503 shutdown_connections 2026-03-10T10:15:25.014 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- 192.168.123.104:0/3301153503 wait complete. 2026-03-10T10:15:25.014 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 Processor -- start 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- start start 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f49d8075470 0x7f49d8085be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49d810a6d0 0x7f49d8086120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49d807fe00 0x7f49d80802b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f49d807e850 con 0x7f49d807fe00 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f49d807e6d0 con 0x7f49d810a6d0 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f49d807e9d0 con 0x7f49d8075470 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d77fe640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f49d8075470 0x7f49d8085be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d77fe640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f49d8075470 0x7f49d8085be0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:52566/0 (socket says 192.168.123.104:52566) 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d77fe640 1 -- 192.168.123.104:0/2538999384 learned_addr learned my addr 192.168.123.104:0/2538999384 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d7fff640 1 --2- 192.168.123.104:0/2538999384 >> 
[v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49d807fe00 0x7f49d80802b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d77fe640 1 -- 192.168.123.104:0/2538999384 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49d810a6d0 msgr2=0x7f49d8086120 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d77fe640 1 --2- 192.168.123.104:0/2538999384 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49d810a6d0 0x7f49d8086120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d77fe640 1 -- 192.168.123.104:0/2538999384 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49d807fe00 msgr2=0x7f49d80802b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d77fe640 1 --2- 192.168.123.104:0/2538999384 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49d807fe00 0x7f49d80802b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d77fe640 1 -- 192.168.123.104:0/2538999384 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f49d81be950 con 0x7f49d8075470 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d7fff640 1 --2- 192.168.123.104:0/2538999384 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49d807fe00 0x7f49d80802b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d77fe640 1 --2- 192.168.123.104:0/2538999384 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f49d8075470 0x7f49d8085be0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f49c800b570 tx=0x7f49c800ba40 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d4ff9640 1 -- 192.168.123.104:0/2538999384 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f49c8013020 con 0x7f49d8075470 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- 192.168.123.104:0/2538999384 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f49d81beb20 con 0x7f49d8075470 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49ddf47640 1 -- 192.168.123.104:0/2538999384 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f49d81bf030 con 0x7f49d8075470 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d4ff9640 1 -- 192.168.123.104:0/2538999384 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f49c8004480 con 0x7f49d8075470 2026-03-10T10:15:25.015 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.009+0000 7f49d4ff9640 1 -- 192.168.123.104:0/2538999384 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f49c800fb70 con 0x7f49d8075470 2026-03-10T10:15:25.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.013+0000 7f49d4ff9640 1 -- 192.168.123.104:0/2538999384 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f49c8020020 con 0x7f49d8075470 2026-03-10T10:15:25.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.013+0000 7f49d4ff9640 1 --2- 192.168.123.104:0/2538999384 >> v2:192.168.123.104:6800/3326026257 conn(0x7f49c4077790 0x7f49c4079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.013+0000 7f49d6ffd640 1 --2- 192.168.123.104:0/2538999384 >> v2:192.168.123.104:6800/3326026257 conn(0x7f49c4077790 0x7f49c4079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.013+0000 7f49d4ff9640 1 -- 192.168.123.104:0/2538999384 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f49c809a080 con 0x7f49d8075470 2026-03-10T10:15:25.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.013+0000 7f49d6ffd640 1 --2- 192.168.123.104:0/2538999384 >> v2:192.168.123.104:6800/3326026257 conn(0x7f49c4077790 0x7f49c4079c50 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f49d8081420 tx=0x7f49d003b040 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.020 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.013+0000 7f49ddf47640 1 -- 192.168.123.104:0/2538999384 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": 
"get_command_descriptions"} v 0) -- 0x7f49a0005180 con 0x7f49d8075470 2026-03-10T10:15:25.034 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.017+0000 7f49d4ff9640 1 -- 192.168.123.104:0/2538999384 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f49c8066ee0 con 0x7f49d8075470 2026-03-10T10:15:25.171 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 -- 192.168.123.104:0/411901092 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7eff4010a850 msgr2=0x7eff4010acb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.171 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 --2- 192.168.123.104:0/411901092 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7eff4010a850 0x7eff4010acb0 secure :-1 s=READY pgs=156 cs=0 l=1 rev1=1 crypto rx=0x7eff28009a60 tx=0x7eff2802f290 comp rx=0 tx=0).stop 2026-03-10T10:15:25.171 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 -- 192.168.123.104:0/411901092 shutdown_connections 2026-03-10T10:15:25.171 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 --2- 192.168.123.104:0/411901092 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff4011c780 0x7eff4011eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.171 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 --2- 192.168.123.104:0/411901092 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7eff4010a850 0x7eff4010acb0 unknown :-1 s=CLOSED pgs=156 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.171 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 --2- 192.168.123.104:0/411901092 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7eff4010a470 0x7eff401114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.171 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 -- 192.168.123.104:0/411901092 >> 192.168.123.104:0/411901092 conn(0x7eff4006d9c0 msgr2=0x7eff4006ddd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 -- 192.168.123.104:0/411901092 shutdown_connections 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 -- 192.168.123.104:0/411901092 wait complete. 
2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 Processor -- start 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 -- start start 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7eff4010a470 0x7eff401a95e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff4011c780 0x7eff401a9b20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7eff401adeb0 0x7eff401ae360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7eff40121400 con 0x7eff4010a470 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7eff40121280 con 0x7eff4011c780 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3f577640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7eff40121580 con 0x7eff401adeb0 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3ed76640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7eff401adeb0 0x7eff401ae360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3ed76640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7eff401adeb0 0x7eff401ae360 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:52582/0 (socket says 192.168.123.104:52582) 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3ed76640 1 -- 192.168.123.104:0/142442713 learned_addr learned my addr 192.168.123.104:0/142442713 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3e575640 1 --2- 192.168.123.104:0/142442713 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7eff4010a470 0x7eff401a95e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3e575640 1 -- 192.168.123.104:0/142442713 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7eff401adeb0 msgr2=0x7eff401ae360 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.172 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3e575640 1 --2- 192.168.123.104:0/142442713 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7eff401adeb0 0x7eff401ae360 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3e575640 1 -- 192.168.123.104:0/142442713 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff4011c780 msgr2=0x7eff401a9b20 unknown :-1 s=STATE_CONNECTING_RE l=1).mark_down 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3e575640 1 --2- 192.168.123.104:0/142442713 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff4011c780 0x7eff401a9b20 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3e575640 1 -- 192.168.123.104:0/142442713 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7eff401aea00 con 0x7eff4010a470 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.161+0000 7eff3e575640 1 --2- 192.168.123.104:0/142442713 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7eff4010a470 0x7eff401a95e0 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7eff3400e9f0 tx=0x7eff3400eec0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.172 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.169+0000 7eff2f7fe640 1 -- 192.168.123.104:0/142442713 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7eff3400cd80 con 0x7eff4010a470 2026-03-10T10:15:25.174 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.169+0000 7eff3f577640 1 -- 192.168.123.104:0/142442713 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7eff401aecf0 con 0x7eff4010a470 2026-03-10T10:15:25.174 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.169+0000 7eff3f577640 1 -- 192.168.123.104:0/142442713 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7eff401120d0 con 0x7eff4010a470 2026-03-10T10:15:25.174 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.169+0000 7eff2f7fe640 1 -- 192.168.123.104:0/142442713 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7eff34004540 con 0x7eff4010a470 2026-03-10T10:15:25.174 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.169+0000 7eff2f7fe640 1 -- 192.168.123.104:0/142442713 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7eff34010690 con 0x7eff4010a470 2026-03-10T10:15:25.174 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.169+0000 7eff2d7fa640 1 -- 192.168.123.104:0/142442713 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7efefc005180 con 0x7eff4010a470 2026-03-10T10:15:25.187 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.173+0000 7eff2f7fe640 1 -- 192.168.123.104:0/142442713 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7eff34010830 con 0x7eff4010a470 2026-03-10T10:15:25.187 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.173+0000 7eff2f7fe640 1 --2- 192.168.123.104:0/142442713 >> v2:192.168.123.104:6800/3326026257 conn(0x7eff04077790 0x7eff04079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.187 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.173+0000 7eff3dd74640 1 --2- 192.168.123.104:0/142442713 >> v2:192.168.123.104:6800/3326026257 conn(0x7eff04077790 0x7eff04079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.187 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.177+0000 7eff3dd74640 1 --2- 192.168.123.104:0/142442713 >> v2:192.168.123.104:6800/3326026257 conn(0x7eff04077790 0x7eff04079c50 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7eff28009950 tx=0x7eff280023d0 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.187 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.177+0000 7eff2f7fe640 1 -- 192.168.123.104:0/142442713 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7eff3409a680 con 0x7eff4010a470 2026-03-10T10:15:25.187 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.177+0000 7eff2f7fe640 1 -- 192.168.123.104:0/142442713 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7eff3409ab40 con 0x7eff4010a470 2026-03-10T10:15:25.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 -- 192.168.123.104:0/1285432651 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f44c410b080 msgr2=0x7f44c4074d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.219 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 --2- 192.168.123.104:0/1285432651 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f44c410b080 0x7f44c4074d30 secure :-1 s=READY pgs=158 cs=0 l=1 rev1=1 crypto rx=0x7f44b8009e10 tx=0x7f44b802f5f0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 -- 192.168.123.104:0/1285432651 shutdown_connections 2026-03-10T10:15:25.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 --2- 192.168.123.104:0/1285432651 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f44c4075470 0x7f44c407be20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 --2- 192.168.123.104:0/1285432651 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f44c410b080 0x7f44c4074d30 unknown :-1 s=CLOSED pgs=158 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 --2- 192.168.123.104:0/1285432651 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f44c410a6d0 0x7f44c410aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 -- 192.168.123.104:0/1285432651 >> 192.168.123.104:0/1285432651 
conn(0x7f44c406d9f0 msgr2=0x7f44c406de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.220 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 -- 192.168.123.104:0/1285432651 shutdown_connections 2026-03-10T10:15:25.230 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 -- 192.168.123.104:0/1285432651 wait complete. 2026-03-10T10:15:25.230 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 Processor -- start 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 -- start start 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f44c4075470 0x7f44c4085ae0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f44c410a6d0 0x7f44c4086020 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f44c407fc40 0x7f44c40800f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f44c407e760 con 0x7f44c4075470 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f44c407e5e0 con 0x7f44c410a6d0 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.217+0000 7f44ca4ec640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f44c407e8e0 con 0x7f44c407fc40 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.221+0000 7f44c3fff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f44c4075470 0x7f44c4085ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.221+0000 7f44c8a62640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f44c407fc40 0x7f44c40800f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.221+0000 7f44c8a62640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f44c407fc40 0x7f44c40800f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:52598/0 (socket says 192.168.123.104:52598) 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.221+0000 7f44c8a62640 1 -- 192.168.123.104:0/230236358 learned_addr learned my addr 192.168.123.104:0/230236358 (peer_addr_for_me v2:192.168.123.104:0/0) 
2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.225+0000 7f44c8a62640 1 -- 192.168.123.104:0/230236358 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f44c410a6d0 msgr2=0x7f44c4086020 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.225+0000 7f44c8a62640 1 --2- 192.168.123.104:0/230236358 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f44c410a6d0 0x7f44c4086020 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.225+0000 7f44c8a62640 1 -- 192.168.123.104:0/230236358 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f44c4075470 msgr2=0x7f44c4085ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.225+0000 7f44c8a62640 1 --2- 192.168.123.104:0/230236358 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f44c4075470 0x7f44c4085ae0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.225+0000 7f44c8a62640 1 -- 192.168.123.104:0/230236358 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f44c41c7360 con 0x7f44c407fc40 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.225+0000 7f44c3fff640 1 --2- 192.168.123.104:0/230236358 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f44c4075470 0x7f44c4085ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.225+0000 7f44c8a62640 1 --2- 192.168.123.104:0/230236358 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f44c407fc40 0x7f44c40800f0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f44bc00e3c0 tx=0x7f44bc00e890 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.231 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.229+0000 7f44c17fa640 1 -- 192.168.123.104:0/230236358 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f44bc0040d0 con 0x7f44c407fc40 2026-03-10T10:15:25.241 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.229+0000 7f44ca4ec640 1 -- 192.168.123.104:0/230236358 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f44c41c7530 con 0x7f44c407fc40 2026-03-10T10:15:25.241 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.229+0000 7f44ca4ec640 1 -- 192.168.123.104:0/230236358 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f44c41c7980 con 0x7f44c407fc40 2026-03-10T10:15:25.241 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.237+0000 7f44c17fa640 1 -- 192.168.123.104:0/230236358 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f44bc018070 con 0x7f44c407fc40 2026-03-10T10:15:25.241 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.237+0000 7f44c17fa640 1 -- 192.168.123.104:0/230236358 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f44bc0138c0 con 0x7f44c407fc40 2026-03-10T10:15:25.241 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.237+0000 7f44a2ffd640 1 -- 192.168.123.104:0/230236358 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4490005180 con 0x7f44c407fc40 2026-03-10T10:15:25.241 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.237+0000 7f44c17fa640 1 -- 192.168.123.104:0/230236358 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f44bc013a60 con 0x7f44c407fc40 2026-03-10T10:15:25.241 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.237+0000 7f44c17fa640 1 --2- 192.168.123.104:0/230236358 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4494077790 0x7f4494079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.241 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.237+0000 7f44c17fa640 1 -- 192.168.123.104:0/230236358 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f44bc09b390 con 0x7f44c407fc40 2026-03-10T10:15:25.242 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.237+0000 7f44c3fff640 1 --2- 192.168.123.104:0/230236358 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4494077790 0x7f4494079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.242 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.237+0000 7f44c3fff640 1 --2- 192.168.123.104:0/230236358 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4494077790 0x7f4494079c50 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f44b4004160 tx=0x7f44b4009290 comp rx=0 tx=0).ready entity=mgr.24422 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.246 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.241+0000 7f44c17fa640 1 -- 192.168.123.104:0/230236358 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f44bc010040 con 0x7f44c407fc40 2026-03-10T10:15:25.254 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.245+0000 7fdacc9e2640 1 -- 192.168.123.104:0/248748662 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fdac8075470 msgr2=0x7fdac807be20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.254 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.245+0000 7fdacc9e2640 1 --2- 192.168.123.104:0/248748662 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fdac8075470 0x7fdac807be20 secure :-1 s=READY pgs=159 cs=0 l=1 rev1=1 crypto rx=0x7fdac000b3e0 tx=0x7fdac002f5b0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.254 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 -- 192.168.123.104:0/248748662 shutdown_connections 2026-03-10T10:15:25.254 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 --2- 192.168.123.104:0/248748662 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fdac8075470 0x7fdac807be20 unknown :-1 s=CLOSED pgs=159 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.254 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 --2- 192.168.123.104:0/248748662 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdac810b080 0x7fdac8074d30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.254 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 --2- 192.168.123.104:0/248748662 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fdac810a6d0 0x7fdac810aab0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.254 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 -- 192.168.123.104:0/248748662 >> 192.168.123.104:0/248748662 conn(0x7fdac806d9f0 msgr2=0x7fdac806de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 -- 192.168.123.104:0/248748662 shutdown_connections 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 -- 192.168.123.104:0/248748662 wait complete. 
2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 Processor -- start 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 -- start start 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fdac810a6d0 0x7fdac8085ad0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdac810b080 0x7fdac8086010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fdac807fc30 0x7fdac80800e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fdac807e7d0 con 0x7fdac810a6d0 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fdac807e650 con 0x7fdac810b080 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdacc9e2640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fdac807e950 con 0x7fdac807fc30 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdac6575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fdac810a6d0 0x7fdac8085ad0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdac6575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fdac810a6d0 0x7fdac8085ad0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:59088/0 (socket says 192.168.123.104:59088) 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdac6575640 1 -- 192.168.123.104:0/2877714411 learned_addr learned my addr 192.168.123.104:0/2877714411 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdac6575640 1 -- 192.168.123.104:0/2877714411 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fdac807fc30 msgr2=0x7fdac80800e0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdac6575640 1 --2- 192.168.123.104:0/2877714411 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fdac807fc30 0x7fdac80800e0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdac6575640 1 -- 
192.168.123.104:0/2877714411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdac810b080 msgr2=0x7fdac8086010 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdac6575640 1 --2- 192.168.123.104:0/2877714411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdac810b080 0x7fdac8086010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdac6575640 1 -- 192.168.123.104:0/2877714411 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fdac80809a0 con 0x7fdac810a6d0 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.249+0000 7fdac6575640 1 --2- 192.168.123.104:0/2877714411 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fdac810a6d0 0x7fdac8085ad0 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7fdab800b810 tx=0x7fdab800bce0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.256 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.253+0000 7fdab77fe640 1 -- 192.168.123.104:0/2877714411 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fdab8004270 con 0x7fdac810a6d0 2026-03-10T10:15:25.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.253+0000 7fdacc9e2640 1 -- 192.168.123.104:0/2877714411 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fdac81bea10 con 0x7fdac810a6d0 2026-03-10T10:15:25.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.253+0000 7fdacc9e2640 1 -- 192.168.123.104:0/2877714411 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fdac81bef20 con 0x7fdac810a6d0 2026-03-10T10:15:25.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.257+0000 7fdab77fe640 1 -- 192.168.123.104:0/2877714411 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fdab8010070 con 0x7fdac810a6d0 2026-03-10T10:15:25.259 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.257+0000 7fdab77fe640 1 -- 192.168.123.104:0/2877714411 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fdab800ca50 con 0x7fdac810a6d0 2026-03-10T10:15:25.262 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.257+0000 7fdab77fe640 1 -- 192.168.123.104:0/2877714411 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fdab800ccb0 con 0x7fdac810a6d0 2026-03-10T10:15:25.268 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.257+0000 7fdab77fe640 1 --2- 192.168.123.104:0/2877714411 >> v2:192.168.123.104:6800/3326026257 conn(0x7fda980777c0 0x7fda98079c80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.269 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 -- 192.168.123.104:0/2031170072 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd42410a470 msgr2=0x7fd4241114d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.269 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 --2- 192.168.123.104:0/2031170072 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd42410a470 0x7fd4241114d0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fd41c00b0a0 tx=0x7fd41c02f450 comp rx=0 tx=0).stop 2026-03-10T10:15:25.269 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 -- 192.168.123.104:0/2031170072 shutdown_connections 2026-03-10T10:15:25.269 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 --2- 192.168.123.104:0/2031170072 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd42411c780 0x7fd42411eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.269 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 --2- 192.168.123.104:0/2031170072 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd42410a850 0x7fd42410acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.269 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 --2- 192.168.123.104:0/2031170072 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd42410a470 0x7fd4241114d0 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.269 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 -- 192.168.123.104:0/2031170072 >> 192.168.123.104:0/2031170072 conn(0x7fd42406d9f0 msgr2=0x7fd42406de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.269 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 -- 192.168.123.104:0/2031170072 shutdown_connections 2026-03-10T10:15:25.270 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 -- 192.168.123.104:0/2031170072 wait complete. 
2026-03-10T10:15:25.270 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fdac5d74640 1 --2- 192.168.123.104:0/2877714411 >> v2:192.168.123.104:6800/3326026257 conn(0x7fda980777c0 0x7fda98079c80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.270 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fdab77fe640 1 -- 192.168.123.104:0/2877714411 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7fdab8099e70 con 0x7fdac810a6d0 2026-03-10T10:15:25.270 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fdac5d74640 1 --2- 192.168.123.104:0/2877714411 >> v2:192.168.123.104:6800/3326026257 conn(0x7fda980777c0 0x7fda98079c80 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fdabc0059c0 tx=0x7fdabc005950 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.270 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 Processor -- start 2026-03-10T10:15:25.271 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 -- start start 2026-03-10T10:15:25.271 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd42410a850 0x7fd424112a60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.271 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd42411c780 0x7fd424112fa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.271 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.265+0000 7fd429267640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4241bbc20 0x7fd4241be010 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.271 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.269+0000 7fd429267640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fd424121430 con 0x7fd42411c780 2026-03-10T10:15:25.271 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.269+0000 7fd429267640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fd4241212b0 con 0x7fd4241bbc20 2026-03-10T10:15:25.271 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.269+0000 7fd429267640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fd4241215b0 con 0x7fd42410a850 2026-03-10T10:15:25.274 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.269+0000 7fd422d76640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd42410a850 0x7fd424112a60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.274 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.269+0000 7fd422d76640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd42410a850 0x7fd424112a60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am 
v2:192.168.123.104:52618/0 (socket says 192.168.123.104:52618) 2026-03-10T10:15:25.274 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.269+0000 7fd422d76640 1 -- 192.168.123.104:0/328214764 learned_addr learned my addr 192.168.123.104:0/328214764 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:25.274 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.269+0000 7fd423577640 1 --2- 192.168.123.104:0/328214764 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4241bbc20 0x7fd4241be010 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.274 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.269+0000 7fdab57fa640 1 -- 192.168.123.104:0/2877714411 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fda94005180 con 0x7fdac810a6d0 2026-03-10T10:15:25.274 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.269+0000 7fd422575640 1 --2- 192.168.123.104:0/328214764 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd42411c780 0x7fd424112fa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.278 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.269+0000 7fdab77fe640 1 -- 192.168.123.104:0/2877714411 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fdab8066de0 con 0x7fdac810a6d0 2026-03-10T10:15:25.279 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.273+0000 7fd422575640 1 -- 192.168.123.104:0/328214764 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd42410a850 msgr2=0x7fd424112a60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.279 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.273+0000 7fd422575640 1 --2- 192.168.123.104:0/328214764 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd42410a850 0x7fd424112a60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.279 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.273+0000 7fd422575640 1 -- 192.168.123.104:0/328214764 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4241bbc20 msgr2=0x7fd4241be010 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.279 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.273+0000 7fd422575640 1 --2- 192.168.123.104:0/328214764 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4241bbc20 0x7fd4241be010 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.279 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.273+0000 7fd422575640 1 -- 192.168.123.104:0/328214764 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd4241be5d0 con 0x7fd42411c780 2026-03-10T10:15:25.280 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.277+0000 7fd422575640 1 --2- 192.168.123.104:0/328214764 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd42411c780 0x7fd424112fa0 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7fd40c00cce0 tx=0x7fd40c007590 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 
server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.280 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.277+0000 7fd403fff640 1 -- 192.168.123.104:0/328214764 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd40c013070 con 0x7fd42411c780 2026-03-10T10:15:25.280 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.277+0000 7fd429267640 1 -- 192.168.123.104:0/328214764 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd4241be8c0 con 0x7fd42411c780 2026-03-10T10:15:25.280 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.277+0000 7fd429267640 1 -- 192.168.123.104:0/328214764 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd4241bee00 con 0x7fd42411c780 2026-03-10T10:15:25.280 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.277+0000 7fd403fff640 1 -- 192.168.123.104:0/328214764 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fd40c0044e0 con 0x7fd42411c780 2026-03-10T10:15:25.280 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.277+0000 7fd403fff640 1 -- 192.168.123.104:0/328214764 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fd40c002e40 con 0x7fd42411c780 2026-03-10T10:15:25.285 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.277+0000 7fd403fff640 1 -- 192.168.123.104:0/328214764 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fd40c020020 con 0x7fd42411c780 2026-03-10T10:15:25.285 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.277+0000 7fd403fff640 1 --2- 192.168.123.104:0/328214764 >> v2:192.168.123.104:6800/3326026257 conn(0x7fd3f4077790 0x7fd3f4079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.285 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.277+0000 7fd403fff640 1 -- 192.168.123.104:0/328214764 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7fd40c09a5f0 con 0x7fd42411c780 2026-03-10T10:15:25.285 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.277+0000 7fd401ffb640 1 -- 192.168.123.104:0/328214764 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd3f0005180 con 0x7fd42411c780 2026-03-10T10:15:25.287 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.285+0000 7fd422d76640 1 --2- 192.168.123.104:0/328214764 >> v2:192.168.123.104:6800/3326026257 conn(0x7fd3f4077790 0x7fd3f4079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.287 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.285+0000 7fd422d76640 1 --2- 192.168.123.104:0/328214764 >> v2:192.168.123.104:6800/3326026257 conn(0x7fd3f4077790 0x7fd3f4079c50 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fd41c002790 tx=0x7fd41c03a040 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.290 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.285+0000 7fd403fff640 1 -- 192.168.123.104:0/328214764 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fd40c067450 con 0x7fd42411c780 
2026-03-10T10:15:25.395 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.393+0000 7f49ddf47640 1 -- 192.168.123.104:0/2538999384 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 3} v 0) -- 0x7f49a0005470 con 0x7f49d8075470 2026-03-10T10:15:25.398 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.393+0000 7f49d4ff9640 1 -- 192.168.123.104:0/2538999384 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 3}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f49c806bd90 con 0x7f49d8075470 2026-03-10T10:15:25.398 INFO:teuthology.orchestra.run.vm04.stdout:111669149743 2026-03-10T10:15:25.400 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.397+0000 7f49ddf47640 1 -- 192.168.123.104:0/2538999384 >> v2:192.168.123.104:6800/3326026257 conn(0x7f49c4077790 msgr2=0x7f49c4079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.407 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 --2- 192.168.123.104:0/2538999384 >> v2:192.168.123.104:6800/3326026257 conn(0x7f49c4077790 0x7f49c4079c50 secure :-1 s=READY pgs=39 cs=0 l=1 rev1=1 crypto rx=0x7f49d8081420 tx=0x7f49d003b040 comp rx=0 tx=0).stop 2026-03-10T10:15:25.410 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 -- 192.168.123.104:0/2538999384 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f49d8075470 msgr2=0x7f49d8085be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.410 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 --2- 192.168.123.104:0/2538999384 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f49d8075470 0x7f49d8085be0 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7f49c800b570 tx=0x7f49c800ba40 comp rx=0 tx=0).stop 2026-03-10T10:15:25.410 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 -- 192.168.123.104:0/2538999384 shutdown_connections 2026-03-10T10:15:25.410 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 --2- 192.168.123.104:0/2538999384 >> v2:192.168.123.104:6800/3326026257 conn(0x7f49c4077790 0x7f49c4079c50 unknown :-1 s=CLOSED pgs=39 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.410 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 --2- 192.168.123.104:0/2538999384 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f49d807fe00 0x7f49d80802b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.410 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 --2- 192.168.123.104:0/2538999384 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f49d810a6d0 0x7f49d8086120 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.410 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 --2- 192.168.123.104:0/2538999384 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f49d8075470 0x7f49d8085be0 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.410 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 -- 192.168.123.104:0/2538999384 >> 192.168.123.104:0/2538999384 conn(0x7f49d806d9f0 
msgr2=0x7f49d810d490 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.410 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 -- 192.168.123.104:0/2538999384 shutdown_connections 2026-03-10T10:15:25.410 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.405+0000 7f49ddf47640 1 -- 192.168.123.104:0/2538999384 wait complete. 2026-03-10T10:15:25.425 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.421+0000 7f3717633640 1 -- 192.168.123.104:0/372416102 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f37080a4db0 msgr2=0x7f37080a5190 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.425 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.421+0000 7f3717633640 1 --2- 192.168.123.104:0/372416102 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f37080a4db0 0x7f37080a5190 secure :-1 s=READY pgs=74 cs=0 l=1 rev1=1 crypto rx=0x7f370c00b0a0 tx=0x7f370c02f430 comp rx=0 tx=0).stop 2026-03-10T10:15:25.428 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 -- 192.168.123.104:0/372416102 shutdown_connections 2026-03-10T10:15:25.428 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 --2- 192.168.123.104:0/372416102 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f37080b7eb0 0x7f37080ba2a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.428 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 --2- 192.168.123.104:0/372416102 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37080a56d0 0x7f37080b7970 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.428 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 --2- 192.168.123.104:0/372416102 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f37080a4db0 0x7f37080a5190 unknown :-1 s=CLOSED pgs=74 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.428 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 -- 192.168.123.104:0/372416102 >> 192.168.123.104:0/372416102 conn(0x7f370801a740 msgr2=0x7f370801ab50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.428 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 -- 192.168.123.104:0/372416102 shutdown_connections 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 -- 192.168.123.104:0/372416102 wait complete. 
2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 Processor -- start 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 -- start start 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37080a4db0 0x7f37081451c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f37080a56d0 0x7f3708145700 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f37080b7eb0 0x7f3708149a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f37080bcc10 con 0x7f37080a56d0 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f37080bca90 con 0x7f37080a4db0 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3717633640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f37080bcd90 con 0x7f37080b7eb0 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3715ba9640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f37080b7eb0 0x7f3708149a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3714ba7640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f37080a56d0 0x7f3708145700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3715ba9640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f37080b7eb0 0x7f3708149a90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:52638/0 (socket says 192.168.123.104:52638) 2026-03-10T10:15:25.429 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3715ba9640 1 -- 192.168.123.104:0/448172173 learned_addr learned my addr 192.168.123.104:0/448172173 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3714ba7640 1 -- 192.168.123.104:0/448172173 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f37080b7eb0 msgr2=0x7f3708149a90 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 
7f37153a8640 1 --2- 192.168.123.104:0/448172173 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37080a4db0 0x7f37081451c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3714ba7640 1 --2- 192.168.123.104:0/448172173 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f37080b7eb0 0x7f3708149a90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3714ba7640 1 -- 192.168.123.104:0/448172173 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37080a4db0 msgr2=0x7f37081451c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3714ba7640 1 --2- 192.168.123.104:0/448172173 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37080a4db0 0x7f37081451c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3714ba7640 1 -- 192.168.123.104:0/448172173 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f370814a210 con 0x7f37080a56d0 2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3715ba9640 1 --2- 192.168.123.104:0/448172173 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f37080b7eb0 0x7f3708149a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f3714ba7640 1 --2- 192.168.123.104:0/448172173 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f37080a56d0 0x7f3708145700 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f370000d9f0 tx=0x7f370000dec0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f37153a8640 1 --2- 192.168.123.104:0/448172173 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37080a4db0 0x7f37081451c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.425+0000 7f36fe7fc640 1 -- 192.168.123.104:0/448172173 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3700014070 con 0x7f37080a56d0 2026-03-10T10:15:25.430 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.429+0000 7f36fe7fc640 1 -- 192.168.123.104:0/448172173 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f370000bd70 con 0x7f37080a56d0 2026-03-10T10:15:25.431 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.429+0000 7f3717633640 1 -- 192.168.123.104:0/448172173 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f370814a500 con 0x7f37080a56d0 2026-03-10T10:15:25.431 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.429+0000 7f36fe7fc640 1 -- 192.168.123.104:0/448172173 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f3700005020 con 0x7f37080a56d0 2026-03-10T10:15:25.431 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.429+0000 7f3717633640 1 -- 192.168.123.104:0/448172173 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f37080acf50 con 0x7f37080a56d0 2026-03-10T10:15:25.431 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.429+0000 7f3717633640 1 -- 192.168.123.104:0/448172173 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f37080a6130 con 0x7f37080a56d0 2026-03-10T10:15:25.431 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.429+0000 7f1246b67640 1 -- 192.168.123.104:0/1014409501 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f124011c780 msgr2=0x7f124011eb70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.432 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.429+0000 7f1246b67640 1 --2- 192.168.123.104:0/1014409501 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f124011c780 0x7f124011eb70 secure :-1 s=READY pgs=75 cs=0 l=1 rev1=1 crypto rx=0x7f123000b3e0 tx=0x7f123002f5f0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.435 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.429+0000 7f36fe7fc640 1 -- 192.168.123.104:0/448172173 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f3700020020 con 0x7f37080a56d0 2026-03-10T10:15:25.435 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.429+0000 7f36fe7fc640 1 --2- 192.168.123.104:0/448172173 >> v2:192.168.123.104:6800/3326026257 conn(0x7f36e0077790 0x7f36e0079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.435 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.429+0000 7f36fe7fc640 1 -- 192.168.123.104:0/448172173 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f370009a2f0 con 0x7f37080a56d0 2026-03-10T10:15:25.435 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.433+0000 7f37153a8640 1 --2- 192.168.123.104:0/448172173 >> v2:192.168.123.104:6800/3326026257 conn(0x7f36e0077790 0x7f36e0079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.435 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.433+0000 7f1246b67640 1 
-- 192.168.123.104:0/1014409501 shutdown_connections 2026-03-10T10:15:25.435 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.433+0000 7f1246b67640 1 --2- 192.168.123.104:0/1014409501 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f124011c780 0x7f124011eb70 unknown :-1 s=CLOSED pgs=75 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.435 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.433+0000 7f1246b67640 1 --2- 192.168.123.104:0/1014409501 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f124010a850 0x7f124010acd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.435 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.433+0000 7f1246b67640 1 --2- 192.168.123.104:0/1014409501 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f124010a470 0x7f12401114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.435 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.433+0000 7f1246b67640 1 -- 192.168.123.104:0/1014409501 >> 192.168.123.104:0/1014409501 conn(0x7f124006d9f0 msgr2=0x7f124006de00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.435 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.433+0000 7f1246b67640 1 -- 192.168.123.104:0/1014409501 shutdown_connections 2026-03-10T10:15:25.436 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.433+0000 7f37153a8640 1 --2- 192.168.123.104:0/448172173 >> v2:192.168.123.104:6800/3326026257 conn(0x7f36e0077790 0x7f36e0079c50 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f370c0062a0 tx=0x7f370c002750 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.436 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.433+0000 7f36fe7fc640 1 -- 192.168.123.104:0/448172173 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f3700067150 con 0x7f37080a56d0 2026-03-10T10:15:25.439 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.433+0000 7f1246b67640 1 -- 192.168.123.104:0/1014409501 wait complete. 
2026-03-10T10:15:25.439 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f1246b67640 1 Processor -- start 2026-03-10T10:15:25.439 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f1246b67640 1 -- start start 2026-03-10T10:15:25.439 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f1246b67640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f124010a470 0x7f124011bb80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.439 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f1246b67640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f124010a850 0x7f124011c0c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.439 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f1246b67640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f124011c780 0x7f1240114d00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f1246b67640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f12401213a0 con 0x7f124010a850 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f1246b67640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f1240121220 con 0x7f124010a470 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f1246b67640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f1240121520 con 0x7f124011c780 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f12450dd640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f124011c780 0x7f1240114d00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f12450dd640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f124011c780 0x7f1240114d00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:52650/0 (socket says 192.168.123.104:52650) 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f12450dd640 1 -- 192.168.123.104:0/659655326 learned_addr learned my addr 192.168.123.104:0/659655326 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f12450dd640 1 -- 192.168.123.104:0/659655326 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f124010a470 msgr2=0x7f124011bb80 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f12450dd640 1 --2- 192.168.123.104:0/659655326 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f124010a470 0x7f124011bb80 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f12450dd640 1 -- 
192.168.123.104:0/659655326 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f124010a850 msgr2=0x7f124011c0c0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f12450dd640 1 --2- 192.168.123.104:0/659655326 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f124010a850 0x7f124011c0c0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f12450dd640 1 -- 192.168.123.104:0/659655326 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f12401155c0 con 0x7f124011c780 2026-03-10T10:15:25.440 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f12450dd640 1 --2- 192.168.123.104:0/659655326 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f124011c780 0x7f1240114d00 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f123002fb00 tx=0x7f1230002c90 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.442 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f123dffb640 1 -- 192.168.123.104:0/659655326 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1230046070 con 0x7f124011c780 2026-03-10T10:15:25.442 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f123dffb640 1 -- 192.168.123.104:0/659655326 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f1230007710 con 0x7f124011c780 2026-03-10T10:15:25.442 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f123dffb640 1 -- 192.168.123.104:0/659655326 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f1230037430 con 0x7f124011c780 2026-03-10T10:15:25.443 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f1246b67640 1 -- 192.168.123.104:0/659655326 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f1240115850 con 0x7f124011c780 2026-03-10T10:15:25.443 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.437+0000 7f1246b67640 1 -- 192.168.123.104:0/659655326 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f1240119880 con 0x7f124011c780 2026-03-10T10:15:25.444 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.441+0000 7f123dffb640 1 -- 192.168.123.104:0/659655326 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f12300375d0 con 0x7f124011c780 2026-03-10T10:15:25.446 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.441+0000 7f123dffb640 1 --2- 192.168.123.104:0/659655326 >> v2:192.168.123.104:6800/3326026257 conn(0x7f121c077790 0x7f121c079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.447 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.441+0000 7f123dffb640 1 -- 192.168.123.104:0/659655326 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f12300bf0e0 con 0x7f124011c780 2026-03-10T10:15:25.447 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.445+0000 7f12448dc640 1 --2- 192.168.123.104:0/659655326 >> v2:192.168.123.104:6800/3326026257 conn(0x7f121c077790 0x7f121c079c50 
unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.448 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.445+0000 7f12448dc640 1 --2- 192.168.123.104:0/659655326 >> v2:192.168.123.104:6800/3326026257 conn(0x7f121c077790 0x7f121c079c50 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f12340097c0 tx=0x7f1234009340 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.448 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.445+0000 7f12177fe640 1 -- 192.168.123.104:0/659655326 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f120c005180 con 0x7f124011c780 2026-03-10T10:15:25.463 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.461+0000 7ff689008640 1 -- 192.168.123.104:0/223025294 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff68410a850 msgr2=0x7ff68410acb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.463 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.461+0000 7ff689008640 1 --2- 192.168.123.104:0/223025294 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff68410a850 0x7ff68410acb0 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7ff66c009990 tx=0x7ff66c02f1b0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 -- 192.168.123.104:0/223025294 shutdown_connections 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 --2- 192.168.123.104:0/223025294 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff68411c780 0x7ff68411eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 --2- 192.168.123.104:0/223025294 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff68410a850 0x7ff68410acb0 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 --2- 192.168.123.104:0/223025294 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff68410a470 0x7ff6841114d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 -- 192.168.123.104:0/223025294 >> 192.168.123.104:0/223025294 conn(0x7ff68406d9c0 msgr2=0x7ff68406ddd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 -- 192.168.123.104:0/223025294 shutdown_connections 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 -- 192.168.123.104:0/223025294 wait complete. 
2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 Processor -- start 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 -- start start 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff68410a470 0x7ff6841af560 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff68410a850 0x7ff6841afaa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff68411c780 0x7ff6841a9630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7ff684121400 con 0x7ff68410a470 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7ff684121280 con 0x7ff68410a850 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff689008640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7ff684121580 con 0x7ff68411c780 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff683fff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff68410a470 0x7ff6841af560 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff688807640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff68411c780 0x7ff6841a9630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff688807640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff68411c780 0x7ff6841a9630 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:52676/0 (socket says 192.168.123.104:52676) 2026-03-10T10:15:25.470 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff688807640 1 -- 192.168.123.104:0/1185491510 learned_addr learned my addr 192.168.123.104:0/1185491510 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:25.471 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7f123dffb640 1 -- 192.168.123.104:0/659655326 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f123008bf40 con 0x7f124011c780 2026-03-10T10:15:25.471 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff683fff640 1 -- 192.168.123.104:0/1185491510 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff68411c780 msgr2=0x7ff6841a9630 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.471 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff683fff640 1 --2- 192.168.123.104:0/1185491510 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff68411c780 0x7ff6841a9630 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.471 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff683fff640 1 -- 192.168.123.104:0/1185491510 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff68410a850 msgr2=0x7ff6841afaa0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:25.471 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff683fff640 1 --2- 192.168.123.104:0/1185491510 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff68410a850 0x7ff6841afaa0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.471 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff683fff640 1 -- 192.168.123.104:0/1185491510 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff6841a9e90 con 0x7ff68410a470 2026-03-10T10:15:25.471 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff688807640 1 --2- 192.168.123.104:0/1185491510 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff68411c780 0x7ff6841a9630 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-10T10:15:25.471 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.465+0000 7ff683fff640 1 --2- 192.168.123.104:0/1185491510 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff68410a470 0x7ff6841af560 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7ff6780029e0 tx=0x7ff678002eb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.475 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.469+0000 7ff6817fa640 1 -- 192.168.123.104:0/1185491510 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff67800ebf0 con 0x7ff68410a470 2026-03-10T10:15:25.475 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.473+0000 7ff6817fa640 1 -- 192.168.123.104:0/1185491510 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7ff67800ed90 con 0x7ff68410a470 2026-03-10T10:15:25.475 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.473+0000 7ff6817fa640 1 -- 192.168.123.104:0/1185491510 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7ff67800f650 con 0x7ff68410a470 2026-03-10T10:15:25.475 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.473+0000 7ff689008640 1 -- 192.168.123.104:0/1185491510 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff6841aa180 con 0x7ff68410a470 2026-03-10T10:15:25.476 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.473+0000 7ff689008640 1 -- 192.168.123.104:0/1185491510 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff684111d60 con 0x7ff68410a470 2026-03-10T10:15:25.479 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.473+0000 7ff689008640 1 -- 192.168.123.104:0/1185491510 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff684074650 con 0x7ff68410a470 2026-03-10T10:15:25.481 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.477+0000 7ff6817fa640 1 -- 192.168.123.104:0/1185491510 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7ff67800f7f0 con 0x7ff68410a470 2026-03-10T10:15:25.481 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.477+0000 7ff6817fa640 1 --2- 192.168.123.104:0/1185491510 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff660077790 0x7ff660079c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:25.481 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.477+0000 7ff6817fa640 1 -- 192.168.123.104:0/1185491510 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7ff67809a070 con 0x7ff68410a470 2026-03-10T10:15:25.482 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.477+0000 7ff6837fe640 1 --2- 192.168.123.104:0/1185491510 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff660077790 0x7ff660079c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:25.482 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.477+0000 7ff6837fe640 1 --2- 192.168.123.104:0/1185491510 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff660077790 0x7ff660079c50 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7ff66c02f6c0 tx=0x7ff66c0023d0 comp rx=0 tx=0).ready 
entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:25.482 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.477+0000 7ff6817fa640 1 -- 192.168.123.104:0/1185491510 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7ff678014070 con 0x7ff68410a470 2026-03-10T10:15:25.548 INFO:tasks.cephadm.ceph_manager.ceph:need seq 111669149742 got 111669149743 for osd.3 2026-03-10T10:15:25.549 DEBUG:teuthology.parallel:result is None 2026-03-10T10:15:25.581 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.577+0000 7eff2d7fa640 1 -- 192.168.123.104:0/142442713 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 0x7efefc005740 con 0x7eff4010a470 2026-03-10T10:15:25.581 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.577+0000 7eff2f7fe640 1 -- 192.168.123.104:0/142442713 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7eff340674e0 con 0x7eff4010a470 2026-03-10T10:15:25.583 INFO:teuthology.orchestra.run.vm04.stdout:55834574908 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 -- 192.168.123.104:0/142442713 >> v2:192.168.123.104:6800/3326026257 conn(0x7eff04077790 msgr2=0x7eff04079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 --2- 192.168.123.104:0/142442713 >> v2:192.168.123.104:6800/3326026257 conn(0x7eff04077790 0x7eff04079c50 secure :-1 s=READY pgs=40 cs=0 l=1 rev1=1 crypto rx=0x7eff28009950 tx=0x7eff280023d0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 -- 192.168.123.104:0/142442713 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7eff4010a470 msgr2=0x7eff401a95e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 --2- 192.168.123.104:0/142442713 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7eff4010a470 0x7eff401a95e0 secure :-1 s=READY pgs=157 cs=0 l=1 rev1=1 crypto rx=0x7eff3400e9f0 tx=0x7eff3400eec0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 -- 192.168.123.104:0/142442713 shutdown_connections 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 --2- 192.168.123.104:0/142442713 >> v2:192.168.123.104:6800/3326026257 conn(0x7eff04077790 0x7eff04079c50 unknown :-1 s=CLOSED pgs=40 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 --2- 192.168.123.104:0/142442713 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7eff401adeb0 0x7eff401ae360 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 --2- 192.168.123.104:0/142442713 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7eff4011c780 0x7eff401a9b20 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 
comp rx=0 tx=0).stop 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 --2- 192.168.123.104:0/142442713 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7eff4010a470 0x7eff401a95e0 unknown :-1 s=CLOSED pgs=157 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 -- 192.168.123.104:0/142442713 >> 192.168.123.104:0/142442713 conn(0x7eff4006d9c0 msgr2=0x7eff40073410 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 -- 192.168.123.104:0/142442713 shutdown_connections 2026-03-10T10:15:25.584 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.581+0000 7eff3f577640 1 -- 192.168.123.104:0/142442713 wait complete. 2026-03-10T10:15:25.631 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.629+0000 7f44a2ffd640 1 -- 192.168.123.104:0/230236358 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 6} v 0) -- 0x7f4490005740 con 0x7f44c407fc40 2026-03-10T10:15:25.632 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.629+0000 7f44c17fa640 1 -- 192.168.123.104:0/230236358 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 6}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f44bc0681f0 con 0x7f44c407fc40 2026-03-10T10:15:25.632 INFO:teuthology.orchestra.run.vm04.stdout:184683593755 2026-03-10T10:15:25.641 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.637+0000 7fdab57fa640 1 -- 192.168.123.104:0/2877714411 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) -- 0x7fda94005470 con 0x7fdac810a6d0 2026-03-10T10:15:25.641 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.637+0000 7fdab77fe640 1 -- 192.168.123.104:0/2877714411 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7fdab806bc90 con 0x7fdac810a6d0 2026-03-10T10:15:25.641 INFO:teuthology.orchestra.run.vm04.stdout:34359738435 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 -- 192.168.123.104:0/230236358 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4494077790 msgr2=0x7f4494079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 --2- 192.168.123.104:0/230236358 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4494077790 0x7f4494079c50 secure :-1 s=READY pgs=41 cs=0 l=1 rev1=1 crypto rx=0x7f44b4004160 tx=0x7f44b4009290 comp rx=0 tx=0).stop 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 -- 192.168.123.104:0/230236358 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f44c407fc40 msgr2=0x7f44c40800f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 --2- 192.168.123.104:0/230236358 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f44c407fc40 0x7f44c40800f0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f44bc00e3c0 tx=0x7f44bc00e890 comp rx=0 tx=0).stop 2026-03-10T10:15:25.643 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 -- 192.168.123.104:0/230236358 shutdown_connections 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 --2- 192.168.123.104:0/230236358 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4494077790 0x7f4494079c50 unknown :-1 s=CLOSED pgs=41 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 --2- 192.168.123.104:0/230236358 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f44c407fc40 0x7f44c40800f0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 --2- 192.168.123.104:0/230236358 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f44c410a6d0 0x7f44c4086020 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 --2- 192.168.123.104:0/230236358 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f44c4075470 0x7f44c4085ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 -- 192.168.123.104:0/230236358 >> 192.168.123.104:0/230236358 conn(0x7f44c406d9f0 msgr2=0x7f44c40732a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 -- 192.168.123.104:0/230236358 shutdown_connections 2026-03-10T10:15:25.643 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7f44ca4ec640 1 -- 192.168.123.104:0/230236358 wait complete. 
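[editor's note] The tasks.cephadm.ceph_manager lines of the form "need seq X got Y for osd.N" show the flush-wait pattern driving this stretch of the log: after asking each OSD to flush its PG stats, the harness polls the monitor with `ceph osd last-stat-seq <id>` (the bare integers on the stdout lines, e.g. 55834574908) until the monitor-side sequence reaches the target. A minimal sketch of that loop, assuming a `ceph` CLI on $PATH; the helper names are illustrative, not teuthology's actual API:

```python
import subprocess
import time

def last_stat_seq(osd_id: int) -> int:
    """Ask the monitor for the newest stat sequence it has seen from one OSD.
    `ceph osd last-stat-seq <id>` prints a bare integer, as in the stdout
    lines above (e.g. 55834574908)."""
    out = subprocess.check_output(
        ["ceph", "osd", "last-stat-seq", str(osd_id)], text=True)
    return int(out.strip())

def wait_for_stat_seq(osd_id: int, need: int, timeout: float = 300.0) -> int:
    """Hypothetical helper mirroring the 'need seq X got Y for osd.N'
    messages above: poll until the monitor has seen sequence >= need."""
    deadline = time.monotonic() + timeout
    while True:
        got = last_stat_seq(osd_id)
        print(f"need seq {need} got {got} for osd.{osd_id}")
        if got >= need:
            return got
        if time.monotonic() > deadline:
            raise TimeoutError(f"osd.{osd_id} stat seq stuck at {got}")
        time.sleep(1)
```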
2026-03-10T10:15:25.645 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.641+0000 7fdab57fa640 1 -- 192.168.123.104:0/2877714411 >> v2:192.168.123.104:6800/3326026257 conn(0x7fda980777c0 msgr2=0x7fda98079c80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.648 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 --2- 192.168.123.104:0/2877714411 >> v2:192.168.123.104:6800/3326026257 conn(0x7fda980777c0 0x7fda98079c80 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x7fdabc0059c0 tx=0x7fdabc005950 comp rx=0 tx=0).stop 2026-03-10T10:15:25.648 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 -- 192.168.123.104:0/2877714411 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fdac810a6d0 msgr2=0x7fdac8085ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.648 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 --2- 192.168.123.104:0/2877714411 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fdac810a6d0 0x7fdac8085ad0 secure :-1 s=READY pgs=160 cs=0 l=1 rev1=1 crypto rx=0x7fdab800b810 tx=0x7fdab800bce0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.649 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 -- 192.168.123.104:0/2877714411 shutdown_connections 2026-03-10T10:15:25.649 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 --2- 192.168.123.104:0/2877714411 >> v2:192.168.123.104:6800/3326026257 conn(0x7fda980777c0 0x7fda98079c80 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.649 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 --2- 192.168.123.104:0/2877714411 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fdac807fc30 0x7fdac80800e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.649 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 --2- 192.168.123.104:0/2877714411 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fdac810b080 0x7fdac8086010 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.649 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 --2- 192.168.123.104:0/2877714411 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fdac810a6d0 0x7fdac8085ad0 unknown :-1 s=CLOSED pgs=160 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.649 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 -- 192.168.123.104:0/2877714411 >> 192.168.123.104:0/2877714411 conn(0x7fdac806d9f0 msgr2=0x7fdac80732a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.650 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 -- 192.168.123.104:0/2877714411 shutdown_connections 2026-03-10T10:15:25.650 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.645+0000 7fdab57fa640 1 -- 192.168.123.104:0/2877714411 wait complete. 
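[editor's note] The DEBUG:teuthology.parallel "result is None" lines interleaved through this stretch indicate the per-OSD waits run concurrently, which is why the stderr teardown chatter of several short-lived clients overlaps. A sketch of the same fan-out using only the standard library (not teuthology's own parallel helper), reusing the hypothetical wait_for_stat_seq from the previous sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def flush_all(targets: dict[int, int]) -> None:
    """Wait for every OSD's stat seq in parallel.
    `targets` maps osd id -> sequence the flush must reach;
    wait_for_stat_seq is the hypothetical helper sketched above."""
    with ThreadPoolExecutor(max_workers=len(targets)) as pool:
        futures = [pool.submit(wait_for_stat_seq, osd, seq)
                   for osd, seq in targets.items()]
        for fut in futures:
            fut.result()  # propagate the first TimeoutError, if any

# Example targets taken from the 'need seq ... got ...' lines in this log.
flush_all({0: 34359738434, 1: 55834574907, 2: 77309411381, 3: 111669149742})
```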
2026-03-10T10:15:25.678 INFO:teuthology.orchestra.run.vm04.stdout:77309411382 2026-03-10T10:15:25.678 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.661+0000 7fd401ffb640 1 -- 192.168.123.104:0/328214764 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) -- 0x7fd3f0005470 con 0x7fd42411c780 2026-03-10T10:15:25.678 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.673+0000 7fd403fff640 1 -- 192.168.123.104:0/328214764 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7fd40c06c300 con 0x7fd42411c780 2026-03-10T10:15:25.678 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.673+0000 7fd401ffb640 1 -- 192.168.123.104:0/328214764 >> v2:192.168.123.104:6800/3326026257 conn(0x7fd3f4077790 msgr2=0x7fd3f4079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.678 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.673+0000 7fd401ffb640 1 --2- 192.168.123.104:0/328214764 >> v2:192.168.123.104:6800/3326026257 conn(0x7fd3f4077790 0x7fd3f4079c50 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fd41c002790 tx=0x7fd41c03a040 comp rx=0 tx=0).stop 2026-03-10T10:15:25.679 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574907 got 55834574908 for osd.1 2026-03-10T10:15:25.679 DEBUG:teuthology.parallel:result is None 2026-03-10T10:15:25.679 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.673+0000 7fd401ffb640 1 -- 192.168.123.104:0/328214764 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd42411c780 msgr2=0x7fd424112fa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.679 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.673+0000 7fd401ffb640 1 --2- 192.168.123.104:0/328214764 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd42411c780 0x7fd424112fa0 secure :-1 s=READY pgs=161 cs=0 l=1 rev1=1 crypto rx=0x7fd40c00cce0 tx=0x7fd40c007590 comp rx=0 tx=0).stop 2026-03-10T10:15:25.679 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.677+0000 7fd401ffb640 1 -- 192.168.123.104:0/328214764 shutdown_connections 2026-03-10T10:15:25.679 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.677+0000 7fd401ffb640 1 --2- 192.168.123.104:0/328214764 >> v2:192.168.123.104:6800/3326026257 conn(0x7fd3f4077790 0x7fd3f4079c50 unknown :-1 s=CLOSED pgs=43 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.679 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.677+0000 7fd401ffb640 1 --2- 192.168.123.104:0/328214764 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fd4241bbc20 0x7fd4241be010 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.679 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.677+0000 7fd401ffb640 1 --2- 192.168.123.104:0/328214764 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fd42411c780 0x7fd424112fa0 unknown :-1 s=CLOSED pgs=161 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.679 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.677+0000 7fd401ffb640 1 --2- 192.168.123.104:0/328214764 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fd42410a850 0x7fd424112a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.679 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.677+0000 7fd401ffb640 1 -- 192.168.123.104:0/328214764 >> 192.168.123.104:0/328214764 conn(0x7fd42406d9f0 msgr2=0x7fd42411cb60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.679 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.677+0000 7fd401ffb640 1 -- 192.168.123.104:0/328214764 shutdown_connections 2026-03-10T10:15:25.680 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.677+0000 7fd401ffb640 1 -- 192.168.123.104:0/328214764 wait complete. 2026-03-10T10:15:25.779 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738434 got 34359738435 for osd.0 2026-03-10T10:15:25.779 DEBUG:teuthology.parallel:result is None 2026-03-10T10:15:25.791 INFO:teuthology.orchestra.run.vm04.stdout:158913789986 2026-03-10T10:15:25.791 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.785+0000 7f3717633640 1 -- 192.168.123.104:0/448172173 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 5} v 0) -- 0x7f3708146520 con 0x7f37080a56d0 2026-03-10T10:15:25.791 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.785+0000 7f36fe7fc640 1 -- 192.168.123.104:0/448172173 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 5}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f370006c000 con 0x7f37080a56d0 2026-03-10T10:15:25.796 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.793+0000 7f36e7fff640 1 -- 192.168.123.104:0/448172173 >> v2:192.168.123.104:6800/3326026257 conn(0x7f36e0077790 msgr2=0x7f36e0079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.796 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.793+0000 7f36e7fff640 1 --2- 192.168.123.104:0/448172173 >> v2:192.168.123.104:6800/3326026257 conn(0x7f36e0077790 0x7f36e0079c50 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x7f370c0062a0 tx=0x7f370c002750 comp rx=0 tx=0).stop 2026-03-10T10:15:25.796 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.793+0000 7f36e7fff640 1 -- 192.168.123.104:0/448172173 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f37080a56d0 msgr2=0x7f3708145700 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.797 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.793+0000 7f36e7fff640 1 --2- 192.168.123.104:0/448172173 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f37080a56d0 0x7f3708145700 secure :-1 s=READY pgs=162 cs=0 l=1 rev1=1 crypto rx=0x7f370000d9f0 tx=0x7f370000dec0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.809 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.805+0000 7f36e7fff640 1 -- 192.168.123.104:0/448172173 shutdown_connections 2026-03-10T10:15:25.809 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.805+0000 7f36e7fff640 1 --2- 192.168.123.104:0/448172173 >> v2:192.168.123.104:6800/3326026257 conn(0x7f36e0077790 0x7f36e0079c50 unknown :-1 s=CLOSED pgs=44 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.809 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.805+0000 7f36e7fff640 1 --2- 192.168.123.104:0/448172173 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f37080b7eb0 0x7f3708149a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.809 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.805+0000 7f36e7fff640 1 --2- 
192.168.123.104:0/448172173 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f37080a56d0 0x7f3708145700 unknown :-1 s=CLOSED pgs=162 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.809 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.805+0000 7f36e7fff640 1 --2- 192.168.123.104:0/448172173 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f37080a4db0 0x7f37081451c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.809 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.805+0000 7f36e7fff640 1 -- 192.168.123.104:0/448172173 >> 192.168.123.104:0/448172173 conn(0x7f370801a740 msgr2=0x7f37080b6130 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.810 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.809+0000 7f36e7fff640 1 -- 192.168.123.104:0/448172173 shutdown_connections 2026-03-10T10:15:25.811 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.809+0000 7f36e7fff640 1 -- 192.168.123.104:0/448172173 wait complete. 2026-03-10T10:15:25.831 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411381 got 77309411382 for osd.2 2026-03-10T10:15:25.831 DEBUG:teuthology.parallel:result is None 2026-03-10T10:15:25.835 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.829+0000 7f12177fe640 1 -- 192.168.123.104:0/659655326 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 4} v 0) -- 0x7f120c005470 con 0x7f124011c780 2026-03-10T10:15:25.838 INFO:teuthology.orchestra.run.vm04.stdout:133143986216 2026-03-10T10:15:25.838 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.833+0000 7f123dffb640 1 -- 192.168.123.104:0/659655326 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 4}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7f1230090df0 con 0x7f124011c780 2026-03-10T10:15:25.842 INFO:tasks.cephadm.ceph_manager.ceph:need seq 184683593754 got 184683593755 for osd.6 2026-03-10T10:15:25.842 DEBUG:teuthology.parallel:result is None 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 -- 192.168.123.104:0/659655326 >> v2:192.168.123.104:6800/3326026257 conn(0x7f121c077790 msgr2=0x7f121c079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 --2- 192.168.123.104:0/659655326 >> v2:192.168.123.104:6800/3326026257 conn(0x7f121c077790 0x7f121c079c50 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f12340097c0 tx=0x7f1234009340 comp rx=0 tx=0).stop 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 -- 192.168.123.104:0/659655326 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f124011c780 msgr2=0x7f1240114d00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 --2- 192.168.123.104:0/659655326 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f124011c780 0x7f1240114d00 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f123002fb00 tx=0x7f1230002c90 comp rx=0 tx=0).stop 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 -- 192.168.123.104:0/659655326 shutdown_connections 2026-03-10T10:15:25.851 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 --2- 192.168.123.104:0/659655326 >> v2:192.168.123.104:6800/3326026257 conn(0x7f121c077790 0x7f121c079c50 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 --2- 192.168.123.104:0/659655326 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f124011c780 0x7f1240114d00 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 --2- 192.168.123.104:0/659655326 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f124010a850 0x7f124011c0c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 --2- 192.168.123.104:0/659655326 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f124010a470 0x7f124011bb80 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 -- 192.168.123.104:0/659655326 >> 192.168.123.104:0/659655326 conn(0x7f124006d9f0 msgr2=0x7f1240071540 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 -- 192.168.123.104:0/659655326 shutdown_connections 2026-03-10T10:15:25.851 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.845+0000 7f1246b67640 1 -- 192.168.123.104:0/659655326 wait complete. 
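[editor's note] Each `osd last-stat-seq` query above is issued by a fresh `ceph` CLI process, and the surrounding messenger noise (connect, banner, s=READY, then mark_down, shutdown_connections, "wait complete") is that process building and tearing down a full monitor session per call. With the `rados` Python binding the same queries can share one session, so that setup and teardown happens once. A hedged sketch, assuming python3-rados is installed and a readable ceph.conf/admin keyring:

```python
import json
import rados

# One persistent monitor session instead of one CLI process per query.
# The conffile path is illustrative; adjust for the cluster at hand.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    for osd_id in range(8):
        # Same command JSON the audit log records above, e.g.
        # {"prefix": "osd last-stat-seq", "id": 3}
        cmd = json.dumps({"prefix": "osd last-stat-seq", "id": osd_id})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret == 0:
            print(f"osd.{osd_id} last-stat-seq:", int(outbuf))
finally:
    cluster.shutdown()
```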
2026-03-10T10:15:25.867 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.861+0000 7ff689008640 1 -- 192.168.123.104:0/1185491510 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 7} v 0) -- 0x7ff6841aab60 con 0x7ff68410a470 2026-03-10T10:15:25.868 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.865+0000 7ff6817fa640 1 -- 192.168.123.104:0/1185491510 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 7}]=0 v0) ==== 74+0+13 (secure 0 0 0) 0x7ff678066ed0 con 0x7ff68410a470 2026-03-10T10:15:25.869 INFO:teuthology.orchestra.run.vm04.stdout:210453397525 2026-03-10T10:15:25.875 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.869+0000 7ff689008640 1 -- 192.168.123.104:0/1185491510 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff660077790 msgr2=0x7ff660079c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.875 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.869+0000 7ff689008640 1 --2- 192.168.123.104:0/1185491510 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff660077790 0x7ff660079c50 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7ff66c02f6c0 tx=0x7ff66c0023d0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.875 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.869+0000 7ff689008640 1 -- 192.168.123.104:0/1185491510 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff68410a470 msgr2=0x7ff6841af560 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:25.875 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.869+0000 7ff689008640 1 --2- 192.168.123.104:0/1185491510 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff68410a470 0x7ff6841af560 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7ff6780029e0 tx=0x7ff678002eb0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.875 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.873+0000 7ff689008640 1 -- 192.168.123.104:0/1185491510 shutdown_connections 2026-03-10T10:15:25.875 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.873+0000 7ff689008640 1 --2- 192.168.123.104:0/1185491510 >> v2:192.168.123.104:6800/3326026257 conn(0x7ff660077790 0x7ff660079c50 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.875 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.873+0000 7ff689008640 1 --2- 192.168.123.104:0/1185491510 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7ff68411c780 0x7ff6841a9630 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.875 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.873+0000 7ff689008640 1 --2- 192.168.123.104:0/1185491510 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7ff68410a850 0x7ff6841afaa0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.875 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.873+0000 7ff689008640 1 --2- 192.168.123.104:0/1185491510 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7ff68410a470 0x7ff6841af560 unknown :-1 s=CLOSED pgs=163 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:25.875 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.873+0000 7ff689008640 1 -- 192.168.123.104:0/1185491510 >> 192.168.123.104:0/1185491510 conn(0x7ff68406d9c0 
msgr2=0x7ff68411cc60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:25.876 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.873+0000 7ff689008640 1 -- 192.168.123.104:0/1185491510 shutdown_connections 2026-03-10T10:15:25.876 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:25.873+0000 7ff689008640 1 -- 192.168.123.104:0/1185491510 wait complete. 2026-03-10T10:15:25.926 INFO:tasks.cephadm.ceph_manager.ceph:need seq 158913789985 got 158913789986 for osd.5 2026-03-10T10:15:25.926 DEBUG:teuthology.parallel:result is None 2026-03-10T10:15:25.934 INFO:tasks.cephadm.ceph_manager.ceph:need seq 133143986215 got 133143986216 for osd.4 2026-03-10T10:15:25.934 DEBUG:teuthology.parallel:result is None 2026-03-10T10:15:25.975 INFO:tasks.cephadm.ceph_manager.ceph:need seq 210453397524 got 210453397525 for osd.7 2026-03-10T10:15:25.975 DEBUG:teuthology.parallel:result is None 2026-03-10T10:15:25.975 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T10:15:25.975 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph pg dump --format=json 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: cluster 2026-03-10T10:15:24.301174+0000 mgr.y (mgr.24422) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: cluster 2026-03-10T10:15:24.301174+0000 mgr.y (mgr.24422) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.399189+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.104:0/2538999384' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.399189+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.104:0/2538999384' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.582917+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.104:0/142442713' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.582917+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.104:0/142442713' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.633181+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 192.168.123.104:0/230236358' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.633181+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 
192.168.123.104:0/230236358' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.642545+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.104:0/2877714411' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.642545+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.104:0/2877714411' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.676273+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.104:0/328214764' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.676273+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.104:0/328214764' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.789556+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.104:0/448172173' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.789556+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.104:0/448172173' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.836819+0000 mon.c (mon.2) 29 : audit [DBG] from='client.? 192.168.123.104:0/659655326' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.836819+0000 mon.c (mon.2) 29 : audit [DBG] from='client.? 192.168.123.104:0/659655326' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.869217+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.104:0/1185491510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T10:15:26.187 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:26 vm04 bash[20742]: audit 2026-03-10T10:15:25.869217+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 
192.168.123.104:0/1185491510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: cluster 2026-03-10T10:15:24.301174+0000 mgr.y (mgr.24422) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: cluster 2026-03-10T10:15:24.301174+0000 mgr.y (mgr.24422) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.399189+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.104:0/2538999384' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.399189+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.104:0/2538999384' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.582917+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.104:0/142442713' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.582917+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.104:0/142442713' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.633181+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 192.168.123.104:0/230236358' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.633181+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 192.168.123.104:0/230236358' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.642545+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.104:0/2877714411' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.642545+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.104:0/2877714411' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T10:15:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.676273+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.104:0/328214764' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T10:15:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.676273+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 
192.168.123.104:0/328214764' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T10:15:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.789556+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.104:0/448172173' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T10:15:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.789556+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.104:0/448172173' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T10:15:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.836819+0000 mon.c (mon.2) 29 : audit [DBG] from='client.? 192.168.123.104:0/659655326' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T10:15:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.836819+0000 mon.c (mon.2) 29 : audit [DBG] from='client.? 192.168.123.104:0/659655326' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T10:15:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.869217+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.104:0/1185491510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T10:15:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:26 vm04 bash[28289]: audit 2026-03-10T10:15:25.869217+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.104:0/1185491510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T10:15:26.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: cluster 2026-03-10T10:15:24.301174+0000 mgr.y (mgr.24422) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: cluster 2026-03-10T10:15:24.301174+0000 mgr.y (mgr.24422) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.399189+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.104:0/2538999384' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.399189+0000 mon.c (mon.2) 27 : audit [DBG] from='client.? 192.168.123.104:0/2538999384' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.582917+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 192.168.123.104:0/142442713' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.582917+0000 mon.a (mon.0) 822 : audit [DBG] from='client.? 
192.168.123.104:0/142442713' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.633181+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 192.168.123.104:0/230236358' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.633181+0000 mon.c (mon.2) 28 : audit [DBG] from='client.? 192.168.123.104:0/230236358' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.642545+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.104:0/2877714411' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.642545+0000 mon.a (mon.0) 823 : audit [DBG] from='client.? 192.168.123.104:0/2877714411' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.676273+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.104:0/328214764' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.676273+0000 mon.a (mon.0) 824 : audit [DBG] from='client.? 192.168.123.104:0/328214764' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.789556+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.104:0/448172173' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.789556+0000 mon.a (mon.0) 825 : audit [DBG] from='client.? 192.168.123.104:0/448172173' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.836819+0000 mon.c (mon.2) 29 : audit [DBG] from='client.? 192.168.123.104:0/659655326' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.836819+0000 mon.c (mon.2) 29 : audit [DBG] from='client.? 192.168.123.104:0/659655326' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.869217+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 192.168.123.104:0/1185491510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T10:15:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:26 vm07 bash[23367]: audit 2026-03-10T10:15:25.869217+0000 mon.a (mon.0) 826 : audit [DBG] from='client.? 
192.168.123.104:0/1185491510' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T10:15:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:28 vm04 bash[28289]: cluster 2026-03-10T10:15:26.301461+0000 mgr.y (mgr.24422) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:28 vm04 bash[28289]: cluster 2026-03-10T10:15:26.301461+0000 mgr.y (mgr.24422) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:28 vm04 bash[28289]: audit 2026-03-10T10:15:27.375322+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:28 vm04 bash[28289]: audit 2026-03-10T10:15:27.375322+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:28 vm04 bash[20742]: cluster 2026-03-10T10:15:26.301461+0000 mgr.y (mgr.24422) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:28 vm04 bash[20742]: cluster 2026-03-10T10:15:26.301461+0000 mgr.y (mgr.24422) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:28 vm04 bash[20742]: audit 2026-03-10T10:15:27.375322+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:28 vm04 bash[20742]: audit 2026-03-10T10:15:27.375322+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:28.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:15:28 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:15:28.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:28 vm07 bash[23367]: cluster 2026-03-10T10:15:26.301461+0000 mgr.y (mgr.24422) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:28.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:28 vm07 bash[23367]: cluster 2026-03-10T10:15:26.301461+0000 mgr.y (mgr.24422) 60 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:28.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:28 vm07 bash[23367]: audit 2026-03-10T10:15:27.375322+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:28.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:28 vm07 bash[23367]: audit 
2026-03-10T10:15:27.375322+0000 mon.a (mon.0) 827 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:29 vm04 bash[28289]: audit 2026-03-10T10:15:28.136305+0000 mgr.y (mgr.24422) 61 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:29 vm04 bash[28289]: audit 2026-03-10T10:15:28.136305+0000 mgr.y (mgr.24422) 61 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:29 vm04 bash[20742]: audit 2026-03-10T10:15:28.136305+0000 mgr.y (mgr.24422) 61 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:29 vm04 bash[20742]: audit 2026-03-10T10:15:28.136305+0000 mgr.y (mgr.24422) 61 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:29.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:29 vm07 bash[23367]: audit 2026-03-10T10:15:28.136305+0000 mgr.y (mgr.24422) 61 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:29.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:29 vm07 bash[23367]: audit 2026-03-10T10:15:28.136305+0000 mgr.y (mgr.24422) 61 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:30 vm04 bash[28289]: cluster 2026-03-10T10:15:28.301675+0000 mgr.y (mgr.24422) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:30 vm04 bash[28289]: cluster 2026-03-10T10:15:28.301675+0000 mgr.y (mgr.24422) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:30 vm04 bash[20742]: cluster 2026-03-10T10:15:28.301675+0000 mgr.y (mgr.24422) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:30 vm04 bash[20742]: cluster 2026-03-10T10:15:28.301675+0000 mgr.y (mgr.24422) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:30.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:30 vm07 bash[23367]: cluster 2026-03-10T10:15:28.301675+0000 mgr.y (mgr.24422) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:30.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:30 vm07 bash[23367]: cluster 2026-03-10T10:15:28.301675+0000 mgr.y (mgr.24422) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB 
used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:30.605 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:30.752 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 -- 192.168.123.104:0/1294887055 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ebc10f4b0 msgr2=0x7f0ebc1118a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:30.752 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 --2- 192.168.123.104:0/1294887055 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ebc10f4b0 0x7f0ebc1118a0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f0eac00b3e0 tx=0x7f0eac02f690 comp rx=0 tx=0).stop 2026-03-10T10:15:30.752 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 -- 192.168.123.104:0/1294887055 shutdown_connections 2026-03-10T10:15:30.752 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 --2- 192.168.123.104:0/1294887055 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ebc10f4b0 0x7f0ebc1118a0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.752 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 --2- 192.168.123.104:0/1294887055 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ebc1020a0 0x7f0ebc10ef70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.752 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 --2- 192.168.123.104:0/1294887055 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ebc101780 0x7f0ebc101b60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.753 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 -- 192.168.123.104:0/1294887055 >> 192.168.123.104:0/1294887055 conn(0x7f0ebc0fd650 msgr2=0x7f0ebc0ffa70 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:30.753 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 -- 192.168.123.104:0/1294887055 shutdown_connections 2026-03-10T10:15:30.753 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 -- 192.168.123.104:0/1294887055 wait complete. 
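[editor's note] The "waiting for clean" step issued `cephadm shell ... -- ceph pg dump --format=json` at 10:15:25.975; the JSON blob at the end of this excerpt is its reply, and the harness keeps polling it until every PG reports active+clean (the pgmap lines above already show "132 pgs: 132 active+clean"). A minimal version of that check, assuming pg dump's usual pg_map.pg_stats layout (the top-level pg_map key is visible in the JSON below; the per-PG state check is a sketch, not teuthology's exact is_clean logic):

```python
import json
import subprocess

def all_pgs_clean() -> bool:
    """True once every PG in `ceph pg dump --format=json` is active+clean.
    Substring states such as active+clean+scrubbing still count as clean,
    hence the set comparison rather than strict equality."""
    raw = subprocess.check_output(
        ["ceph", "pg", "dump", "--format=json"], text=True)
    dump = json.loads(raw)
    pg_stats = dump["pg_map"]["pg_stats"]  # assumed layout, see note above
    return all({"active", "clean"} <= set(pg["state"].split("+"))
               for pg in pg_stats)
```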
2026-03-10T10:15:30.753 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 Processor -- start 2026-03-10T10:15:30.753 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 -- start start 2026-03-10T10:15:30.753 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ebc101780 0x7f0ebc1a2720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:30.754 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ebc1020a0 0x7f0ebc1a2c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:30.754 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ebaffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ebc101780 0x7f0ebc1a2720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:30.754 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ebaffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ebc101780 0x7f0ebc1a2720 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:59160/0 (socket says 192.168.123.104:59160) 2026-03-10T10:15:30.754 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0eba7fc640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ebc1020a0 0x7f0ebc1a2c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:30.754 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ebc10f4b0 0x7f0ebc19c8e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:30.754 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f0ebc114570 con 0x7f0ebc101780 2026-03-10T10:15:30.754 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f0ebc1143f0 con 0x7f0ebc10f4b0 2026-03-10T10:15:30.754 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ec1548640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f0ebc1146f0 con 0x7f0ebc1020a0 2026-03-10T10:15:30.754 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ebaffd640 1 -- 192.168.123.104:0/2287870841 learned_addr learned my addr 192.168.123.104:0/2287870841 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:30.754 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.749+0000 7f0ebb7fe640 1 --2- 192.168.123.104:0/2287870841 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ebc10f4b0 0x7f0ebc19c8e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:30.755 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0ebaffd640 1 -- 192.168.123.104:0/2287870841 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ebc1020a0 msgr2=0x7f0ebc1a2c60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:30.755 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0ebaffd640 1 --2- 192.168.123.104:0/2287870841 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ebc1020a0 0x7f0ebc1a2c60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.755 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0ebaffd640 1 -- 192.168.123.104:0/2287870841 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ebc10f4b0 msgr2=0x7f0ebc19c8e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:30.755 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0ebaffd640 1 --2- 192.168.123.104:0/2287870841 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ebc10f4b0 0x7f0ebc19c8e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.755 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0ebaffd640 1 -- 192.168.123.104:0/2287870841 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0ebc19d010 con 0x7f0ebc101780 2026-03-10T10:15:30.755 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0ebb7fe640 1 --2- 192.168.123.104:0/2287870841 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ebc10f4b0 0x7f0ebc19c8e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 2026-03-10T10:15:30.755 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0eba7fc640 1 --2- 192.168.123.104:0/2287870841 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ebc1020a0 0x7f0ebc1a2c60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
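[editor's note] The entries from 10:15:30.749 onward trace a complete msgr2 client handshake: connections race to all three monitors (s=NONE, .connect), exchange banners (s=BANNER_CONNECTING, "_handle_peer_banner_payload supported=3 required=0"), the HELLO tells the client its own externally visible address ("says I am v2:192.168.123.104:59160/0", followed by learned_addr), and the two losing sessions are marked down mid-AUTH while the winner reaches s=READY in the next entry. A small hedged scraper for pulling those state transitions out of a log like this one; purely illustrative log parsing, not a Ceph API:

```python
import re

# Matches msgr2 lines above, e.g.
#   --2- 192.168.123.104:0/2287870841 >> [v2:...] conn(... s=READY ...
# Lines logged before learned_addr (no local address yet) simply don't match.
STATE_RE = re.compile(r"--2- (\S+) >> (\S+) conn\(.*? s=(\w+)")

def conn_states(lines):
    """Yield (local, peer, state) for each msgr2 state line, in log order."""
    for line in lines:
        m = STATE_RE.search(line)
        if m:
            yield m.groups()

# Usage: for local, peer, state in conn_states(open("teuthology.log")):
#            print(state, peer)
```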
2026-03-10T10:15:30.755 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0ebaffd640 1 --2- 192.168.123.104:0/2287870841 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ebc101780 0x7f0ebc1a2720 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7f0ea400ea10 tx=0x7f0ea400eee0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:30.757 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0e9bfff640 1 -- 192.168.123.104:0/2287870841 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0ea400ce50 con 0x7f0ebc101780 2026-03-10T10:15:30.757 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0e9bfff640 1 -- 192.168.123.104:0/2287870841 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f0ea40108a0 con 0x7f0ebc101780 2026-03-10T10:15:30.757 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0e9bfff640 1 -- 192.168.123.104:0/2287870841 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f0ea40188d0 con 0x7f0ebc101780 2026-03-10T10:15:30.757 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0ec1548640 1 -- 192.168.123.104:0/2287870841 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0ebc19d2a0 con 0x7f0ebc101780 2026-03-10T10:15:30.757 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0ec1548640 1 -- 192.168.123.104:0/2287870841 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0ebc1a95f0 con 0x7f0ebc101780 2026-03-10T10:15:30.757 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0e9bfff640 1 -- 192.168.123.104:0/2287870841 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f0ea4018a70 con 0x7f0ebc101780 2026-03-10T10:15:30.757 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0ec1548640 1 -- 192.168.123.104:0/2287870841 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0e80005180 con 0x7f0ebc101780 2026-03-10T10:15:30.759 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0e9bfff640 1 --2- 192.168.123.104:0/2287870841 >> v2:192.168.123.104:6800/3326026257 conn(0x7f0e90077860 0x7f0e90079d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:30.759 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.753+0000 7f0e9bfff640 1 -- 192.168.123.104:0/2287870841 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f0ea409a750 con 0x7f0ebc101780 2026-03-10T10:15:30.759 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.757+0000 7f0eba7fc640 1 --2- 192.168.123.104:0/2287870841 >> v2:192.168.123.104:6800/3326026257 conn(0x7f0e90077860 0x7f0e90079d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:30.761 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.757+0000 7f0eba7fc640 1 --2- 192.168.123.104:0/2287870841 >> v2:192.168.123.104:6800/3326026257 conn(0x7f0e90077860 0x7f0e90079d20 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f0ebc10f9b0 tx=0x7f0eb000a400 comp rx=0 tx=0).ready 
entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:30.762 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.757+0000 7f0e9bfff640 1 -- 192.168.123.104:0/2287870841 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f0ea4068200 con 0x7f0ebc101780 2026-03-10T10:15:30.848 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.845+0000 7f0ec1548640 1 -- 192.168.123.104:0/2287870841 --> v2:192.168.123.104:6800/3326026257 -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f0e80002bf0 con 0x7f0e90077860 2026-03-10T10:15:30.853 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.849+0000 7f0e9bfff640 1 -- 192.168.123.104:0/2287870841 <== mgr.24422 v2:192.168.123.104:6800/3326026257 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+346473 (secure 0 0 0) 0x7f0e80002bf0 con 0x7f0e90077860 2026-03-10T10:15:30.854 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:15:30.855 INFO:teuthology.orchestra.run.vm04.stderr:dumped all 2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 -- 192.168.123.104:0/2287870841 >> v2:192.168.123.104:6800/3326026257 conn(0x7f0e90077860 msgr2=0x7f0e90079d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 --2- 192.168.123.104:0/2287870841 >> v2:192.168.123.104:6800/3326026257 conn(0x7f0e90077860 0x7f0e90079d20 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f0ebc10f9b0 tx=0x7f0eb000a400 comp rx=0 tx=0).stop 2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 -- 192.168.123.104:0/2287870841 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ebc101780 msgr2=0x7f0ebc1a2720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 --2- 192.168.123.104:0/2287870841 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ebc101780 0x7f0ebc1a2720 secure :-1 s=READY pgs=164 cs=0 l=1 rev1=1 crypto rx=0x7f0ea400ea10 tx=0x7f0ea400eee0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 -- 192.168.123.104:0/2287870841 shutdown_connections 2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 --2- 192.168.123.104:0/2287870841 >> v2:192.168.123.104:6800/3326026257 conn(0x7f0e90077860 0x7f0e90079d20 unknown :-1 s=CLOSED pgs=47 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 --2- 192.168.123.104:0/2287870841 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f0ebc10f4b0 0x7f0ebc19c8e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 --2- 192.168.123.104:0/2287870841 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f0ebc1020a0 0x7f0ebc1a2c60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:30.857 
2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 --2- 192.168.123.104:0/2287870841 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f0ebc101780 0x7f0ebc1a2720 unknown :-1 s=CLOSED pgs=164 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 -- 192.168.123.104:0/2287870841 >> 192.168.123.104:0/2287870841 conn(0x7f0ebc0fd650 msgr2=0x7f0ebc0ffa40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 -- 192.168.123.104:0/2287870841 shutdown_connections
2026-03-10T10:15:30.857 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:30.853+0000 7f0ec1548640 1 -- 192.168.123.104:0/2287870841 wait complete.
2026-03-10T10:15:30.923 INFO:teuthology.orchestra.run.vm04.stdout:{"pg_ready":true,"pg_map":{"version":27,"stamp":"2026-03-10T10:15:30.301792+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":910,"num_read_kb":769,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221268,"kb_used_data":6556,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518124,"statfs":{"total":171765137408,"available":171538558976,"internally_reserved":0,"allocated":6713344,"data_stored":3395649,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":15,"num_read_kb":15,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects
_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001966"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385778+0000","last_change":"2026-03-10T10:14:19.317580+0000","last_active":"2026-03-10T10:14:42.385778+0000","last_peered":"2026-03-10T10:14:42.385778+0000","last_clean":"2026-03-10T10:14:42.385778+0000","last_became_active":"2026-03-10T10:14:19.317138+0000","last_became_peered":"2026-03-10T10:14:19.317138+0000","last_unstale":"2026-03-10T10:14:42.385778+0000","last_undegraded":"2026-03-10T10:14:42.385778+0000","last_fullsized":"2026-03-10T10:14:42.385778+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:45:13.473349+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993127+0000","last_change":"2026-03-10T10:14:12.941865+0000","last_active":"2026-03-10T10:14:42.993127+0000","last_peered":"2026-03-10T10:14:42.993127+0000","last_clean":"2026-03-10T10:14:42.993127+0000","last_became_active":"2026-03-10T10:14:12.941718+0000","last_became_peered":"2026-03-10T10:14:12.941718+0000","last_unstale":"2026-03-10T10:14:42.993127+0000","last_undegraded":"2026-03-10T10:14:42.993127+0000","last_fullsized":"2026-03-10T10:14:42.993127+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:47:48.070949+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"60'10","reported_seq":46,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385800+0000","last_change":"2026-03-10T10:14:14.954713+0000","last_active":"2026-03-10T10:14:42.385800+0000","last_peered":"2026-03-10T10:14:42.385800+0000","last_clean":"2026-03-10T10:14:42.385800+0000","last_became_active":"2026-03-10T10:14:14.953527+0000","last_became_peered":"2026-03-10T10:14:14.953527+0000","last_unstale":"2026-03-10T10:14:42.385800+0000","last_undegraded":"2026-03-10T10:14:42.385800+0000","last_fullsized":"2026-03-10T10:14:42.385800+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:50:42.294681+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":26,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281026+0000","last_change":"2026-03-10T10:14:16.952114+0000","last_active":"2026-03-10T10:14:42.281026+0000","last_peered":"2026-03-10T10:14:42.281026+0000","last_clean":"2026-03-10T10:14:42.281026+0000","last_became_active":"2026-03-10T10:14:16.951912+0000","last_became_peered":"2026-03-10T10:14:16.951912+0000","last_unstale":"2026-03-10T10:14:42.281026+0000","last_undegraded":"2026-03-10T10:14:42.281026+0000","last_fullsized":"2026-03-10T10:14:42.281026+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:24:21.428499+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1e","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385400+0000","last_change":"2026-03-10T10:14:12.950570+0000","last_active":"2026-03-10T10:14:42.385400+0000","last_peered":"2026-03-10T10:14:42.385400+0000","last_clean":"2026-03-10T10:14:42.385400+0000","last_became_active":"2026-03-10T10:14:12.945351+0000","last_became_peered":"2026-03-10T10:14:12.945351+0000","last_unstale":"2026-03-10T10:14:42.385400+0000","last_undegraded":"2026-03-10T10:14:42.385400+0000","last_fullsized":"2026-03-10T10:14:42.385400+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:52:17.682926+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"60'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993454+0000","last_change":"2026-03-10T10:14:14.949062+0000","last_active":"2026-03-10T10:14:42.993454+0000","last_peered":"2026-03-10T10:14:42.993454+0000","last_clean":"2026-03-10T10:14:42.993454+0000","last_became_active":"2026-03-10T10:14:14.947353+0000","last_became_peered":"2026-03-10T10:14:14.947353+0000","last_unstale":"2026-03-10T10:14:42.993454+0000","last_undegraded":"2026-03-10T10:14:42.993454+0000","last_fullsized":"2026-03-10T10:14:42.993454+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:20:32.731864+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282162+0000","last_change":"2026-03-10T10:14:16.959010+0000","last_active":"2026-03-10T10:14:42.282162+0000","last_peered":"2026-03-10T10:14:42.282162+0000","last_clean":"2026-03-10T10:14:42.282162+0000","last_became_active":"2026-03-10T10:14:16.958915+0000","last_became_peered":"2026-03-10T10:14:16.958915+0000","last_unstale":"2026-03-10T10:14:42.282162+0000","last_undegraded":"2026-03-10T10:14:42.282162+0000","last_fullsized":"2026-03-10T10:14:42.282162+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:47:57.570171+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.280591+0000","last_change":"2026-03-10T10:14:18.972649+0000","last_active":"2026-03-10T10:14:42.280591+0000","last_peered":"2026-03-10T10:14:42.280591+0000","last_clean":"2026-03-10T10:14:42.280591+0000","last_became_active":"2026-03-10T10:14:18.972217+0000","last_became_peered":"2026-03-10T10:14:18.972217+0000","last_unstale":"2026-03-10T10:14:42.280591+0000","last_undegraded":"2026-03-10T10:14:42.280591+0000","last_fullsized":"2026-03-10T10:14:42.280591+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:50:56.085011+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986798+0000","last_change":"2026-03-10T10:14:12.941700+0000","last_active":"2026-03-10T10:14:42.986798+0000","last_peered":"2026-03-10T10:14:42.986798+0000","last_clean":"2026-03-10T10:14:42.986798+0000","last_became_active":"2026-03-10T10:14:12.941210+0000","last_became_peered":"2026-03-10T10:14:12.941210+0000","last_unstale":"2026-03-10T10:14:42.986798+0000","last_undegraded":"2026-03-10T10:14:42.986798+0000","last_fullsized":"2026-03-10T10:14:42.986798+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:50:29.693383+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"60'15","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992269+0000","last_change":"2026-03-10T10:14:14.949466+0000","last_active":"2026-03-10T10:14:42.992269+0000","last_peered":"2026-03-10T10:14:42.992269+0000","last_clean":"2026-03-10T10:14:42.992269+0000","last_became_active":"2026-03-10T10:14:14.949390+0000","last_became_peered":"2026-03-10T10:14:14.949390+0000","last_unstale":"2026-03-10T10:14:42.992269+0000","last_undegraded":"2026-03-10T10:14:42.992269+0000","last_fullsized":"2026-03-10T10:14:42.992269+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:51:37.134797+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986823+0000","last_change":"2026-03-10T10:14:16.954963+0000","last_active":"2026-03-10T10:14:42.986823+0000","last_peered":"2026-03-10T10:14:42.986823+0000","last_clean":"2026-03-10T10:14:42.986823+0000","last_became_active":"2026-03-10T10:14:16.954873+0000","last_became_peered":"2026-03-10T10:14:16.954873+0000","last_unstale":"2026-03-10T10:14:42.986823+0000","last_undegraded":"2026-03-10T10:14:42.986823+0000","last_fullsized":"2026-03-10T10:14:42.986823+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:20:57.501364+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991088+0000","last_change":"2026-03-10T10:14:18.978561+0000","last_active":"2026-03-10T10:14:42.991088+0000","last_peered":"2026-03-10T10:14:42.991088+0000","last_clean":"2026-03-10T10:14:42.991088+0000","last_became_active":"2026-03-10T10:14:18.978483+0000","last_became_peered":"2026-03-10T10:14:18.978483+0000","last_unstale":"2026-03-10T10:14:42.991088+0000","last_undegraded":"2026-03-10T10:14:42.991088+0000","last_fullsized":"2026-03-10T10:14:42.991088+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:42:27.368238+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"53'1","reported_seq":43,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986710+0000","last_change":"2026-03-10T10:14:12.941543+0000","last_active":"2026-03-10T10:14:42.986710+0000","last_peered":"2026-03-10T10:14:42.986710+0000","last_clean":"2026-03-10T10:14:42.986710+0000","last_became_active":"2026-03-10T10:14:12.940992+0000","last_became_peered":"2026-03-10T10:14:12.940992+0000","last_unstale":"2026-03-10T10:14:42.986710+0000","last_undegraded":"2026-03-10T10:14:42.986710+0000","last_fullsized":"2026-03-10T10:14:42.986710+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:45:06.191731+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","version":"60'12","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991682+0000","last_change":"2026-03-10T10:14:14.949278+0000","last_active":"2026-03-10T10:14:42.991682+0000","last_peered":"2026-03-10T10:14:42.991682+0000","last_clean":"2026-03-10T10:14:42.991682+0000","last_became_active":"2026-03-10T10:14:14.948923+0000","last_became_peered":"2026-03-10T10:14:14.948923+0000","last_unstale":"2026-03-10T10:14:42.991682+0000","last_undegraded":"2026-03-10T10:14:42.991682+0000","last_fullsized":"2026-03-10T10:14:42.991682+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:02:17.323597+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991637+0000","last_change":"2026-03-10T10:14:16.965282+0000","last_active":"2026-03-10T10:14:42.991637+0000","last_peered":"2026-03-10T10:14:42.991637+0000","last_clean":"2026-03-10T10:14:42.991637+0000","last_became_active":"2026-03-10T10:14:16.965202+0000","last_became_peered":"2026-03-10T10:14:16.965202+0000","last_unstale":"2026-03-10T10:14:42.991637+0000","last_undegraded":"2026-03-10T10:14:42.991637+0000","last_fullsized":"2026-03-10T10:14:42.991637+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:37:49.368095+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992832+0000","last_change":"2026-03-10T10:14:18.961784+0000","last_active":"2026-03-10T10:14:42.992832+0000","last_peered":"2026-03-10T10:14:42.992832+0000","last_clean":"2026-03-10T10:14:42.992832+0000","last_became_active":"2026-03-10T10:14:18.961123+0000","last_became_peered":"2026-03-10T10:14:18.961123+0000","last_unstale":"2026-03-10T10:14:42.992832+0000","last_undegraded":"2026-03-10T10:14:42.992832+0000","last_fullsized":"2026-03-10T10:14:42.992832+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:23:28.011814+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a","version":"60'19","reported_seq":62,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992029+0000","last_change":"2026-03-10T10:14:14.949157+0000","last_active":"2026-03-10T10:14:42.992029+0000","last_peered":"2026-03-10T10:14:42.992029+0000","last_clean":"2026-03-10T10:14:42.992029+0000","last_became_active":"2026-03-10T10:14:14.949031+0000","last_became_peered":"2026-03-10T10:14:14.949031+0000","last_unstale":"2026-03-10T10:14:42.992029+0000","last_undegraded":"2026-03-10T10:14:42.992029+0000","last_fullsized":"2026-03-10T10:14:42.992029+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:21:24.235646+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986606+0000","last_change":"2026-03-10T10:14:12.941438+0000","last_active":"2026-03-10T10:14:42.986606+0000","last_peered":"2026-03-10T10:14:42.986606+0000","last_clean":"2026-03-10T10:14:42.986606+0000","last_became_active":"2026-03-10T10:14:12.940895+0000","last_became_peered":"2026-03-10T10:14:12.940895+0000","last_unstale":"2026-03-10T10:14:42.986606+0000","last_undegraded":"2026-03-10T10:14:42.986606+0000","last_fullsized":"2026-03-10T10:14:42.986606+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:16:13.952725+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282041+0000","last_change":"2026-03-10T10:14:16.955478+0000","last_active":"2026-03-10T10:14:42.282041+0000","last_peered":"2026-03-10T10:14:42.282041+0000","last_clean":"2026-03-10T10:14:42.282041+0000","last_became_active":"2026-03-10T10:14:16.955324+0000","last_became_peered":"2026-03-10T10:14:16.955324+0000","last_unstale":"2026-03-10T10:14:42.282041+0000","last_undegraded":"2026-03-10T10:14:42.282041+0000","last_fullsized":"2026-03-10T10:14:42.282041+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:25:48.499536+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994311+0000","last_change":"2026-03-10T10:14:18.970021+0000","last_active":"2026-03-10T10:14:42.994311+0000","last_peered":"2026-03-10T10:14:42.994311+0000","last_clean":"2026-03-10T10:14:42.994311+0000","last_became_active":"2026-03-10T10:14:18.969925+0000","last_became_peered":"2026-03-10T10:14:18.969925+0000","last_unstale":"2026-03-10T10:14:42.994311+0000","last_undegraded":"2026-03-10T10:14:42.994311+0000","last_fullsized":"2026-03-10T10:14:42.994311+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:30:12.335118+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"60'9","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385264+0000","last_change":"2026-03-10T10:14:14.955946+0000","last_active":"2026-03-10T10:14:42.385264+0000","last_peered":"2026-03-10T10:14:42.385264+0000","last_clean":"2026-03-10T10:14:42.385264+0000","last_became_active":"2026-03-10T10:14:14.955239+0000","last_became_peered":"2026-03-10T10:14:14.955239+0000","last_unstale":"2026-03-10T10:14:42.385264+0000","last_undegraded":"2026-03-10T10:14:42.385264+0000","last_fullsized":"2026-03-10T10:14:42.385264+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:04:23.905101+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281304+0000","last_change":"2026-03-10T10:14:12.946812+0000","last_active":"2026-03-10T10:14:42.281304+0000","last_peered":"2026-03-10T10:14:42.281304+0000","last_clean":"2026-03-10T10:14:42.281304+0000","last_became_active":"2026-03-10T10:14:12.946729+0000","last_became_peered":"2026-03-10T10:14:12.946729+0000","last_unstale":"2026-03-10T10:14:42.281304+0000","last_undegraded":"2026-03-10T10:14:42.281304+0000","last_fullsized":"2026-03-10T10:14:42.281304+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:26:10.694206+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d","version":"60'11","reported_seq":53,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:22.993088+0000","last_change":"2026-03-10T10:14:16.967875+0000","last_active":"2026-03-10T10:15:22.993088+0000","last_peered":"2026-03-10T10:15:22.993088+0000","last_clean":"2026-03-10T10:15:22.993088+0000","last_became_active":"2026-03-10T10:14:16.967673+0000","last_became_peered":"2026-03-10T10:14:16.967673+0000","last_unstale":"2026-03-10T10:15:22.993088+0000","last_undegraded":"2026-03-10T10:15:22.993088+0000","last_fullsized":"2026-03-10T10:15:22.993088+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:04:37.191568+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282420+0000","last_change":"2026-03-10T10:14:18.954832+0000","last_active":"2026-03-10T10:14:42.282420+0000","last_peered":"2026-03-10T10:14:42.282420+0000","last_clean":"2026-03-10T10:14:42.282420+0000","last_became_active":"2026-03-10T10:14:18.954684+0000","last_became_peered":"2026-03-10T10:14:18.954684+0000","last_unstale":"2026-03-10T10:14:42.282420+0000","last_undegraded":"2026-03-10T10:14:42.282420+0000","last_fullsized":"2026-03-10T10:14:42.282420+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:08:44.361192+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"60'15","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385314+0000","last_change":"2026-03-10T10:14:14.955159+0000","last_active":"2026-03-10T10:14:42.385314+0000","last_peered":"2026-03-10T10:14:42.385314+0000","last_clean":"2026-03-10T10:14:42.385314+0000","last_became_active":"2026-03-10T10:14:14.954301+0000","last_became_peered":"2026-03-10T10:14:14.954301+0000","last_unstale":"2026-03-10T10:14:42.385314+0000","last_undegraded":"2026-03-10T10:14:42.385314+0000","last_fullsized":"2026-03-10T10:14:42.385314+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:41:00.860635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.9","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281345+0000","last_change":"2026-03-10T10:14:12.947099+0000","last_active":"2026-03-10T10:14:42.281345+0000","last_peered":"2026-03-10T10:14:42.281345+0000","last_clean":"2026-03-10T10:14:42.281345+0000","last_became_active":"2026-03-10T10:14:12.947024+0000","last_became_peered":"2026-03-10T10:14:12.947024+0000","last_unstale":"2026-03-10T10:14:42.281345+0000","last_undegraded":"2026-03-10T10:14:42.281345+0000","last_fullsized":"2026-03-10T10:14:42.281345+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:54:42.251553+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"60'11","reported_seq":52,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:22.991096+0000","last_change":"2026-03-10T10:14:16.958269+0000","last_active":"2026-03-10T10:15:22.991096+0000","last_peered":"2026-03-10T10:15:22.991096+0000","last_clean":"2026-03-10T10:15:22.991096+0000","last_became_active":"2026-03-10T10:14:16.958204+0000","last_became_peered":"2026-03-10T10:14:16.958204+0000","last_unstale":"2026-03-10T10:15:22.991096+0000","last_undegraded":"2026-03-10T10:15:22.991096+0000","last_fullsized":"2026-03-10T10:15:22.991096+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:03:15.314313+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991218+0000","last_change":"2026-03-10T10:14:18.973526+0000","last_active":"2026-03-10T10:14:42.991218+0000","last_peered":"2026-03-10T10:14:42.991218+0000","last_clean":"2026-03-10T10:14:42.991218+0000","last_became_active":"2026-03-10T10:14:18.973262+0000","last_became_peered":"2026-03-10T10:14:18.973262+0000","last_unstale":"2026-03-10T10:14:42.991218+0000","last_undegraded":"2026-03-10T10:14:42.991218+0000","last_fullsized":"2026-03-10T10:14:42.991218+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:00:13.308609+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","version":"60'12","reported_seq":53,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.280823+0000","last_change":"2026-03-10T10:14:14.947306+0000","last_active":"2026-03-10T10:14:42.280823+0000","last_peered":"2026-03-10T10:14:42.280823+0000","last_clean":"2026-03-10T10:14:42.280823+0000","last_became_active":"2026-03-10T10:14:14.942393+0000","last_became_peered":"2026-03-10T10:14:14.942393+0000","last_unstale":"2026-03-10T10:14:42.280823+0000","last_undegraded":"2026-03-10T10:14:42.280823+0000","last_fullsized":"2026-03-10T10:14:42.280823+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:02:08.934268+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986562+0000","last_change":"2026-03-10T10:14:12.941382+0000","last_active":"2026-03-10T10:14:42.986562+0000","last_peered":"2026-03-10T10:14:42.986562+0000","last_clean":"2026-03-10T10:14:42.986562+0000","last_became_active":"2026-03-10T10:14:12.940815+0000","last_became_peered":"2026-03-10T10:14:12.940815+0000","last_unstale":"2026-03-10T10:14:42.986562+0000","last_undegraded":"2026-03-10T10:14:42.986562+0000","last_fullsized":"2026-03-10T10:14:42.986562+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:53:16.169201+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992437+0000","last_change":"2026-03-10T10:14:16.964328+0000","last_active":"2026-03-10T10:14:42.992437+0000","last_peered":"2026-03-10T10:14:42.992437+0000","last_clean":"2026-03-10T10:14:42.992437+0000","last_became_active":"2026-03-10T10:14:16.964244+0000","last_became_peered":"2026-03-10T10:14:16.964244+0000","last_unstale":"2026-03-10T10:14:42.992437+0000","last_undegraded":"2026-03-10T10:14:42.992437+0000","last_fullsized":"2026-03-10T10:14:42.992437+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:31:16.904187+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384689+0000","last_change":"2026-03-10T10:14:19.317024+0000","last_active":"2026-03-10T10:14:42.384689+0000","last_peered":"2026-03-10T10:14:42.384689+0000","last_clean":"2026-03-10T10:14:42.384689+0000","last_became_active":"2026-03-10T10:14:19.316813+0000","last_became_peered":"2026-03-10T10:14:19.316813+0000","last_unstale":"2026-03-10T10:14:42.384689+0000","last_undegraded":"2026-03-10T10:14:42.384689+0000","last_fullsized":"2026-03-10T10:14:42.384689+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:16:04.811046+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"60'12","reported_seq":49,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993542+0000","last_change":"2026-03-10T10:14:14.960776+0000","last_active":"2026-03-10T10:14:42.993542+0000","last_peered":"2026-03-10T10:14:42.993542+0000","last_clean":"2026-03-10T10:14:42.993542+0000","last_became_active":"2026-03-10T10:14:14.960394+0000","last_became_peered":"2026-03-10T10:14:14.960394+0000","last_unstale":"2026-03-10T10:14:42.993542+0000","last_undegraded":"2026-03-10T10:14:42.993542+0000","last_fullsized":"2026-03-10T10:14:42.993542+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:09:43.154149+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992200+0000","last_change":"2026-03-10T10:14:12.942165+0000","last_active":"2026-03-10T10:14:42.992200+0000","last_peered":"2026-03-10T10:14:42.992200+0000","last_clean":"2026-03-10T10:14:42.992200+0000","last_became_active":"2026-03-10T10:14:12.941947+0000","last_became_peered":"2026-03-10T10:14:12.941947+0000","last_unstale":"2026-03-10T10:14:42.992200+0000","last_undegraded":"2026-03-10T10:14:42.992200+0000","last_fullsized":"2026-03-10T10:14:42.992200+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:05:19.126898+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1","version":"60'1","reported_seq":36,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282160+0000","last_change":"2026-03-10T10:14:22.015323+0000","last_active":"2026-03-10T10:14:42.282160+0000","last_peered":"2026-03-10T10:14:42.282160+0000","last_clean":"2026-03-10T10:14:42.282160+0000","last_became_active":"2026-03-10T10:14:15.942863+0000","last_became_peered":"2026-03-10T10:14:15.942863+0000","last_unstale":"2026-03-10T10:14:42.282160+0000","last_undegraded":"2026-03-10T10:14:42.282160+0000","last_fullsized":"2026-03-10T10:14:42.282160+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_clean_scrub_stamp":"2026-03-10T10:14:14.926552+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:43:59.191092+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00026765900000000001,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"60'11","reported_seq":53,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:22.992669+0000","last_change":"2026-03-10T10:14:16.963181+0000","last_active":"2026-03-10T10:15:22.992669+0000","last_peered":"2026-03-10T10:15:22.992669+0000","last_clean":"2026-03-10T10:15:22.992669+0000","last_became_active":"2026-03-10T10:14:16.963039+0000","last_became_peered":"2026-03-10T10:14:16.963039+0000","last_unstale":"2026-03-10T10:15:22.992669+0000","last_undegraded":"2026-03-10T10:15:22.992669+0000","last_fullsized":"2026-03-10T10:15:22.992669+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:46:11.908855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986473+0000","last_change":"2026-03-10T10:14:19.318858+0000","last_active":"2026-03-10T10:14:42.986473+0000","last_peered":"2026-03-10T10:14:42.986473+0000","last_clean":"2026-03-10T10:14:42.986473+0000","last_became_active":"2026-03-10T10:14:19.318255+0000","last_became_peered":"2026-03-10T10:14:19.318255+0000","last_unstale":"2026-03-10T10:14:42.986473+0000","last_undegraded":"2026-03-10T10:14:42.986473+0000","last_fullsized":"2026-03-10T10:14:42.986473+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:11:41.370247+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.7","version":"60'13","reported_seq":58,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384934+0000","last_change":"2026-03-10T10:14:14.955870+0000","last_active":"2026-03-10T10:14:42.384934+0000","last_peered":"2026-03-10T10:14:42.384934+0000","last_clean":"2026-03-10T10:14:42.384934+0000","last_became_active":"2026-03-10T10:14:14.955610+0000","last_became_peered":"2026-03-10T10:14:14.955610+0000","last_unstale":"2026-03-10T10:14:42.384934+0000","last_undegraded":"2026-03-10T10:14:42.384934+0000","last_fullsized":"2026-03-10T10:14:42.384934+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:47:50.861777+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"53'1","reported_seq":36,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281390+0000","last_change":"2026-03-10T10:14:12.926530+0000","last_active":"2026-03-10T10:14:42.281390+0000","last_peered":"2026-03-10T10:14:42.281390+0000","last_clean":"2026-03-10T10:14:42.281390+0000","last_became_active":"2026-03-10T10:14:12.926445+0000","last_became_peered":"2026-03-10T10:14:12.926445+0000","last_unstale":"2026-03-10T10:14:42.281390+0000","last_undegraded":"2026-03-10T10:14:42.281390+0000","last_fullsized":"2026-03-10T10:14:42.281390+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:18:14.210414+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"62'5","reported_seq":105,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.178242+0000","last_change":"2026-03-10T10:14:22.472330+0000","last_active":"2026-03-10T10:15:27.178242+0000","last_peered":"2026-03-10T10:15:27.178242+0000","last_clean":"2026-03-10T10:15:27.178242+0000","last_became_active":"2026-03-10T10:14:15.952486+0000","last_became_peered":"2026-03-10T10:14:15.952486+0000","last_unstale":"2026-03-10T10:15:27.178242+0000","last_undegraded":"2026-03-10T10:15:27.178242+0000","last_fullsized":"2026-03-10T10:15:27.178242+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_clean_scrub_stamp":"2026-03-10T10:14:14.926552+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:39:05.926790+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.001322606,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":66,"num_read_kb":61,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":26,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281874+0000","last_change":"2026-03-10T10:14:16.952046+0000","last_active":"2026-03-10T10:14:42.281874+0000","last_peered":"2026-03-10T10:14:42.281874+0000","last_clean":"2026-03-10T10:14:42.281874+0000","last_became_active":"2026-03-10T10:14:16.951818+0000","last_became_peered":"2026-03-10T10:14:16.951818+0000","last_unstale":"2026-03-10T10:14:42.281874+0000","last_undegraded":"2026-03-10T10:14:42.281874+0000","last_fullsized":"2026-03-10T10:14:42.281874+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:30:07.603418+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281869+0000","last_change":"2026-03-10T10:14:18.970338+0000","last_active":"2026-03-10T10:14:42.281869+0000","last_peered":"2026-03-10T10:14:42.281869+0000","last_clean":"2026-03-10T10:14:42.281869+0000","last_became_active":"2026-03-10T10:14:18.969492+0000","last_became_peered":"2026-03-10T10:14:18.969492+0000","last_unstale":"2026-03-10T10:14:42.281869+0000","last_undegraded":"2026-03-10T10:14:42.281869+0000","last_fullsized":"2026-03-10T10:14:42.281869+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:03:19.247888+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"60'30","reported_seq":97,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:22.992428+0000","last_change":"2026-03-10T10:14:14.944853+0000","last_active":"2026-03-10T10:15:22.992428+0000","last_peered":"2026-03-10T10:15:22.992428+0000","last_clean":"2026-03-10T10:15:22.992428+0000","last_became_active":"2026-03-10T10:14:14.944618+0000","last_became_peered":"2026-03-10T10:14:14.944618+0000","last_unstale":"2026-03-10T10:15:22.992428+0000","last_undegraded":"2026-03-10T10:15:22.992428+0000","last_fullsized":"2026-03-10T10:15:22.992428+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:03:21.346423+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.5","version":"53'1","reported_seq":43,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986532+0000","last_change":"2026-03-10T10:14:12.941298+0000","last_active":"2026-03-10T10:14:42.986532+0000","last_peered":"2026-03-10T10:14:42.986532+0000","last_clean":"2026-03-10T10:14:42.986532+0000","last_became_active":"2026-03-10T10:14:12.940714+0000","last_became_peered":"2026-03-10T10:14:12.940714+0000","last_unstale":"2026-03-10T10:14:42.986532+0000","last_undegraded":"2026-03-10T10:14:42.986532+0000","last_fullsized":"2026-03-10T10:14:42.986532+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:38:59.576851+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992459+0000","last_change":"2026-03-10T10:14:16.955779+0000","last_active":"2026-03-10T10:14:42.992459+0000","last_peered":"2026-03-10T10:14:42.992459+0000","last_clean":"2026-03-10T10:14:42.992459+0000","last_became_active":"2026-03-10T10:14:16.955647+0000","last_became_peered":"2026-03-10T10:14:16.955647+0000","last_unstale":"2026-03-10T10:14:42.992459+0000","last_undegraded":"2026-03-10T10:14:42.992459+0000","last_fullsized":"2026-03-10T10:14:42.992459+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:34:59.151202+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.280855+0000","last_change":"2026-03-10T10:14:19.315139+0000","last_active":"2026-03-10T10:14:42.280855+0000","last_peered":"2026-03-10T10:14:42.280855+0000","last_clean":"2026-03-10T10:14:42.280855+0000","last_became_active":"2026-03-10T10:14:19.315010+0000","last_became_peered":"2026-03-10T10:14:19.315010+0000","last_unstale":"2026-03-10T10:14:42.280855+0000","last_undegraded":"2026-03-10T10:14:42.280855+0000","last_fullsized":"2026-03-10T10:14:42.280855+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:27:00.038117+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"60'16","reported_seq":69,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:22.991649+0000","last_change":"2026-03-10T10:14:14.950231+0000","last_active":"2026-03-10T10:15:22.991649+0000","last_peered":"2026-03-10T10:15:22.991649+0000","last_clean":"2026-03-10T10:15:22.991649+0000","last_became_active":"2026-03-10T10:14:14.950150+0000","last_became_peered":"2026-03-10T10:14:14.950150+0000","last_unstale":"2026-03-10T10:15:22.991649+0000","last_undegraded":"2026-03-10T10:15:22.991649+0000","last_fullsized":"2026-03-10T10:15:22.991649+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:32:13.734797+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":41,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281453+0000","last_change":"2026-03-10T10:14:31.273016+0000","last_active":"2026-03-10T10:14:42.281453+0000","last_peered":"2026-03-10T10:14:42.281453+0000","last_clean":"2026-03-10T10:14:42.281453+0000","last_became_active":"2026-03-10T10:14:31.272884+0000","last_became_peered":"2026-03-10T10:14:31.272884+0000","last_unstale":"2026-03-10T10:14:42.281453+0000","last_undegraded":"2026-03-10T10:14:42.281453+0000","last_fullsized":"2026-03-10T10:14:42.281453+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:48:36.556896+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,2],"acting":[1,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"62'2","reported_seq":38,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281478+0000","last_change":"2026-03-10T10:14:22.013588+0000","last_active":"2026-03-10T10:14:42.281478+0000","last_peered":"2026-03-10T10:14:42.281478+0000","last_clean":"2026-03-10T10:14:42.281478+0000","last_became_active":"2026-03-10T10:14:15.948858+0000","last_became_peered":"2026-03-10T10:14:15.948858+0000","last_unstale":"2026-03-10T10:14:42.281478+0000","last_undegraded":"2026-03-10T10:14:42.281478+0000","last_fullsized":"2026-03-10T10:14:42.281478+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_clean_scrub_stamp":"2026-03-10T10:14:14.926552+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:31:29.091561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000606676,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.3","version":"60'11","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:22.992843+0000","last_change":"2026-03-10T10:14:16.958647+0000","last_active":"2026-03-10T10:15:22.992843+0000","last_peered":"2026-03-10T10:15:22.992843+0000","last_clean":"2026-03-10T10:15:22.992843+0000","last_became_active":"2026-03-10T10:14:16.958542+0000","last_became_peered":"2026-03-10T10:14:16.958542+0000","last_unstale":"2026-03-10T10:15:22.992843+0000","last_undegraded":"2026-03-10T10:15:22.992843+0000","last_fullsized":"2026-03-10T10:15:22.992843+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:34:30.260681+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992702+0000","last_change":"2026-03-10T10:14:18.967980+0000","last_active":"2026-03-10T10:14:42.992702+0000","last_peered":"2026-03-10T10:14:42.992702+0000","last_clean":"2026-03-10T10:14:42.992702+0000","last_became_active":"2026-03-10T10:14:18.967899+0000","last_became_peered":"2026-03-10T10:14:18.967899+0000","last_unstale":"2026-03-10T10:14:42.992702+0000","last_undegraded":"2026-03-10T10:14:42.992702+0000","last_fullsized":"2026-03-10T10:14:42.992702+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:57:04.171484+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"60'19","reported_seq":66,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281182+0000","last_change":"2026-03-10T10:14:14.958879+0000","last_active":"2026-03-10T10:14:42.281182+0000","last_peered":"2026-03-10T10:14:42.281182+0000","last_clean":"2026-03-10T10:14:42.281182+0000","last_became_active":"2026-03-10T10:14:14.958766+0000","last_became_peered":"2026-03-10T10:14:14.958766+0000","last_unstale":"2026-03-10T10:14:42.281182+0000","last_undegraded":"2026-03-10T10:14:42.281182+0000","last_fullsized":"2026-03-10T10:14:42.281182+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:42:35.783489+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992202+0000","last_change":"2026-03-10T10:14:12.939028+0000","last_active":"2026-03-10T10:14:42.992202+0000","last_peered":"2026-03-10T10:14:42.992202+0000","last_clean":"2026-03-10T10:14:42.992202+0000","last_became_active":"2026-03-10T10:14:12.938815+0000","last_became_peered":"2026-03-10T10:14:12.938815+0000","last_unstale":"2026-03-10T10:14:42.992202+0000","last_undegraded":"2026-03-10T10:14:42.992202+0000","last_fullsized":"2026-03-10T10:14:42.992202+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:32:28.599259+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993148+0000","last_change":"2026-03-10T10:14:16.965697+0000","last_active":"2026-03-10T10:14:42.993148+0000","last_peered":"2026-03-10T10:14:42.993148+0000","last_clean":"2026-03-10T10:14:42.993148+0000","last_became_active":"2026-03-10T10:14:16.965499+0000","last_became_peered":"2026-03-10T10:14:16.965499+0000","last_unstale":"2026-03-10T10:14:42.993148+0000","last_undegraded":"2026-03-10T10:14:42.993148+0000","last_fullsized":"2026-03-10T10:14:42.993148+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:01:22.833682+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384739+0000","last_change":"2026-03-10T10:14:18.977453+0000","last_active":"2026-03-10T10:14:42.384739+0000","last_peered":"2026-03-10T10:14:42.384739+0000","last_clean":"2026-03-10T10:14:42.384739+0000","last_became_active":"2026-03-10T10:14:18.977324+0000","last_became_peered":"2026-03-10T10:14:18.977324+0000","last_unstale":"2026-03-10T10:14:42.384739+0000","last_undegraded":"2026-03-10T10:14:42.384739+0000","last_fullsized":"2026-03-10T10:14:42.384739+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:46:31.006057+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","version":"60'18","reported_seq":63,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.280955+0000","last_change":"2026-03-10T10:14:14.944195+0000","last_active":"2026-03-10T10:14:42.280955+0000","last_peered":"2026-03-10T10:14:42.280955+0000","last_clean":"2026-03-10T10:14:42.280955+0000","last_became_active":"2026-03-10T10:14:14.944112+0000","last_became_peered":"2026-03-10T10:14:14.944112+0000","last_unstale":"2026-03-10T10:14:42.280955+0000","last_undegraded":"2026-03-10T10:14:42.280955+0000","last_fullsized":"2026-03-10T10:14:42.280955+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:44:23.197174+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994741+0000","last_change":"2026-03-10T10:14:12.946155+0000","last_active":"2026-03-10T10:14:42.994741+0000","last_peered":"2026-03-10T10:14:42.994741+0000","last_clean":"2026-03-10T10:14:42.994741+0000","last_became_active":"2026-03-10T10:14:12.946062+0000","last_became_peered":"2026-03-10T10:14:12.946062+0000","last_unstale":"2026-03-10T10:14:42.994741+0000","last_undegraded":"2026-03-10T10:14:42.994741+0000","last_fullsized":"2026-03-10T10:14:42.994741+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:00:36.052036+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994782+0000","last_change":"2026-03-10T10:14:16.967851+0000","last_active":"2026-03-10T10:14:42.994782+0000","last_peered":"2026-03-10T10:14:42.994782+0000","last_clean":"2026-03-10T10:14:42.994782+0000","last_became_active":"2026-03-10T10:14:16.967761+0000","last_became_peered":"2026-03-10T10:14:16.967761+0000","last_unstale":"2026-03-10T10:14:42.994782+0000","last_undegraded":"2026-03-10T10:14:42.994782+0000","last_fullsized":"2026-03-10T10:14:42.994782+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:21:26.995530+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.987234+0000","last_change":"2026-03-10T10:14:19.318784+0000","last_active":"2026-03-10T10:14:42.987234+0000","last_peered":"2026-03-10T10:14:42.987234+0000","last_clean":"2026-03-10T10:14:42.987234+0000","last_became_active":"2026-03-10T10:14:19.317929+0000","last_became_peered":"2026-03-10T10:14:19.317929+0000","last_unstale":"2026-03-10T10:14:42.987234+0000","last_undegraded":"2026-03-10T10:14:42.987234+0000","last_fullsized":"2026-03-10T10:14:42.987234+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:55:58.247403+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"60'14","reported_seq":52,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993404+0000","last_change":"2026-03-10T10:14:14.960611+0000","last_active":"2026-03-10T10:14:42.993404+0000","last_peered":"2026-03-10T10:14:42.993404+0000","last_clean":"2026-03-10T10:14:42.993404+0000","last_became_active":"2026-03-10T10:14:14.960268+0000","last_became_peered":"2026-03-10T10:14:14.960268+0000","last_unstale":"2026-03-10T10:14:42.993404+0000","last_undegraded":"2026-03-10T10:14:42.993404+0000","last_fullsized":"2026-03-10T10:14:42.993404+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:50:37.268853+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986774+0000","last_change":"2026-03-10T10:14:12.941625+0000","last_active":"2026-03-10T10:14:42.986774+0000","last_peered":"2026-03-10T10:14:42.986774+0000","last_clean":"2026-03-10T10:14:42.986774+0000","last_became_active":"2026-03-10T10:14:12.941099+0000","last_became_peered":"2026-03-10T10:14:12.941099+0000","last_unstale":"2026-03-10T10:14:42.986774+0000","last_undegraded":"2026-03-10T10:14:42.986774+0000","last_fullsized":"2026-03-10T10:14:42.986774+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:05:02.429935+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992309+0000","last_change":"2026-03-10T10:14:16.956321+0000","last_active":"2026-03-10T10:14:42.992309+0000","last_peered":"2026-03-10T10:14:42.992309+0000","last_clean":"2026-03-10T10:14:42.992309+0000","last_became_active":"2026-03-10T10:14:16.956043+0000","last_became_peered":"2026-03-10T10:14:16.956043+0000","last_unstale":"2026-03-10T10:14:42.992309+0000","last_undegraded":"2026-03-10T10:14:42.992309+0000","last_fullsized":"2026-03-10T10:14:42.992309+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:47:13.633995+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281144+0000","last_change":"2026-03-10T10:14:18.972971+0000","last_active":"2026-03-10T10:14:42.281144+0000","last_peered":"2026-03-10T10:14:42.281144+0000","last_clean":"2026-03-10T10:14:42.281144+0000","last_became_active":"2026-03-10T10:14:18.972868+0000","last_became_peered":"2026-03-10T10:14:18.972868+0000","last_unstale":"2026-03-10T10:14:42.281144+0000","last_undegraded":"2026-03-10T10:14:42.281144+0000","last_fullsized":"2026-03-10T10:14:42.281144+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:05:19.845157+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"60'10","reported_seq":46,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385654+0000","last_change":"2026-03-10T10:14:14.956516+0000","last_active":"2026-03-10T10:14:42.385654+0000","last_peered":"2026-03-10T10:14:42.385654+0000","last_clean":"2026-03-10T10:14:42.385654+0000","last_became_active":"2026-03-10T10:14:14.956186+0000","last_became_peered":"2026-03-10T10:14:14.956186+0000","last_unstale":"2026-03-10T10:14:42.385654+0000","last_undegraded":"2026-03-10T10:14:42.385654+0000","last_fullsized":"2026-03-10T10:14:42.385654+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:59:12.794770+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991734+0000","last_change":"2026-03-10T10:14:12.950942+0000","last_active":"2026-03-10T10:14:42.991734+0000","last_peered":"2026-03-10T10:14:42.991734+0000","last_clean":"2026-03-10T10:14:42.991734+0000","last_became_active":"2026-03-10T10:14:12.950754+0000","last_became_peered":"2026-03-10T10:14:12.950754+0000","last_unstale":"2026-03-10T10:14:42.991734+0000","last_undegraded":"2026-03-10T10:14:42.991734+0000","last_fullsized":"2026-03-10T10:14:42.991734+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:40:30.632758+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"65'39","reported_seq":70,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:44.358091+0000","last_change":"2026-03-10T10:13:52.067583+0000","last_active":"2026-03-10T10:14:44.358091+0000","last_peered":"2026-03-10T10:14:44.358091+0000","last_clean":"2026-03-10T10:14:44.358091+0000","last_became_active":"2026-03-10T10:13:51.757788+0000","last_became_peered":"2026-03-10T10:13:51.757788+0000","last_unstale":"2026-03-10T10:14:44.358091+0000","last_undegraded":"2026-03-10T10:14:44.358091+0000","last_fullsized":"2026-03-10T10:14:44.358091+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:11:04.132205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:11:04.132205+0000","last_clean_scrub_stamp":"2026-03-10T10:11:04.132205+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:59:42.231015+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986685+0000","last_change":"2026-03-10T10:14:16.962305+0000","last_active":"2026-03-10T10:14:42.986685+0000","last_peered":"2026-03-10T10:14:42.986685+0000","last_clean":"2026-03-10T10:14:42.986685+0000","last_became_active":"2026-03-10T10:14:16.962179+0000","last_became_peered":"2026-03-10T10:14:16.962179+0000","last_unstale":"2026-03-10T10:14:42.986685+0000","last_undegraded":"2026-03-10T10:14:42.986685+0000","last_fullsized":"2026-03-10T10:14:42.986685+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:22:08.289700+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991169+0000","last_change":"2026-03-10T10:14:18.978101+0000","last_active":"2026-03-10T10:14:42.991169+0000","last_peered":"2026-03-10T10:14:42.991169+0000","last_clean":"2026-03-10T10:14:42.991169+0000","last_became_active":"2026-03-10T10:14:18.977737+0000","last_became_peered":"2026-03-10T10:14:18.977737+0000","last_unstale":"2026-03-10T10:14:42.991169+0000","last_undegraded":"2026-03-10T10:14:42.991169+0000","last_fullsized":"2026-03-10T10:14:42.991169+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:07:05.313991+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"60'17","reported_seq":59,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986230+0000","last_change":"2026-03-10T10:14:14.945052+0000","last_active":"2026-03-10T10:14:42.986230+0000","last_peered":"2026-03-10T10:14:42.986230+0000","last_clean":"2026-03-10T10:14:42.986230+0000","last_became_active":"2026-03-10T10:14:14.944793+0000","last_became_peered":"2026-03-10T10:14:14.944793+0000","last_unstale":"2026-03-10T10:14:42.986230+0000","last_undegraded":"2026-03-10T10:14:42.986230+0000","last_fullsized":"2026-03-10T10:14:42.986230+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:39:17.289692+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994685+0000","last_change":"2026-03-10T10:14:12.937665+0000","last_active":"2026-03-10T10:14:42.994685+0000","last_peered":"2026-03-10T10:14:42.994685+0000","last_clean":"2026-03-10T10:14:42.994685+0000","last_became_active":"2026-03-10T10:14:12.937571+0000","last_became_peered":"2026-03-10T10:14:12.937571+0000","last_unstale":"2026-03-10T10:14:42.994685+0000","last_undegraded":"2026-03-10T10:14:42.994685+0000","last_fullsized":"2026-03-10T10:14:42.994685+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:01:05.664905+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994696+0000","last_change":"2026-03-10T10:14:16.967783+0000","last_active":"2026-03-10T10:14:42.994696+0000","last_peered":"2026-03-10T10:14:42.994696+0000","last_clean":"2026-03-10T10:14:42.994696+0000","last_became_active":"2026-03-10T10:14:16.967592+0000","last_became_peered":"2026-03-10T10:14:16.967592+0000","last_unstale":"2026-03-10T10:14:42.994696+0000","last_undegraded":"2026-03-10T10:14:42.994696+0000","last_fullsized":"2026-03-10T10:14:42.994696+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:56:12.053751+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"60'1","reported_seq":24,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986093+0000","last_change":"2026-03-10T10:14:18.969721+0000","last_active":"2026-03-10T10:14:42.986093+0000","last_peered":"2026-03-10T10:14:42.986093+0000","last_clean":"2026-03-10T10:14:42.986093+0000","last_became_active":"2026-03-10T10:14:18.969642+0000","last_became_peered":"2026-03-10T10:14:18.969642+0000","last_unstale":"2026-03-10T10:14:42.986093+0000","last_undegraded":"2026-03-10T10:14:42.986093+0000","last_fullsized":"2026-03-10T10:14:42.986093+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:41:55.774753+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"60'10","reported_seq":46,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992229+0000","last_change":"2026-03-10T10:14:14.949789+0000","last_active":"2026-03-10T10:14:42.992229+0000","last_peered":"2026-03-10T10:14:42.992229+0000","last_clean":"2026-03-10T10:14:42.992229+0000","last_became_active":"2026-03-10T10:14:14.949715+0000","last_became_peered":"2026-03-10T10:14:14.949715+0000","last_unstale":"2026-03-10T10:14:42.992229+0000","last_undegraded":"2026-03-10T10:14:42.992229+0000","last_fullsized":"2026-03-10T10:14:42.992229+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:51:04.661751+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281258+0000","last_change":"2026-03-10T10:14:12.943045+0000","last_active":"2026-03-10T10:14:42.281258+0000","last_peered":"2026-03-10T10:14:42.281258+0000","last_clean":"2026-03-10T10:14:42.281258+0000","last_became_active":"2026-03-10T10:14:12.942958+0000","last_became_peered":"2026-03-10T10:14:12.942958+0000","last_unstale":"2026-03-10T10:14:42.281258+0000","last_undegraded":"2026-03-10T10:14:42.281258+0000","last_fullsized":"2026-03-10T10:14:42.281258+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:58:58.899346+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.995041+0000","last_change":"2026-03-10T10:14:16.973647+0000","last_active":"2026-03-10T10:14:42.995041+0000","last_peered":"2026-03-10T10:14:42.995041+0000","last_clean":"2026-03-10T10:14:42.995041+0000","last_became_active":"2026-03-10T10:14:16.973522+0000","last_became_peered":"2026-03-10T10:14:16.973522+0000","last_unstale":"2026-03-10T10:14:42.995041+0000","last_undegraded":"2026-03-10T10:14:42.995041+0000","last_fullsized":"2026-03-10T10:14:42.995041+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:07:54.494874+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992874+0000","last_change":"2026-03-10T10:14:18.961920+0000","last_active":"2026-03-10T10:14:42.992874+0000","last_peered":"2026-03-10T10:14:42.992874+0000","last_clean":"2026-03-10T10:14:42.992874+0000","last_became_active":"2026-03-10T10:14:18.961205+0000","last_became_peered":"2026-03-10T10:14:18.961205+0000","last_unstale":"2026-03-10T10:14:42.992874+0000","last_undegraded":"2026-03-10T10:14:42.992874+0000","last_fullsized":"2026-03-10T10:14:42.992874+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:16:05.935516+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","version":"60'15","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986847+0000","last_change":"2026-03-10T10:14:14.951822+0000","last_active":"2026-03-10T10:14:42.986847+0000","last_peered":"2026-03-10T10:14:42.986847+0000","last_clean":"2026-03-10T10:14:42.986847+0000","last_became_active":"2026-03-10T10:14:14.951229+0000","last_became_peered":"2026-03-10T10:14:14.951229+0000","last_unstale":"2026-03-10T10:14:42.986847+0000","last_undegraded":"2026-03-10T10:14:42.986847+0000","last_fullsized":"2026-03-10T10:14:42.986847+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:34:31.646212+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994641+0000","last_change":"2026-03-10T10:14:12.946410+0000","last_active":"2026-03-10T10:14:42.994641+0000","last_peered":"2026-03-10T10:14:42.994641+0000","last_clean":"2026-03-10T10:14:42.994641+0000","last_became_active":"2026-03-10T10:14:12.946291+0000","last_became_peered":"2026-03-10T10:14:12.946291+0000","last_unstale":"2026-03-10T10:14:42.994641+0000","last_undegraded":"2026-03-10T10:14:42.994641+0000","last_fullsized":"2026-03-10T10:14:42.994641+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:16:03.035546+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"60'11","reported_seq":53,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:22.991484+0000","last_change":"2026-03-10T10:14:16.957980+0000","last_active":"2026-03-10T10:15:22.991484+0000","last_peered":"2026-03-10T10:15:22.991484+0000","last_clean":"2026-03-10T10:15:22.991484+0000","last_became_active":"2026-03-10T10:14:16.957669+0000","last_became_peered":"2026-03-10T10:14:16.957669+0000","last_unstale":"2026-03-10T10:15:22.991484+0000","last_undegraded":"2026-03-10T10:15:22.991484+0000","last_fullsized":"2026-03-10T10:15:22.991484+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:28:49.140275+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991447+0000","last_change":"2026-03-10T10:14:19.319997+0000","last_active":"2026-03-10T10:14:42.991447+0000","last_peered":"2026-03-10T10:14:42.991447+0000","last_clean":"2026-03-10T10:14:42.991447+0000","last_became_active":"2026-03-10T10:14:19.319913+0000","last_became_peered":"2026-03-10T10:14:19.319913+0000","last_unstale":"2026-03-10T10:14:42.991447+0000","last_undegraded":"2026-03-10T10:14:42.991447+0000","last_fullsized":"2026-03-10T10:14:42.991447+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:16:08.646739+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"60'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986928+0000","last_change":"2026-03-10T10:14:14.951764+0000","last_active":"2026-03-10T10:14:42.986928+0000","last_peered":"2026-03-10T10:14:42.986928+0000","last_clean":"2026-03-10T10:14:42.986928+0000","last_became_active":"2026-03-10T10:14:14.951106+0000","last_became_peered":"2026-03-10T10:14:14.951106+0000","last_unstale":"2026-03-10T10:14:42.986928+0000","last_undegraded":"2026-03-10T10:14:42.986928+0000","last_fullsized":"2026-03-10T10:14:42.986928+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:50:19.320521+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"53'2","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282020+0000","last_change":"2026-03-10T10:14:12.946050+0000","last_active":"2026-03-10T10:14:42.282020+0000","last_peered":"2026-03-10T10:14:42.282020+0000","last_clean":"2026-03-10T10:14:42.282020+0000","last_became_active":"2026-03-10T10:14:12.945736+0000","last_became_peered":"2026-03-10T10:14:12.945736+0000","last_unstale":"2026-03-10T10:14:42.282020+0000","last_undegraded":"2026-03-10T10:14:42.282020+0000","last_fullsized":"2026-03-10T10:14:42.282020+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:32:51.576554+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994992+0000","last_change":"2026-03-10T10:14:16.975977+0000","last_active":"2026-03-10T10:14:42.994992+0000","last_peered":"2026-03-10T10:14:42.994992+0000","last_clean":"2026-03-10T10:14:42.994992+0000","last_became_active":"2026-03-10T10:14:16.975898+0000","last_became_peered":"2026-03-10T10:14:16.975898+0000","last_unstale":"2026-03-10T10:14:42.994992+0000","last_undegraded":"2026-03-10T10:14:42.994992+0000","last_fullsized":"2026-03-10T10:14:42.994992+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:48:03.299964+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384756+0000","last_change":"2026-03-10T10:14:18.975880+0000","last_active":"2026-03-10T10:14:42.384756+0000","last_peered":"2026-03-10T10:14:42.384756+0000","last_clean":"2026-03-10T10:14:42.384756+0000","last_became_active":"2026-03-10T10:14:18.975790+0000","last_became_peered":"2026-03-10T10:14:18.975790+0000","last_unstale":"2026-03-10T10:14:42.384756+0000","last_undegraded":"2026-03-10T10:14:42.384756+0000","last_fullsized":"2026-03-10T10:14:42.384756+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:09:10.136046+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"60'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986175+0000","last_change":"2026-03-10T10:14:14.951923+0000","last_active":"2026-03-10T10:14:42.986175+0000","last_peered":"2026-03-10T10:14:42.986175+0000","last_clean":"2026-03-10T10:14:42.986175+0000","last_became_active":"2026-03-10T10:14:14.951543+0000","last_became_peered":"2026-03-10T10:14:14.951543+0000","last_unstale":"2026-03-10T10:14:42.986175+0000","last_undegraded":"2026-03-10T10:14:42.986175+0000","last_fullsized":"2026-03-10T10:14:42.986175+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:23:09.745001+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.10","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994616+0000","last_change":"2026-03-10T10:14:12.932000+0000","last_active":"2026-03-10T10:14:42.994616+0000","last_peered":"2026-03-10T10:14:42.994616+0000","last_clean":"2026-03-10T10:14:42.994616+0000","last_became_active":"2026-03-10T10:14:12.931852+0000","last_became_peered":"2026-03-10T10:14:12.931852+0000","last_unstale":"2026-03-10T10:14:42.994616+0000","last_undegraded":"2026-03-10T10:14:42.994616+0000","last_fullsized":"2026-03-10T10:14:42.994616+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:44:40.121048+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384850+0000","last_change":"2026-03-10T10:14:16.953076+0000","last_active":"2026-03-10T10:14:42.384850+0000","last_peered":"2026-03-10T10:14:42.384850+0000","last_clean":"2026-03-10T10:14:42.384850+0000","last_became_active":"2026-03-10T10:14:16.952909+0000","last_became_peered":"2026-03-10T10:14:16.952909+0000","last_unstale":"2026-03-10T10:14:42.384850+0000","last_undegraded":"2026-03-10T10:14:42.384850+0000","last_fullsized":"2026-03-10T10:14:42.384850+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:00:34.156358+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994464+0000","last_change":"2026-03-10T10:14:18.960071+0000","last_active":"2026-03-10T10:14:42.994464+0000","last_peered":"2026-03-10T10:14:42.994464+0000","last_clean":"2026-03-10T10:14:42.994464+0000","last_became_active":"2026-03-10T10:14:18.959929+0000","last_became_peered":"2026-03-10T10:14:18.959929+0000","last_unstale":"2026-03-10T10:14:42.994464+0000","last_undegraded":"2026-03-10T10:14:42.994464+0000","last_fullsized":"2026-03-10T10:14:42.994464+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:45:06.742064+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","version":"60'4","reported_seq":37,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992138+0000","last_change":"2026-03-10T10:14:14.945697+0000","last_active":"2026-03-10T10:14:42.992138+0000","last_peered":"2026-03-10T10:14:42.992138+0000","last_clean":"2026-03-10T10:14:42.992138+0000","last_became_active":"2026-03-10T10:14:14.945588+0000","last_became_peered":"2026-03-10T10:14:14.945588+0000","last_unstale":"2026-03-10T10:14:42.992138+0000","last_undegraded":"2026-03-10T10:14:42.992138+0000","last_fullsized":"2026-03-10T10:14:42.992138+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:54:00.384443+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992114+0000","last_change":"2026-03-10T10:14:12.926065+0000","last_active":"2026-03-10T10:14:42.992114+0000","last_peered":"2026-03-10T10:14:42.992114+0000","last_clean":"2026-03-10T10:14:42.992114+0000","last_became_active":"2026-03-10T10:14:12.925929+0000","last_became_peered":"2026-03-10T10:14:12.925929+0000","last_unstale":"2026-03-10T10:14:42.992114+0000","last_undegraded":"2026-03-10T10:14:42.992114+0000","last_fullsized":"2026-03-10T10:14:42.992114+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:44:36.158559+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992399+0000","last_change":"2026-03-10T10:14:16.971939+0000","last_active":"2026-03-10T10:14:42.992399+0000","last_peered":"2026-03-10T10:14:42.992399+0000","last_clean":"2026-03-10T10:14:42.992399+0000","last_became_active":"2026-03-10T10:14:16.971858+0000","last_became_peered":"2026-03-10T10:14:16.971858+0000","last_unstale":"2026-03-10T10:14:42.992399+0000","last_undegraded":"2026-03-10T10:14:42.992399+0000","last_fullsized":"2026-03-10T10:14:42.992399+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:35:59.126994+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.987281+0000","last_change":"2026-03-10T10:14:19.318956+0000","last_active":"2026-03-10T10:14:42.987281+0000","last_peered":"2026-03-10T10:14:42.987281+0000","last_clean":"2026-03-10T10:14:42.987281+0000","last_became_active":"2026-03-10T10:14:19.316531+0000","last_became_peered":"2026-03-10T10:14:19.316531+0000","last_unstale":"2026-03-10T10:14:42.987281+0000","last_undegraded":"2026-03-10T10:14:42.987281+0000","last_fullsized":"2026-03-10T10:14:42.987281+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:00:22.998681+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"60'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986277+0000","last_change":"2026-03-10T10:14:14.951872+0000","last_active":"2026-03-10T10:14:42.986277+0000","last_peered":"2026-03-10T10:14:42.986277+0000","last_clean":"2026-03-10T10:14:42.986277+0000","last_became_active":"2026-03-10T10:14:14.951441+0000","last_became_peered":"2026-03-10T10:14:14.951441+0000","last_unstale":"2026-03-10T10:14:42.986277+0000","last_undegraded":"2026-03-10T10:14:42.986277+0000","last_fullsized":"2026-03-10T10:14:42.986277+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:15:02.819585+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992056+0000","last_change":"2026-03-10T10:14:12.950885+0000","last_active":"2026-03-10T10:14:42.992056+0000","last_peered":"2026-03-10T10:14:42.992056+0000","last_clean":"2026-03-10T10:14:42.992056+0000","last_became_active":"2026-03-10T10:14:12.950733+0000","last_became_peered":"2026-03-10T10:14:12.950733+0000","last_unstale":"2026-03-10T10:14:42.992056+0000","last_undegraded":"2026-03-10T10:14:42.992056+0000","last_fullsized":"2026-03-10T10:14:42.992056+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:50:38.523969+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.15","version":"60'11","reported_seq":53,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:22.991458+0000","last_change":"2026-03-10T10:14:16.965142+0000","last_active":"2026-03-10T10:15:22.991458+0000","last_peered":"2026-03-10T10:15:22.991458+0000","last_clean":"2026-03-10T10:15:22.991458+0000","last_became_active":"2026-03-10T10:14:16.964919+0000","last_became_peered":"2026-03-10T10:14:16.964919+0000","last_unstale":"2026-03-10T10:15:22.991458+0000","last_undegraded":"2026-03-10T10:15:22.991458+0000","last_fullsized":"2026-03-10T10:15:22.991458+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:49:19.188262+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992828+0000","last_change":"2026-03-10T10:14:18.967988+0000","last_active":"2026-03-10T10:14:42.992828+0000","last_peered":"2026-03-10T10:14:42.992828+0000","last_clean":"2026-03-10T10:14:42.992828+0000","last_became_active":"2026-03-10T10:14:18.967911+0000","last_became_peered":"2026-03-10T10:14:18.967911+0000","last_unstale":"2026-03-10T10:14:42.992828+0000","last_undegraded":"2026-03-10T10:14:42.992828+0000","last_fullsized":"2026-03-10T10:14:42.992828+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:12:07.776130+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"60'9","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993210+0000","last_change":"2026-03-10T10:14:14.953102+0000","last_active":"2026-03-10T10:14:42.993210+0000","last_peered":"2026-03-10T10:14:42.993210+0000","last_clean":"2026-03-10T10:14:42.993210+0000","last_became_active":"2026-03-10T10:14:14.952739+0000","last_became_peered":"2026-03-10T10:14:14.952739+0000","last_unstale":"2026-03-10T10:14:42.993210+0000","last_undegraded":"2026-03-10T10:14:42.993210+0000","last_fullsized":"2026-03-10T10:14:42.993210+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:01:05.398407+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.13","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993188+0000","last_change":"2026-03-10T10:14:12.949349+0000","last_active":"2026-03-10T10:14:42.993188+0000","last_peered":"2026-03-10T10:14:42.993188+0000","last_clean":"2026-03-10T10:14:42.993188+0000","last_became_active":"2026-03-10T10:14:12.948871+0000","last_became_peered":"2026-03-10T10:14:12.948871+0000","last_unstale":"2026-03-10T10:14:42.993188+0000","last_undegraded":"2026-03-10T10:14:42.993188+0000","last_fullsized":"2026-03-10T10:14:42.993188+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:10:13.751894+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"60'11","reported_seq":53,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:22.992672+0000","last_change":"2026-03-10T10:14:16.969268+0000","last_active":"2026-03-10T10:15:22.992672+0000","last_peered":"2026-03-10T10:15:22.992672+0000","last_clean":"2026-03-10T10:15:22.992672+0000","last_became_active":"2026-03-10T10:14:16.969157+0000","last_became_peered":"2026-03-10T10:14:16.969157+0000","last_unstale":"2026-03-10T10:15:22.992672+0000","last_undegraded":"2026-03-10T10:15:22.992672+0000","last_fullsized":"2026-03-10T10:15:22.992672+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:26:02.431594+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281808+0000","last_change":"2026-03-10T10:14:18.972554+0000","last_active":"2026-03-10T10:14:42.281808+0000","last_peered":"2026-03-10T10:14:42.281808+0000","last_clean":"2026-03-10T10:14:42.281808+0000","last_became_active":"2026-03-10T10:14:18.972078+0000","last_became_peered":"2026-03-10T10:14:18.972078+0000","last_unstale":"2026-03-10T10:14:42.281808+0000","last_undegraded":"2026-03-10T10:14:42.281808+0000","last_fullsized":"2026-03-10T10:14:42.281808+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:42:54.122210+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15","version":"60'9","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986172+0000","last_change":"2026-03-10T10:14:14.951975+0000","last_active":"2026-03-10T10:14:42.986172+0000","last_peered":"2026-03-10T10:14:42.986172+0000","last_clean":"2026-03-10T10:14:42.986172+0000","last_became_active":"2026-03-10T10:14:14.951638+0000","last_became_peered":"2026-03-10T10:14:14.951638+0000","last_unstale":"2026-03-10T10:14:42.986172+0000","last_undegraded":"2026-03-10T10:14:42.986172+0000","last_fullsized":"2026-03-10T10:14:42.986172+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:18:03.689555+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992072+0000","last_change":"2026-03-10T10:14:12.944676+0000","last_active":"2026-03-10T10:14:42.992072+0000","last_peered":"2026-03-10T10:14:42.992072+0000","last_clean":"2026-03-10T10:14:42.992072+0000","last_became_active":"2026-03-10T10:14:12.944571+0000","last_became_peered":"2026-03-10T10:14:12.944571+0000","last_unstale":"2026-03-10T10:14:42.992072+0000","last_undegraded":"2026-03-10T10:14:42.992072+0000","last_fullsized":"2026-03-10T10:14:42.992072+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:49:51.094210+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385225+0000","last_change":"2026-03-10T10:14:16.953131+0000","last_active":"2026-03-10T10:14:42.385225+0000","last_peered":"2026-03-10T10:14:42.385225+0000","last_clean":"2026-03-10T10:14:42.385225+0000","last_became_active":"2026-03-10T10:14:16.953004+0000","last_became_peered":"2026-03-10T10:14:16.953004+0000","last_unstale":"2026-03-10T10:14:42.385225+0000","last_undegraded":"2026-03-10T10:14:42.385225+0000","last_fullsized":"2026-03-10T10:14:42.385225+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:23:31.712596+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992769+0000","last_change":"2026-03-10T10:14:18.973547+0000","last_active":"2026-03-10T10:14:42.992769+0000","last_peered":"2026-03-10T10:14:42.992769+0000","last_clean":"2026-03-10T10:14:42.992769+0000","last_became_active":"2026-03-10T10:14:18.972495+0000","last_became_peered":"2026-03-10T10:14:18.972495+0000","last_unstale":"2026-03-10T10:14:42.992769+0000","last_undegraded":"2026-03-10T10:14:42.992769+0000","last_fullsized":"2026-03-10T10:14:42.992769+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:41:42.468683+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"60'10","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.280750+0000","last_change":"2026-03-10T10:14:14.948291+0000","last_active":"2026-03-10T10:14:42.280750+0000","last_peered":"2026-03-10T10:14:42.280750+0000","last_clean":"2026-03-10T10:14:42.280750+0000","last_became_active":"2026-03-10T10:14:14.948202+0000","last_became_peered":"2026-03-10T10:14:14.948202+0000","last_unstale":"2026-03-10T10:14:42.280750+0000","last_undegraded":"2026-03-10T10:14:42.280750+0000","last_fullsized":"2026-03-10T10:14:42.280750+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:16:18.954084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281093+0000","last_change":"2026-03-10T10:14:12.940058+0000","last_active":"2026-03-10T10:14:42.281093+0000","last_peered":"2026-03-10T10:14:42.281093+0000","last_clean":"2026-03-10T10:14:42.281093+0000","last_became_active":"2026-03-10T10:14:12.939930+0000","last_became_peered":"2026-03-10T10:14:12.939930+0000","last_unstale":"2026-03-10T10:14:42.281093+0000","last_undegraded":"2026-03-10T10:14:42.281093+0000","last_fullsized":"2026-03-10T10:14:42.281093+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:22:47.235890+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281119+0000","last_change":"2026-03-10T10:14:16.980497+0000","last_active":"2026-03-10T10:14:42.281119+0000","last_peered":"2026-03-10T10:14:42.281119+0000","last_clean":"2026-03-10T10:14:42.281119+0000","last_became_active":"2026-03-10T10:14:16.980383+0000","last_became_peered":"2026-03-10T10:14:16.980383+0000","last_unstale":"2026-03-10T10:14:42.281119+0000","last_undegraded":"2026-03-10T10:14:42.281119+0000","last_fullsized":"2026-03-10T10:14:42.281119+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:51:20.407631+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384787+0000","last_change":"2026-03-10T10:14:18.977284+0000","last_active":"2026-03-10T10:14:42.384787+0000","last_peered":"2026-03-10T10:14:42.384787+0000","last_clean":"2026-03-10T10:14:42.384787+0000","last_became_active":"2026-03-10T10:14:18.976997+0000","last_became_peered":"2026-03-10T10:14:18.976997+0000","last_unstale":"2026-03-10T10:14:42.384787+0000","last_undegraded":"2026-03-10T10:14:42.384787+0000","last_fullsized":"2026-03-10T10:14:42.384787+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:42:40.284612+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"60'6","reported_seq":40,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993582+0000","last_change":"2026-03-10T10:14:14.950503+0000","last_active":"2026-03-10T10:14:42.993582+0000","last_peered":"2026-03-10T10:14:42.993582+0000","last_clean":"2026-03-10T10:14:42.993582+0000","last_became_active":"2026-03-10T10:14:14.949413+0000","last_became_peered":"2026-03-10T10:14:14.949413+0000","last_unstale":"2026-03-10T10:14:42.993582+0000","last_undegraded":"2026-03-10T10:14:42.993582+0000","last_fullsized":"2026-03-10T10:14:42.993582+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:34:11.235583+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992014+0000","last_change":"2026-03-10T10:14:12.939019+0000","last_active":"2026-03-10T10:14:42.992014+0000","last_peered":"2026-03-10T10:14:42.992014+0000","last_clean":"2026-03-10T10:14:42.992014+0000","last_became_active":"2026-03-10T10:14:12.938918+0000","last_became_peered":"2026-03-10T10:14:12.938918+0000","last_unstale":"2026-03-10T10:14:42.992014+0000","last_undegraded":"2026-03-10T10:14:42.992014+0000","last_fullsized":"2026-03-10T10:14:42.992014+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:41:41.170089+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991764+0000","last_change":"2026-03-10T10:14:16.953276+0000","last_active":"2026-03-10T10:14:42.991764+0000","last_peered":"2026-03-10T10:14:42.991764+0000","last_clean":"2026-03-10T10:14:42.991764+0000","last_became_active":"2026-03-10T10:14:16.953179+0000","last_became_peered":"2026-03-10T10:14:16.953179+0000","last_unstale":"2026-03-10T10:14:42.991764+0000","last_undegraded":"2026-03-10T10:14:42.991764+0000","last_fullsized":"2026-03-10T10:14:42.991764+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:54:47.477688+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986024+0000","last_change":"2026-03-10T10:14:18.964714+0000","last_active":"2026-03-10T10:14:42.986024+0000","last_peered":"2026-03-10T10:14:42.986024+0000","last_clean":"2026-03-10T10:14:42.986024+0000","last_became_active":"2026-03-10T10:14:18.964413+0000","last_became_peered":"2026-03-10T10:14:18.964413+0000","last_unstale":"2026-03-10T10:14:42.986024+0000","last_undegraded":"2026-03-10T10:14:42.986024+0000","last_fullsized":"2026-03-10T10:14:42.986024+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:46:26.335677+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","version":"60'9","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991800+0000","last_change":"2026-03-10T10:14:14.953172+0000","last_active":"2026-03-10T10:14:42.991800+0000","last_peered":"2026-03-10T10:14:42.991800+0000","last_clean":"2026-03-10T10:14:42.991800+0000","last_became_active":"2026-03-10T10:14:14.953016+0000","last_became_peered":"2026-03-10T10:14:14.953016+0000","last_unstale":"2026-03-10T10:14:42.991800+0000","last_undegraded":"2026-03-10T10:14:42.991800+0000","last_fullsized":"2026-03-10T10:14:42.991800+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:41:32.775380+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992003+0000","last_change":"2026-03-10T10:14:12.939818+0000","last_active":"2026-03-10T10:14:42.992003+0000","last_peered":"2026-03-10T10:14:42.992003+0000","last_clean":"2026-03-10T10:14:42.992003+0000","last_became_active":"2026-03-10T10:14:12.939654+0000","last_became_peered":"2026-03-10T10:14:12.939654+0000","last_unstale":"2026-03-10T10:14:42.992003+0000","last_undegraded":"2026-03-10T10:14:42.992003+0000","last_fullsized":"2026-03-10T10:14:42.992003+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:31:09.237927+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.987070+0000","last_change":"2026-03-10T10:14:16.958911+0000","last_active":"2026-03-10T10:14:42.987070+0000","last_peered":"2026-03-10T10:14:42.987070+0000","last_clean":"2026-03-10T10:14:42.987070+0000","last_became_active":"2026-03-10T10:14:16.958832+0000","last_became_peered":"2026-03-10T10:14:16.958832+0000","last_unstale":"2026-03-10T10:14:42.987070+0000","last_undegraded":"2026-03-10T10:14:42.987070+0000","last_fullsized":"2026-03-10T10:14:42.987070+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:53:01.679616+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384881+0000","last_change":"2026-03-10T10:14:19.316329+0000","last_active":"2026-03-10T10:14:42.384881+0000","last_peered":"2026-03-10T10:14:42.384881+0000","last_clean":"2026-03-10T10:14:42.384881+0000","last_became_active":"2026-03-10T10:14:19.316018+0000","last_became_peered":"2026-03-10T10:14:19.316018+0000","last_unstale":"2026-03-10T10:14:42.384881+0000","last_undegraded":"2026-03-10T10:14:42.384881+0000","last_fullsized":"2026-03-10T10:14:42.384881+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:02:08.182524+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"60'1","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986389+0000","last_change":"2026-03-10T10:14:18.971822+0000","last_active":"2026-03-10T10:14:42.986389+0000","last_peered":"2026-03-10T10:14:42.986389+0000","last_clean":"2026-03-10T10:14:42.986389+0000","last_became_active":"2026-03-10T10:14:18.971595+0000","last_became_peered":"2026-03-10T10:14:18.971595+0000","last_unstale":"2026-03-10T10:14:42.986389+0000","last_undegraded":"2026-03-10T10:14:42.986389+0000","last_fullsized":"2026-03-10T10:14:42.986389+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:07:27.535419+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"60'15","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281508+0000","last_change":"2026-03-10T10:14:14.949777+0000","last_active":"2026-03-10T10:14:42.281508+0000","last_peered":"2026-03-10T10:14:42.281508+0000","last_clean":"2026-03-10T10:14:42.281508+0000","last_became_active":"2026-03-10T10:14:14.949201+0000","last_became_peered":"2026-03-10T10:14:14.949201+0000","last_unstale":"2026-03-10T10:14:42.281508+0000","last_undegraded":"2026-03-10T10:14:42.281508+0000","last_fullsized":"2026-03-10T10:14:42.281508+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:19:35.165985+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.18","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992178+0000","last_change":"2026-03-10T10:14:12.948563+0000","last_active":"2026-03-10T10:14:42.992178+0000","last_peered":"2026-03-10T10:14:42.992178+0000","last_clean":"2026-03-10T10:14:42.992178+0000","last_became_active":"2026-03-10T10:14:12.948398+0000","last_became_peered":"2026-03-10T10:14:12.948398+0000","last_unstale":"2026-03-10T10:14:42.992178+0000","last_undegraded":"2026-03-10T10:14:42.992178+0000","last_fullsized":"2026-03-10T10:14:42.992178+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:52:12.263521+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"60'11","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:22.991215+0000","last_change":"2026-03-10T10:14:16.959146+0000","last_active":"2026-03-10T10:15:22.991215+0000","last_peered":"2026-03-10T10:15:22.991215+0000","last_clean":"2026-03-10T10:15:22.991215+0000","last_became_active":"2026-03-10T10:14:16.958966+0000","last_became_peered":"2026-03-10T10:14:16.958966+0000","last_unstale":"2026-03-10T10:15:22.991215+0000","last_undegraded":"2026-03-10T10:15:22.991215+0000","last_fullsized":"2026-03-10T10:15:22.991215+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:53:10.085138+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282102+0000","last_change":"2026-03-10T10:14:18.974109+0000","last_active":"2026-03-10T10:14:42.282102+0000","last_peered":"2026-03-10T10:14:42.282102+0000","last_clean":"2026-03-10T10:14:42.282102+0000","last_became_active":"2026-03-10T10:14:18.974041+0000","last_became_peered":"2026-03-10T10:14:18.974041+0000","last_unstale":"2026-03-10T10:14:42.282102+0000","last_undegraded":"2026-03-10T10:14:42.282102+0000","last_fullsized":"2026-03-10T10:14:42.282102+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:51:01.760692+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"60'9","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385763+0000","last_change":"2026-03-10T10:14:14.953631+0000","last_active":"2026-03-10T10:14:42.385763+0000","last_peered":"2026-03-10T10:14:42.385763+0000","last_clean":"2026-03-10T10:14:42.385763+0000","last_became_active":"2026-03-10T10:14:14.951122+0000","last_became_peered":"2026-03-10T10:14:14.951122+0000","last_unstale":"2026-03-10T10:14:42.385763+0000","last_undegraded":"2026-03-10T10:14:42.385763+0000","last_fullsized":"2026-03-10T10:14:42.385763+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:31:34.325912+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"53'1","reported_seq":36,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385744+0000","last_change":"2026-03-10T10:14:12.944848+0000","last_active":"2026-03-10T10:14:42.385744+0000","last_peered":"2026-03-10T10:14:42.385744+0000","last_clean":"2026-03-10T10:14:42.385744+0000","last_became_active":"2026-03-10T10:14:12.943948+0000","last_became_peered":"2026-03-10T10:14:12.943948+0000","last_unstale":"2026-03-10T10:14:42.385744+0000","last_undegraded":"2026-03-10T10:14:42.385744+0000","last_fullsized":"2026-03-10T10:14:42.385744+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:51:19.515363+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993486+0000","last_change":"2026-03-10T10:14:16.965759+0000","last_active":"2026-03-10T10:14:42.993486+0000","last_peered":"2026-03-10T10:14:42.993486+0000","last_clean":"2026-03-10T10:14:42.993486+0000","last_became_active":"2026-03-10T10:14:16.965615+0000","last_became_peered":"2026-03-10T10:14:16.965615+0000","last_unstale":"2026-03-10T10:14:42.993486+0000","last_undegraded":"2026-03-10T10:14:42.993486+0000","last_fullsized":"2026-03-10T10:14:42.993486+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:54:41.778025+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281683+0000","last_change":"2026-03-10T10:14:19.315389+0000","last_active":"2026-03-10T10:14:42.281683+0000","last_peered":"2026-03-10T10:14:42.281683+0000","last_clean":"2026-03-10T10:14:42.281683+0000","last_became_active":"2026-03-10T10:14:19.315279+0000","last_became_peered":"2026-03-10T10:14:19.315279+0000","last_unstale":"2026-03-10T10:14:42.281683+0000","last_undegraded":"2026-03-10T10:14:42.281683+0000","last_fullsized":"2026-03-10T10:14:42.281683+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:09:35.750325+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991945+0000","last_change":"2026-03-10T10:14:12.942253+0000","last_active":"2026-03-10T10:14:42.991945+0000","last_peered":"2026-03-10T10:14:42.991945+0000","last_clean":"2026-03-10T10:14:42.991945+0000","last_became_active":"2026-03-10T10:14:12.942074+0000","last_became_peered":"2026-03-10T10:14:12.942074+0000","last_unstale":"2026-03-10T10:14:42.991945+0000","last_undegraded":"2026-03-10T10:14:42.991945+0000","last_fullsized":"2026-03-10T10:14:42.991945+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:57:53.020154+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"60'5","reported_seq":41,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993635+0000","last_change":"2026-03-10T10:14:14.960701+0000","last_active":"2026-03-10T10:14:42.993635+0000","last_peered":"2026-03-10T10:14:42.993635+0000","last_clean":"2026-03-10T10:14:42.993635+0000","last_became_active":"2026-03-10T10:14:14.960493+0000","last_became_peered":"2026-03-10T10:14:14.960493+0000","last_unstale":"2026-03-10T10:14:42.993635+0000","last_undegraded":"2026-03-10T10:14:42.993635+0000","last_fullsized":"2026-03-10T10:14:42.993635+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:12:56.098048+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282185+0000","last_change":"2026-03-10T10:14:16.955414+0000","last_active":"2026-03-10T10:14:42.282185+0000","last_peered":"2026-03-10T10:14:42.282185+0000","last_clean":"2026-03-10T10:14:42.282185+0000","last_became_active":"2026-03-10T10:14:16.954995+0000","last_became_peered":"2026-03-10T10:14:16.954995+0000","last_unstale":"2026-03-10T10:14:42.282185+0000","last_undegraded":"2026-03-10T10:14:42.282185+0000","last_fullsized":"2026-03-10T10:14:42.282185+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:38:22.812402+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385440+0000","last_change":"2026-03-10T10:14:19.316915+0000","last_active":"2026-03-10T10:14:42.385440+0000","last_peered":"2026-03-10T10:14:42.385440+0000","last_clean":"2026-03-10T10:14:42.385440+0000","last_became_active":"2026-03-10T10:14:19.316153+0000","last_became_peered":"2026-03-10T10:14:19.316153+0000","last_unstale":"2026-03-10T10:14:42.385440+0000","last_undegraded":"2026-03-10T10:14:42.385440+0000","last_fullsized":"2026-03-10T10:14:42.385440+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:35:53.622709+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385482+0000","last_change":"2026-03-10T10:14:12.950790+0000","last_active":"2026-03-10T10:14:42.385482+0000","last_peered":"2026-03-10T10:14:42.385482+0000","last_clean":"2026-03-10T10:14:42.385482+0000","last_became_active":"2026-03-10T10:14:12.950571+0000","last_became_peered":"2026-03-10T10:14:12.950571+0000","last_unstale":"2026-03-10T10:14:42.385482+0000","last_undegraded":"2026-03-10T10:14:42.385482+0000","last_fullsized":"2026-03-10T10:14:42.385482+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:38:24.753287+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"60'9","reported_seq":46,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282081+0000","last_change":"2026-03-10T10:14:14.947375+0000","last_active":"2026-03-10T10:14:42.282081+0000","last_peered":"2026-03-10T10:14:42.282081+0000","last_clean":"2026-03-10T10:14:42.282081+0000","last_became_active":"2026-03-10T10:14:14.942646+0000","last_became_peered":"2026-03-10T10:14:14.942646+0000","last_unstale":"2026-03-10T10:14:42.282081+0000","last_undegraded":"2026-03-10T10:14:42.282081+0000","last_fullsized":"2026-03-10T10:14:42.282081+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:04:14.957909+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":26,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282092+0000","last_change":"2026-03-10T10:14:16.962239+0000","last_active":"2026-03-10T10:14:42.282092+0000","last_peered":"2026-03-10T10:14:42.282092+0000","last_clean":"2026-03-10T10:14:42.282092+0000","last_became_active":"2026-03-10T10:14:16.962160+0000","last_became_peered":"2026-03-10T10:14:16.962160+0000","last_unstale":"2026-03-10T10:14:42.282092+0000","last_undegraded":"2026-03-10T10:14:42.282092+0000","last_fullsized":"2026-03-10T10:14:42.282092+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:44:29.176451+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":66,"num_read_kb":61,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"int
ernal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":49,"seq":210453397526,"num_pgs":59,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27956,"kb_used_data":1124,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939468,"statfs":{"total":21470642176,"available":21442015232,"internally_reserved":0,"allocated":1150976,"data_stored":713180,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":43,"seq":184683593756,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27916,"kb_used_data":1084,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939508,"statfs":{"total":21470642176,"available":21442056192,"internally_reserved":0,"allocated":1110016,"data_stored":710396,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":37,"seq":158913789987,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27484,"kb_used_data":640,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939940,"statfs":{"total":21470642176,"available":21442498560,"internally_reserved":0,"allocated":655360,"data_stored":251954,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up
_from":31,"seq":133143986217,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27516,"kb_used_data":676,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939908,"statfs":{"total":21470642176,"available":21442465792,"internally_reserved":0,"allocated":692224,"data_stored":253054,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149744,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27480,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939944,"statfs":{"total":21470642176,"available":21442502656,"internally_reserved":0,"allocated":651264,"data_stored":251516,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411383,"num_pgs":39,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27488,"kb_used_data":644,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939936,"statfs":{"total":21470642176,"available":21442494464,"internally_reserved":0,"allocated":659456,"data_stored":252058,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574909,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27492,"kb_used_data":648,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939932,"statfs":{"total":21470642176,"available":21442490368,"internally_reserved":0,"allocated":663552,"data_stored":251447,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738436,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27936,"kb_used_data":1104,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939488,"statfs":{"total":21470642176,"available":21442035712,"internally_reserved":0,"allocated":1130496,"data_stored":712044,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_s
tat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1521,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"in
ternally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0
,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T10:15:30.925 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph pg dump --format=json 2026-03-10T10:15:32.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:32 vm07 bash[23367]: cluster 2026-03-10T10:15:30.302053+0000 mgr.y (mgr.24422) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:32.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:32 vm07 bash[23367]: cluster 2026-03-10T10:15:30.302053+0000 mgr.y (mgr.24422) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:32.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:32 vm07 bash[23367]: audit 2026-03-10T10:15:30.850108+0000 mgr.y (mgr.24422) 64 : audit [DBG] from='client.14607 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:32.515 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:32 vm07 bash[23367]: audit 2026-03-10T10:15:30.850108+0000 mgr.y (mgr.24422) 64 : audit [DBG] from='client.14607 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:32 vm04 bash[28289]: cluster 2026-03-10T10:15:30.302053+0000 mgr.y (mgr.24422) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:32 vm04 bash[28289]: cluster 2026-03-10T10:15:30.302053+0000 mgr.y (mgr.24422) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:32 vm04 bash[28289]: audit 2026-03-10T10:15:30.850108+0000 mgr.y (mgr.24422) 64 : audit [DBG] from='client.14607 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:32 vm04 bash[28289]: audit 2026-03-10T10:15:30.850108+0000 mgr.y (mgr.24422) 64 : audit [DBG] from='client.14607 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:32 vm04 bash[20742]: cluster 2026-03-10T10:15:30.302053+0000 mgr.y (mgr.24422) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:32 vm04 bash[20742]: cluster 2026-03-10T10:15:30.302053+0000 mgr.y (mgr.24422) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:32 vm04 bash[20742]: audit 2026-03-10T10:15:30.850108+0000 mgr.y (mgr.24422) 64 : audit [DBG] from='client.14607 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:32 vm04 bash[20742]: audit 2026-03-10T10:15:30.850108+0000 mgr.y (mgr.24422) 64 : audit [DBG] from='client.14607 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:15:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:15:34.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:34 vm07 bash[23367]: cluster 2026-03-10T10:15:32.302271+0000 mgr.y (mgr.24422) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:15:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:34 vm07 bash[23367]: cluster 2026-03-10T10:15:32.302271+0000 mgr.y (mgr.24422) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:15:34.630 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:34.645 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:34 vm04 bash[20742]: cluster 2026-03-10T10:15:32.302271+0000 mgr.y (mgr.24422) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:15:34.645 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:34 vm04 bash[20742]: cluster 2026-03-10T10:15:32.302271+0000 mgr.y (mgr.24422) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:15:34.645 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:34 vm04 bash[28289]: cluster 2026-03-10T10:15:32.302271+0000 mgr.y (mgr.24422) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:15:34.645 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:34 vm04 bash[28289]: cluster 2026-03-10T10:15:32.302271+0000 mgr.y (mgr.24422) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:15:34.784 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 -- 192.168.123.104:0/2639541644 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4790104f40 msgr2=0x7f4790105320 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:34.784 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 --2- 192.168.123.104:0/2639541644 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4790104f40 0x7f4790105320 secure :-1 s=READY pgs=165 cs=0 l=1 rev1=1 crypto rx=0x7f4778009a30 tx=0x7f477802f220 comp rx=0 tx=0).stop 2026-03-10T10:15:34.784 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 -- 192.168.123.104:0/2639541644 shutdown_connections 2026-03-10T10:15:34.784 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 --2- 192.168.123.104:0/2639541644 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f479010a070 0x7f4790111bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.784 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 --2- 192.168.123.104:0/2639541644 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f47901058f0 0x7f4790109940 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.784 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 --2- 192.168.123.104:0/2639541644 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4790104f40 0x7f4790105320 unknown :-1 s=CLOSED pgs=165 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.784 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 -- 192.168.123.104:0/2639541644 >> 192.168.123.104:0/2639541644 conn(0x7f47901009e0 msgr2=0x7f4790102e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:34.784 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 -- 192.168.123.104:0/2639541644 shutdown_connections 2026-03-10T10:15:34.784 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 -- 192.168.123.104:0/2639541644 wait complete. 
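Note: the "cephadm ... shell --fsid ... -- ceph pg dump --format=json" command above is how this job samples PG state; the mgr's reply (mgr_command_reply ... "dumped all", further down) carries the full JSON that follows, whose pg_map.pg_stats array holds one entry per PG with a "state" field. Below is a minimal sketch of a clean-cluster check built on that output — the helper name and the use of subprocess are illustrative, not teuthology's actual implementation; the cephadm path, image, and fsid are copied from the log.

import json
import subprocess

# Illustrative sketch (not teuthology's helper): re-run the same pg dump
# seen in the log above and report whether every PG is active+clean.
def all_pgs_clean(
    fsid="e4c1c9d6-1c68-11f1-a9bd-116050875839",
    image="quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
):
    cmd = [
        "sudo", "/home/ubuntu/cephtest/cephadm", "--image", image,
        "shell", "--fsid", fsid, "--",
        "ceph", "pg", "dump", "--format=json",
    ]
    dump = json.loads(subprocess.check_output(cmd))
    # pg_map -> pg_stats -> state: exactly the keys visible in the dump below.
    return all(pg["state"] == "active+clean" for pg in dump["pg_map"]["pg_stats"])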
2026-03-10T10:15:34.785 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 Processor -- start 2026-03-10T10:15:34.785 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 -- start start 2026-03-10T10:15:34.785 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4790104f40 0x7f47901a2750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f47901058f0 0x7f47901a2c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f479010a070 0x7f479019c910 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f4790114570 con 0x7f4790104f40 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f47901143f0 con 0x7f479010a070 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f4795316640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f47901146f0 con 0x7f47901058f0 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478effd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4790104f40 0x7f47901a2750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478effd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4790104f40 0x7f47901a2750 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:52530/0 (socket says 192.168.123.104:52530) 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478effd640 1 -- 192.168.123.104:0/2780054650 learned_addr learned my addr 192.168.123.104:0/2780054650 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478effd640 1 -- 192.168.123.104:0/2780054650 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f47901058f0 msgr2=0x7f47901a2c90 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478f7fe640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f479010a070 0x7f479019c910 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:34.786 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478e7fc640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f47901058f0 0x7f47901a2c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478effd640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f47901058f0 0x7f47901a2c90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478effd640 1 -- 192.168.123.104:0/2780054650 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f479010a070 msgr2=0x7f479019c910 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478effd640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f479010a070 0x7f479019c910 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478effd640 1 -- 192.168.123.104:0/2780054650 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f479019d040 con 0x7f4790104f40 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478f7fe640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f479010a070 0x7f479019c910 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.781+0000 7f478e7fc640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f47901058f0 0x7f47901a2c90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
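Note: the "--2-" stderr lines above and below are Ceph messenger-v2 debug output from the short-lived client spawned by the shell command: each connection advances through s=BANNER_CONNECTING, s=HELLO_CONNECTING, and s=AUTH_CONNECTING to s=READY, and once a mon session is established the extra mon connections are dropped (mark_down, s=CLOSED). A minimal sketch for pulling that state sequence out of a log like this one follows; the file name is hypothetical.

import re

# Illustrative sketch: yield the msgr2 connection states (the "s=..." field
# inside "conn(...)") in the order they appear in a teuthology log, e.g.
# BANNER_CONNECTING -> HELLO_CONNECTING -> AUTH_CONNECTING -> READY.
STATE_RE = re.compile(r"conn\(\S+ \S+ \S+ :-1 s=([A-Z_]+)")

def msgr2_states(path="teuthology.log"):  # hypothetical log path
    with open(path) as f:
        for line in f:
            m = STATE_RE.search(line)
            if m:
                yield m.group(1)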
2026-03-10T10:15:34.786 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.785+0000 7f478effd640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4790104f40 0x7f47901a2750 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7f477802f730 tx=0x7f477802fcb0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:34.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.785+0000 7f476ffff640 1 -- 192.168.123.104:0/2780054650 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4778004280 con 0x7f4790104f40 2026-03-10T10:15:34.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.785+0000 7f476ffff640 1 -- 192.168.123.104:0/2780054650 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f4778004420 con 0x7f4790104f40 2026-03-10T10:15:34.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.785+0000 7f476ffff640 1 -- 192.168.123.104:0/2780054650 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f4778038cd0 con 0x7f4790104f40 2026-03-10T10:15:34.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.785+0000 7f4795316640 1 -- 192.168.123.104:0/2780054650 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f479019d2d0 con 0x7f4790104f40 2026-03-10T10:15:34.787 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.785+0000 7f4795316640 1 -- 192.168.123.104:0/2780054650 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f479019d730 con 0x7f4790104f40 2026-03-10T10:15:34.788 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.785+0000 7f476ffff640 1 -- 192.168.123.104:0/2780054650 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f47780047c0 con 0x7f4790104f40 2026-03-10T10:15:34.789 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.785+0000 7f4795316640 1 -- 192.168.123.104:0/2780054650 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f4790106420 con 0x7f4790104f40 2026-03-10T10:15:34.791 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.785+0000 7f476ffff640 1 --2- 192.168.123.104:0/2780054650 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4764077630 0x7f4764079af0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:34.791 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.785+0000 7f476ffff640 1 -- 192.168.123.104:0/2780054650 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(65..65 src has 1..65) ==== 6181+0+0 (secure 0 0 0) 0x7f47780bda50 con 0x7f4790104f40 2026-03-10T10:15:34.791 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.789+0000 7f478e7fc640 1 --2- 192.168.123.104:0/2780054650 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4764077630 0x7f4764079af0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:34.792 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.789+0000 7f478e7fc640 1 --2- 192.168.123.104:0/2780054650 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4764077630 0x7f4764079af0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f4790102430 tx=0x7f4784005e30 comp rx=0 tx=0).ready 
entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:34.792 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.789+0000 7f476ffff640 1 -- 192.168.123.104:0/2780054650 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4778041330 con 0x7f4790104f40 2026-03-10T10:15:34.880 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.877+0000 7f4795316640 1 -- 192.168.123.104:0/2780054650 --> v2:192.168.123.104:6800/3326026257 -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f4790105320 con 0x7f4764077630 2026-03-10T10:15:34.885 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.881+0000 7f476ffff640 1 -- 192.168.123.104:0/2780054650 <== mgr.24422 v2:192.168.123.104:6800/3326026257 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+346473 (secure 0 0 0) 0x7f4790105320 con 0x7f4764077630 2026-03-10T10:15:34.885 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:15:34.887 INFO:teuthology.orchestra.run.vm04.stderr:dumped all 2026-03-10T10:15:34.888 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 -- 192.168.123.104:0/2780054650 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4764077630 msgr2=0x7f4764079af0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:34.888 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 --2- 192.168.123.104:0/2780054650 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4764077630 0x7f4764079af0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f4790102430 tx=0x7f4784005e30 comp rx=0 tx=0).stop 2026-03-10T10:15:34.888 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 -- 192.168.123.104:0/2780054650 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4790104f40 msgr2=0x7f47901a2750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:34.888 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4790104f40 0x7f47901a2750 secure :-1 s=READY pgs=166 cs=0 l=1 rev1=1 crypto rx=0x7f477802f730 tx=0x7f477802fcb0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.889 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 -- 192.168.123.104:0/2780054650 shutdown_connections 2026-03-10T10:15:34.889 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 --2- 192.168.123.104:0/2780054650 >> v2:192.168.123.104:6800/3326026257 conn(0x7f4764077630 0x7f4764079af0 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.889 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f479010a070 0x7f479019c910 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.889 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f47901058f0 0x7f47901a2c90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.889 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 --2- 192.168.123.104:0/2780054650 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4790104f40 0x7f47901a2750 unknown :-1 s=CLOSED pgs=166 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:34.889 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 -- 192.168.123.104:0/2780054650 >> 192.168.123.104:0/2780054650 conn(0x7f47901009e0 msgr2=0x7f47901010c0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:34.889 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 -- 192.168.123.104:0/2780054650 shutdown_connections 2026-03-10T10:15:34.889 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:34.885+0000 7f4795316640 1 -- 192.168.123.104:0/2780054650 wait complete. 2026-03-10T10:15:34.948 INFO:teuthology.orchestra.run.vm04.stdout:{"pg_ready":true,"pg_map":{"version":29,"stamp":"2026-03-10T10:15:34.302412+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":915,"num_read_kb":774,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221268,"kb_used_data":6556,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518124,"statfs":{"total":171765137408,"available":171538558976,"internally_reserved":0,"allocated":6713344,"data_stored":3395649,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects
_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001732"},"pg_stats":[{"pgid":"6.1b","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385778+0000","last_change":"2026-03-10T10:14:19.317580+0000","last_active":"2026-03-10T10:14:42.385778+0000","last_peered":"2026-03-10T10:14:42.385778+0000","last_clean":"2026-03-10T10:14:42.385778+0000","last_became_active":"2026-03-10T10:14:19.317138+0000","last_became_peered":"2026-03-10T10:14:19.317138+0000","last_unstale":"2026-03-10T10:14:42.385778+0000","last_undegraded":"2026-03-10T10:14:42.385778+0000","last_fullsized":"2026-03-10T10:14:42.385778+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:45:13.473349+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993127+0000","last_change":"2026-03-10T10:14:12.941865+0000","last_active":"2026-03-10T10:14:42.993127+0000","last_peered":"2026-03-10T10:14:42.993127+0000","last_clean":"2026-03-10T10:14:42.993127+0000","last_became_active":"2026-03-10T10:14:12.941718+0000","last_became_peered":"2026-03-10T10:14:12.941718+0000","last_unstale":"2026-03-10T10:14:42.993127+0000","last_undegraded":"2026-03-10T10:14:42.993127+0000","last_fullsized":"2026-03-10T10:14:42.993127+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:47:48.070949+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"60'10","reported_seq":46,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385800+0000","last_change":"2026-03-10T10:14:14.954713+0000","last_active":"2026-03-10T10:14:42.385800+0000","last_peered":"2026-03-10T10:14:42.385800+0000","last_clean":"2026-03-10T10:14:42.385800+0000","last_became_active":"2026-03-10T10:14:14.953527+0000","last_became_peered":"2026-03-10T10:14:14.953527+0000","last_unstale":"2026-03-10T10:14:42.385800+0000","last_undegraded":"2026-03-10T10:14:42.385800+0000","last_fullsized":"2026-03-10T10:14:42.385800+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:50:42.294681+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":26,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281026+0000","last_change":"2026-03-10T10:14:16.952114+0000","last_active":"2026-03-10T10:14:42.281026+0000","last_peered":"2026-03-10T10:14:42.281026+0000","last_clean":"2026-03-10T10:14:42.281026+0000","last_became_active":"2026-03-10T10:14:16.951912+0000","last_became_peered":"2026-03-10T10:14:16.951912+0000","last_unstale":"2026-03-10T10:14:42.281026+0000","last_undegraded":"2026-03-10T10:14:42.281026+0000","last_fullsized":"2026-03-10T10:14:42.281026+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:24:21.428499+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1e","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385400+0000","last_change":"2026-03-10T10:14:12.950570+0000","last_active":"2026-03-10T10:14:42.385400+0000","last_peered":"2026-03-10T10:14:42.385400+0000","last_clean":"2026-03-10T10:14:42.385400+0000","last_became_active":"2026-03-10T10:14:12.945351+0000","last_became_peered":"2026-03-10T10:14:12.945351+0000","last_unstale":"2026-03-10T10:14:42.385400+0000","last_undegraded":"2026-03-10T10:14:42.385400+0000","last_fullsized":"2026-03-10T10:14:42.385400+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:52:17.682926+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"60'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993454+0000","last_change":"2026-03-10T10:14:14.949062+0000","last_active":"2026-03-10T10:14:42.993454+0000","last_peered":"2026-03-10T10:14:42.993454+0000","last_clean":"2026-03-10T10:14:42.993454+0000","last_became_active":"2026-03-10T10:14:14.947353+0000","last_became_peered":"2026-03-10T10:14:14.947353+0000","last_unstale":"2026-03-10T10:14:42.993454+0000","last_undegraded":"2026-03-10T10:14:42.993454+0000","last_fullsized":"2026-03-10T10:14:42.993454+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:20:32.731864+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282162+0000","last_change":"2026-03-10T10:14:16.959010+0000","last_active":"2026-03-10T10:14:42.282162+0000","last_peered":"2026-03-10T10:14:42.282162+0000","last_clean":"2026-03-10T10:14:42.282162+0000","last_became_active":"2026-03-10T10:14:16.958915+0000","last_became_peered":"2026-03-10T10:14:16.958915+0000","last_unstale":"2026-03-10T10:14:42.282162+0000","last_undegraded":"2026-03-10T10:14:42.282162+0000","last_fullsized":"2026-03-10T10:14:42.282162+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:47:57.570171+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1a","version":"0'0","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.280591+0000","last_change":"2026-03-10T10:14:18.972649+0000","last_active":"2026-03-10T10:14:42.280591+0000","last_peered":"2026-03-10T10:14:42.280591+0000","last_clean":"2026-03-10T10:14:42.280591+0000","last_became_active":"2026-03-10T10:14:18.972217+0000","last_became_peered":"2026-03-10T10:14:18.972217+0000","last_unstale":"2026-03-10T10:14:42.280591+0000","last_undegraded":"2026-03-10T10:14:42.280591+0000","last_fullsized":"2026-03-10T10:14:42.280591+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:50:56.085011+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986798+0000","last_change":"2026-03-10T10:14:12.941700+0000","last_active":"2026-03-10T10:14:42.986798+0000","last_peered":"2026-03-10T10:14:42.986798+0000","last_clean":"2026-03-10T10:14:42.986798+0000","last_became_active":"2026-03-10T10:14:12.941210+0000","last_became_peered":"2026-03-10T10:14:12.941210+0000","last_unstale":"2026-03-10T10:14:42.986798+0000","last_undegraded":"2026-03-10T10:14:42.986798+0000","last_fullsized":"2026-03-10T10:14:42.986798+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:50:29.693383+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"60'15","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992269+0000","last_change":"2026-03-10T10:14:14.949466+0000","last_active":"2026-03-10T10:14:42.992269+0000","last_peered":"2026-03-10T10:14:42.992269+0000","last_clean":"2026-03-10T10:14:42.992269+0000","last_became_active":"2026-03-10T10:14:14.949390+0000","last_became_peered":"2026-03-10T10:14:14.949390+0000","last_unstale":"2026-03-10T10:14:42.992269+0000","last_undegraded":"2026-03-10T10:14:42.992269+0000","last_fullsized":"2026-03-10T10:14:42.992269+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:51:37.134797+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986823+0000","last_change":"2026-03-10T10:14:16.954963+0000","last_active":"2026-03-10T10:14:42.986823+0000","last_peered":"2026-03-10T10:14:42.986823+0000","last_clean":"2026-03-10T10:14:42.986823+0000","last_became_active":"2026-03-10T10:14:16.954873+0000","last_became_peered":"2026-03-10T10:14:16.954873+0000","last_unstale":"2026-03-10T10:14:42.986823+0000","last_undegraded":"2026-03-10T10:14:42.986823+0000","last_fullsized":"2026-03-10T10:14:42.986823+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:20:57.501364+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991088+0000","last_change":"2026-03-10T10:14:18.978561+0000","last_active":"2026-03-10T10:14:42.991088+0000","last_peered":"2026-03-10T10:14:42.991088+0000","last_clean":"2026-03-10T10:14:42.991088+0000","last_became_active":"2026-03-10T10:14:18.978483+0000","last_became_peered":"2026-03-10T10:14:18.978483+0000","last_unstale":"2026-03-10T10:14:42.991088+0000","last_undegraded":"2026-03-10T10:14:42.991088+0000","last_fullsized":"2026-03-10T10:14:42.991088+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:42:27.368238+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"53'1","reported_seq":43,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986710+0000","last_change":"2026-03-10T10:14:12.941543+0000","last_active":"2026-03-10T10:14:42.986710+0000","last_peered":"2026-03-10T10:14:42.986710+0000","last_clean":"2026-03-10T10:14:42.986710+0000","last_became_active":"2026-03-10T10:14:12.940992+0000","last_became_peered":"2026-03-10T10:14:12.940992+0000","last_unstale":"2026-03-10T10:14:42.986710+0000","last_undegraded":"2026-03-10T10:14:42.986710+0000","last_fullsized":"2026-03-10T10:14:42.986710+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:45:06.191731+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","version":"60'12","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991682+0000","last_change":"2026-03-10T10:14:14.949278+0000","last_active":"2026-03-10T10:14:42.991682+0000","last_peered":"2026-03-10T10:14:42.991682+0000","last_clean":"2026-03-10T10:14:42.991682+0000","last_became_active":"2026-03-10T10:14:14.948923+0000","last_became_peered":"2026-03-10T10:14:14.948923+0000","last_unstale":"2026-03-10T10:14:42.991682+0000","last_undegraded":"2026-03-10T10:14:42.991682+0000","last_fullsized":"2026-03-10T10:14:42.991682+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:02:17.323597+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991637+0000","last_change":"2026-03-10T10:14:16.965282+0000","last_active":"2026-03-10T10:14:42.991637+0000","last_peered":"2026-03-10T10:14:42.991637+0000","last_clean":"2026-03-10T10:14:42.991637+0000","last_became_active":"2026-03-10T10:14:16.965202+0000","last_became_peered":"2026-03-10T10:14:16.965202+0000","last_unstale":"2026-03-10T10:14:42.991637+0000","last_undegraded":"2026-03-10T10:14:42.991637+0000","last_fullsized":"2026-03-10T10:14:42.991637+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:37:49.368095+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992832+0000","last_change":"2026-03-10T10:14:18.961784+0000","last_active":"2026-03-10T10:14:42.992832+0000","last_peered":"2026-03-10T10:14:42.992832+0000","last_clean":"2026-03-10T10:14:42.992832+0000","last_became_active":"2026-03-10T10:14:18.961123+0000","last_became_peered":"2026-03-10T10:14:18.961123+0000","last_unstale":"2026-03-10T10:14:42.992832+0000","last_undegraded":"2026-03-10T10:14:42.992832+0000","last_fullsized":"2026-03-10T10:14:42.992832+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:23:28.011814+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a","version":"60'19","reported_seq":62,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992029+0000","last_change":"2026-03-10T10:14:14.949157+0000","last_active":"2026-03-10T10:14:42.992029+0000","last_peered":"2026-03-10T10:14:42.992029+0000","last_clean":"2026-03-10T10:14:42.992029+0000","last_became_active":"2026-03-10T10:14:14.949031+0000","last_became_peered":"2026-03-10T10:14:14.949031+0000","last_unstale":"2026-03-10T10:14:42.992029+0000","last_undegraded":"2026-03-10T10:14:42.992029+0000","last_fullsized":"2026-03-10T10:14:42.992029+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:21:24.235646+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986606+0000","last_change":"2026-03-10T10:14:12.941438+0000","last_active":"2026-03-10T10:14:42.986606+0000","last_peered":"2026-03-10T10:14:42.986606+0000","last_clean":"2026-03-10T10:14:42.986606+0000","last_became_active":"2026-03-10T10:14:12.940895+0000","last_became_peered":"2026-03-10T10:14:12.940895+0000","last_unstale":"2026-03-10T10:14:42.986606+0000","last_undegraded":"2026-03-10T10:14:42.986606+0000","last_fullsized":"2026-03-10T10:14:42.986606+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:16:13.952725+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282041+0000","last_change":"2026-03-10T10:14:16.955478+0000","last_active":"2026-03-10T10:14:42.282041+0000","last_peered":"2026-03-10T10:14:42.282041+0000","last_clean":"2026-03-10T10:14:42.282041+0000","last_became_active":"2026-03-10T10:14:16.955324+0000","last_became_peered":"2026-03-10T10:14:16.955324+0000","last_unstale":"2026-03-10T10:14:42.282041+0000","last_undegraded":"2026-03-10T10:14:42.282041+0000","last_fullsized":"2026-03-10T10:14:42.282041+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:25:48.499536+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994311+0000","last_change":"2026-03-10T10:14:18.970021+0000","last_active":"2026-03-10T10:14:42.994311+0000","last_peered":"2026-03-10T10:14:42.994311+0000","last_clean":"2026-03-10T10:14:42.994311+0000","last_became_active":"2026-03-10T10:14:18.969925+0000","last_became_peered":"2026-03-10T10:14:18.969925+0000","last_unstale":"2026-03-10T10:14:42.994311+0000","last_undegraded":"2026-03-10T10:14:42.994311+0000","last_fullsized":"2026-03-10T10:14:42.994311+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:30:12.335118+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"60'9","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385264+0000","last_change":"2026-03-10T10:14:14.955946+0000","last_active":"2026-03-10T10:14:42.385264+0000","last_peered":"2026-03-10T10:14:42.385264+0000","last_clean":"2026-03-10T10:14:42.385264+0000","last_became_active":"2026-03-10T10:14:14.955239+0000","last_became_peered":"2026-03-10T10:14:14.955239+0000","last_unstale":"2026-03-10T10:14:42.385264+0000","last_undegraded":"2026-03-10T10:14:42.385264+0000","last_fullsized":"2026-03-10T10:14:42.385264+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:04:23.905101+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281304+0000","last_change":"2026-03-10T10:14:12.946812+0000","last_active":"2026-03-10T10:14:42.281304+0000","last_peered":"2026-03-10T10:14:42.281304+0000","last_clean":"2026-03-10T10:14:42.281304+0000","last_became_active":"2026-03-10T10:14:12.946729+0000","last_became_peered":"2026-03-10T10:14:12.946729+0000","last_unstale":"2026-03-10T10:14:42.281304+0000","last_undegraded":"2026-03-10T10:14:42.281304+0000","last_fullsized":"2026-03-10T10:14:42.281304+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:26:10.694206+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d","version":"60'11","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.992596+0000","last_change":"2026-03-10T10:14:16.967875+0000","last_active":"2026-03-10T10:15:27.992596+0000","last_peered":"2026-03-10T10:15:27.992596+0000","last_clean":"2026-03-10T10:15:27.992596+0000","last_became_active":"2026-03-10T10:14:16.967673+0000","last_became_peered":"2026-03-10T10:14:16.967673+0000","last_unstale":"2026-03-10T10:15:27.992596+0000","last_undegraded":"2026-03-10T10:15:27.992596+0000","last_fullsized":"2026-03-10T10:15:27.992596+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:04:37.191568+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282420+0000","last_change":"2026-03-10T10:14:18.954832+0000","last_active":"2026-03-10T10:14:42.282420+0000","last_peered":"2026-03-10T10:14:42.282420+0000","last_clean":"2026-03-10T10:14:42.282420+0000","last_became_active":"2026-03-10T10:14:18.954684+0000","last_became_peered":"2026-03-10T10:14:18.954684+0000","last_unstale":"2026-03-10T10:14:42.282420+0000","last_undegraded":"2026-03-10T10:14:42.282420+0000","last_fullsized":"2026-03-10T10:14:42.282420+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:08:44.361192+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"60'15","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385314+0000","last_change":"2026-03-10T10:14:14.955159+0000","last_active":"2026-03-10T10:14:42.385314+0000","last_peered":"2026-03-10T10:14:42.385314+0000","last_clean":"2026-03-10T10:14:42.385314+0000","last_became_active":"2026-03-10T10:14:14.954301+0000","last_became_peered":"2026-03-10T10:14:14.954301+0000","last_unstale":"2026-03-10T10:14:42.385314+0000","last_undegraded":"2026-03-10T10:14:42.385314+0000","last_fullsized":"2026-03-10T10:14:42.385314+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:41:00.860635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.9","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281345+0000","last_change":"2026-03-10T10:14:12.947099+0000","last_active":"2026-03-10T10:14:42.281345+0000","last_peered":"2026-03-10T10:14:42.281345+0000","last_clean":"2026-03-10T10:14:42.281345+0000","last_became_active":"2026-03-10T10:14:12.947024+0000","last_became_peered":"2026-03-10T10:14:12.947024+0000","last_unstale":"2026-03-10T10:14:42.281345+0000","last_undegraded":"2026-03-10T10:14:42.281345+0000","last_fullsized":"2026-03-10T10:14:42.281345+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:54:42.251553+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"60'11","reported_seq":53,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.991449+0000","last_change":"2026-03-10T10:14:16.958269+0000","last_active":"2026-03-10T10:15:27.991449+0000","last_peered":"2026-03-10T10:15:27.991449+0000","last_clean":"2026-03-10T10:15:27.991449+0000","last_became_active":"2026-03-10T10:14:16.958204+0000","last_became_peered":"2026-03-10T10:14:16.958204+0000","last_unstale":"2026-03-10T10:15:27.991449+0000","last_undegraded":"2026-03-10T10:15:27.991449+0000","last_fullsized":"2026-03-10T10:15:27.991449+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:03:15.314313+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991218+0000","last_change":"2026-03-10T10:14:18.973526+0000","last_active":"2026-03-10T10:14:42.991218+0000","last_peered":"2026-03-10T10:14:42.991218+0000","last_clean":"2026-03-10T10:14:42.991218+0000","last_became_active":"2026-03-10T10:14:18.973262+0000","last_became_peered":"2026-03-10T10:14:18.973262+0000","last_unstale":"2026-03-10T10:14:42.991218+0000","last_undegraded":"2026-03-10T10:14:42.991218+0000","last_fullsized":"2026-03-10T10:14:42.991218+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:00:13.308609+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","version":"60'12","reported_seq":53,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.280823+0000","last_change":"2026-03-10T10:14:14.947306+0000","last_active":"2026-03-10T10:14:42.280823+0000","last_peered":"2026-03-10T10:14:42.280823+0000","last_clean":"2026-03-10T10:14:42.280823+0000","last_became_active":"2026-03-10T10:14:14.942393+0000","last_became_peered":"2026-03-10T10:14:14.942393+0000","last_unstale":"2026-03-10T10:14:42.280823+0000","last_undegraded":"2026-03-10T10:14:42.280823+0000","last_fullsized":"2026-03-10T10:14:42.280823+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:02:08.934268+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986562+0000","last_change":"2026-03-10T10:14:12.941382+0000","last_active":"2026-03-10T10:14:42.986562+0000","last_peered":"2026-03-10T10:14:42.986562+0000","last_clean":"2026-03-10T10:14:42.986562+0000","last_became_active":"2026-03-10T10:14:12.940815+0000","last_became_peered":"2026-03-10T10:14:12.940815+0000","last_unstale":"2026-03-10T10:14:42.986562+0000","last_undegraded":"2026-03-10T10:14:42.986562+0000","last_fullsized":"2026-03-10T10:14:42.986562+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:53:16.169201+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992437+0000","last_change":"2026-03-10T10:14:16.964328+0000","last_active":"2026-03-10T10:14:42.992437+0000","last_peered":"2026-03-10T10:14:42.992437+0000","last_clean":"2026-03-10T10:14:42.992437+0000","last_became_active":"2026-03-10T10:14:16.964244+0000","last_became_peered":"2026-03-10T10:14:16.964244+0000","last_unstale":"2026-03-10T10:14:42.992437+0000","last_undegraded":"2026-03-10T10:14:42.992437+0000","last_fullsized":"2026-03-10T10:14:42.992437+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:31:16.904187+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384689+0000","last_change":"2026-03-10T10:14:19.317024+0000","last_active":"2026-03-10T10:14:42.384689+0000","last_peered":"2026-03-10T10:14:42.384689+0000","last_clean":"2026-03-10T10:14:42.384689+0000","last_became_active":"2026-03-10T10:14:19.316813+0000","last_became_peered":"2026-03-10T10:14:19.316813+0000","last_unstale":"2026-03-10T10:14:42.384689+0000","last_undegraded":"2026-03-10T10:14:42.384689+0000","last_fullsized":"2026-03-10T10:14:42.384689+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:16:04.811046+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"60'12","reported_seq":49,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993542+0000","last_change":"2026-03-10T10:14:14.960776+0000","last_active":"2026-03-10T10:14:42.993542+0000","last_peered":"2026-03-10T10:14:42.993542+0000","last_clean":"2026-03-10T10:14:42.993542+0000","last_became_active":"2026-03-10T10:14:14.960394+0000","last_became_peered":"2026-03-10T10:14:14.960394+0000","last_unstale":"2026-03-10T10:14:42.993542+0000","last_undegraded":"2026-03-10T10:14:42.993542+0000","last_fullsized":"2026-03-10T10:14:42.993542+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:09:43.154149+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992200+0000","last_change":"2026-03-10T10:14:12.942165+0000","last_active":"2026-03-10T10:14:42.992200+0000","last_peered":"2026-03-10T10:14:42.992200+0000","last_clean":"2026-03-10T10:14:42.992200+0000","last_became_active":"2026-03-10T10:14:12.941947+0000","last_became_peered":"2026-03-10T10:14:12.941947+0000","last_unstale":"2026-03-10T10:14:42.992200+0000","last_undegraded":"2026-03-10T10:14:42.992200+0000","last_fullsized":"2026-03-10T10:14:42.992200+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:05:19.126898+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1","version":"60'1","reported_seq":36,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282160+0000","last_change":"2026-03-10T10:14:22.015323+0000","last_active":"2026-03-10T10:14:42.282160+0000","last_peered":"2026-03-10T10:14:42.282160+0000","last_clean":"2026-03-10T10:14:42.282160+0000","last_became_active":"2026-03-10T10:14:15.942863+0000","last_became_peered":"2026-03-10T10:14:15.942863+0000","last_unstale":"2026-03-10T10:14:42.282160+0000","last_undegraded":"2026-03-10T10:14:42.282160+0000","last_fullsized":"2026-03-10T10:14:42.282160+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_clean_scrub_stamp":"2026-03-10T10:14:14.926552+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:43:59.191092+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00026765900000000001,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"60'11","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.993007+0000","last_change":"2026-03-10T10:14:16.963181+0000","last_active":"2026-03-10T10:15:27.993007+0000","last_peered":"2026-03-10T10:15:27.993007+0000","last_clean":"2026-03-10T10:15:27.993007+0000","last_became_active":"2026-03-10T10:14:16.963039+0000","last_became_peered":"2026-03-10T10:14:16.963039+0000","last_unstale":"2026-03-10T10:15:27.993007+0000","last_undegraded":"2026-03-10T10:15:27.993007+0000","last_fullsized":"2026-03-10T10:15:27.993007+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:46:11.908855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986473+0000","last_change":"2026-03-10T10:14:19.318858+0000","last_active":"2026-03-10T10:14:42.986473+0000","last_peered":"2026-03-10T10:14:42.986473+0000","last_clean":"2026-03-10T10:14:42.986473+0000","last_became_active":"2026-03-10T10:14:19.318255+0000","last_became_peered":"2026-03-10T10:14:19.318255+0000","last_unstale":"2026-03-10T10:14:42.986473+0000","last_undegraded":"2026-03-10T10:14:42.986473+0000","last_fullsized":"2026-03-10T10:14:42.986473+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:11:41.370247+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.7","version":"60'13","reported_seq":58,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384934+0000","last_change":"2026-03-10T10:14:14.955870+0000","last_active":"2026-03-10T10:14:42.384934+0000","last_peered":"2026-03-10T10:14:42.384934+0000","last_clean":"2026-03-10T10:14:42.384934+0000","last_became_active":"2026-03-10T10:14:14.955610+0000","last_became_peered":"2026-03-10T10:14:14.955610+0000","last_unstale":"2026-03-10T10:14:42.384934+0000","last_undegraded":"2026-03-10T10:14:42.384934+0000","last_fullsized":"2026-03-10T10:14:42.384934+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:47:50.861777+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"53'1","reported_seq":36,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281390+0000","last_change":"2026-03-10T10:14:12.926530+0000","last_active":"2026-03-10T10:14:42.281390+0000","last_peered":"2026-03-10T10:14:42.281390+0000","last_clean":"2026-03-10T10:14:42.281390+0000","last_became_active":"2026-03-10T10:14:12.926445+0000","last_became_peered":"2026-03-10T10:14:12.926445+0000","last_unstale":"2026-03-10T10:14:42.281390+0000","last_undegraded":"2026-03-10T10:14:42.281390+0000","last_fullsized":"2026-03-10T10:14:42.281390+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:18:14.210414+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"62'5","reported_seq":110,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:32.184882+0000","last_change":"2026-03-10T10:14:22.472330+0000","last_active":"2026-03-10T10:15:32.184882+0000","last_peered":"2026-03-10T10:15:32.184882+0000","last_clean":"2026-03-10T10:15:32.184882+0000","last_became_active":"2026-03-10T10:14:15.952486+0000","last_became_peered":"2026-03-10T10:14:15.952486+0000","last_unstale":"2026-03-10T10:15:32.184882+0000","last_undegraded":"2026-03-10T10:15:32.184882+0000","last_fullsized":"2026-03-10T10:15:32.184882+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_clean_scrub_stamp":"2026-03-10T10:14:14.926552+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:39:05.926790+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.001322606,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":71,"num_read_kb":66,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":26,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281874+0000","last_change":"2026-03-10T10:14:16.952046+0000","last_active":"2026-03-10T10:14:42.281874+0000","last_peered":"2026-03-10T10:14:42.281874+0000","last_clean":"2026-03-10T10:14:42.281874+0000","last_became_active":"2026-03-10T10:14:16.951818+0000","last_became_peered":"2026-03-10T10:14:16.951818+0000","last_unstale":"2026-03-10T10:14:42.281874+0000","last_undegraded":"2026-03-10T10:14:42.281874+0000","last_fullsized":"2026-03-10T10:14:42.281874+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:30:07.603418+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281869+0000","last_change":"2026-03-10T10:14:18.970338+0000","last_active":"2026-03-10T10:14:42.281869+0000","last_peered":"2026-03-10T10:14:42.281869+0000","last_clean":"2026-03-10T10:14:42.281869+0000","last_became_active":"2026-03-10T10:14:18.969492+0000","last_became_peered":"2026-03-10T10:14:18.969492+0000","last_unstale":"2026-03-10T10:14:42.281869+0000","last_undegraded":"2026-03-10T10:14:42.281869+0000","last_fullsized":"2026-03-10T10:14:42.281869+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:03:19.247888+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"60'30","reported_seq":98,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.992528+0000","last_change":"2026-03-10T10:14:14.944853+0000","last_active":"2026-03-10T10:15:27.992528+0000","last_peered":"2026-03-10T10:15:27.992528+0000","last_clean":"2026-03-10T10:15:27.992528+0000","last_became_active":"2026-03-10T10:14:14.944618+0000","last_became_peered":"2026-03-10T10:14:14.944618+0000","last_unstale":"2026-03-10T10:15:27.992528+0000","last_undegraded":"2026-03-10T10:15:27.992528+0000","last_fullsized":"2026-03-10T10:15:27.992528+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:03:21.346423+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.5","version":"53'1","reported_seq":43,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986532+0000","last_change":"2026-03-10T10:14:12.941298+0000","last_active":"2026-03-10T10:14:42.986532+0000","last_peered":"2026-03-10T10:14:42.986532+0000","last_clean":"2026-03-10T10:14:42.986532+0000","last_became_active":"2026-03-10T10:14:12.940714+0000","last_became_peered":"2026-03-10T10:14:12.940714+0000","last_unstale":"2026-03-10T10:14:42.986532+0000","last_undegraded":"2026-03-10T10:14:42.986532+0000","last_fullsized":"2026-03-10T10:14:42.986532+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:38:59.576851+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992459+0000","last_change":"2026-03-10T10:14:16.955779+0000","last_active":"2026-03-10T10:14:42.992459+0000","last_peered":"2026-03-10T10:14:42.992459+0000","last_clean":"2026-03-10T10:14:42.992459+0000","last_became_active":"2026-03-10T10:14:16.955647+0000","last_became_peered":"2026-03-10T10:14:16.955647+0000","last_unstale":"2026-03-10T10:14:42.992459+0000","last_undegraded":"2026-03-10T10:14:42.992459+0000","last_fullsized":"2026-03-10T10:14:42.992459+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:34:59.151202+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.280855+0000","last_change":"2026-03-10T10:14:19.315139+0000","last_active":"2026-03-10T10:14:42.280855+0000","last_peered":"2026-03-10T10:14:42.280855+0000","last_clean":"2026-03-10T10:14:42.280855+0000","last_became_active":"2026-03-10T10:14:19.315010+0000","last_became_peered":"2026-03-10T10:14:19.315010+0000","last_unstale":"2026-03-10T10:14:42.280855+0000","last_undegraded":"2026-03-10T10:14:42.280855+0000","last_fullsized":"2026-03-10T10:14:42.280855+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:27:00.038117+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"60'16","reported_seq":70,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.991290+0000","last_change":"2026-03-10T10:14:14.950231+0000","last_active":"2026-03-10T10:15:27.991290+0000","last_peered":"2026-03-10T10:15:27.991290+0000","last_clean":"2026-03-10T10:15:27.991290+0000","last_became_active":"2026-03-10T10:14:14.950150+0000","last_became_peered":"2026-03-10T10:14:14.950150+0000","last_unstale":"2026-03-10T10:15:27.991290+0000","last_undegraded":"2026-03-10T10:15:27.991290+0000","last_fullsized":"2026-03-10T10:15:27.991290+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:32:13.734797+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":41,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281453+0000","last_change":"2026-03-10T10:14:31.273016+0000","last_active":"2026-03-10T10:14:42.281453+0000","last_peered":"2026-03-10T10:14:42.281453+0000","last_clean":"2026-03-10T10:14:42.281453+0000","last_became_active":"2026-03-10T10:14:31.272884+0000","last_became_peered":"2026-03-10T10:14:31.272884+0000","last_unstale":"2026-03-10T10:14:42.281453+0000","last_undegraded":"2026-03-10T10:14:42.281453+0000","last_fullsized":"2026-03-10T10:14:42.281453+0000","mapping_epoch":63,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":64,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:48:36.556896+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,2],"acting":[1,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"62'2","reported_seq":38,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281478+0000","last_change":"2026-03-10T10:14:22.013588+0000","last_active":"2026-03-10T10:14:42.281478+0000","last_peered":"2026-03-10T10:14:42.281478+0000","last_clean":"2026-03-10T10:14:42.281478+0000","last_became_active":"2026-03-10T10:14:15.948858+0000","last_became_peered":"2026-03-10T10:14:15.948858+0000","last_unstale":"2026-03-10T10:14:42.281478+0000","last_undegraded":"2026-03-10T10:14:42.281478+0000","last_fullsized":"2026-03-10T10:14:42.281478+0000","mapping_epoch":55,"log_start":"0'0","ondisk_log_start":"0'0","created":55,"last_epoch_clean":56,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:14.926552+0000","last_clean_scrub_stamp":"2026-03-10T10:14:14.926552+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:31:29.091561+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000606676,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.3","version":"60'11","reported_seq":57,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.992765+0000","last_change":"2026-03-10T10:14:16.958647+0000","last_active":"2026-03-10T10:15:27.992765+0000","last_peered":"2026-03-10T10:15:27.992765+0000","last_clean":"2026-03-10T10:15:27.992765+0000","last_became_active":"2026-03-10T10:14:16.958542+0000","last_became_peered":"2026-03-10T10:14:16.958542+0000","last_unstale":"2026-03-10T10:15:27.992765+0000","last_undegraded":"2026-03-10T10:15:27.992765+0000","last_fullsized":"2026-03-10T10:15:27.992765+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:34:30.260681+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992702+0000","last_change":"2026-03-10T10:14:18.967980+0000","last_active":"2026-03-10T10:14:42.992702+0000","last_peered":"2026-03-10T10:14:42.992702+0000","last_clean":"2026-03-10T10:14:42.992702+0000","last_became_active":"2026-03-10T10:14:18.967899+0000","last_became_peered":"2026-03-10T10:14:18.967899+0000","last_unstale":"2026-03-10T10:14:42.992702+0000","last_undegraded":"2026-03-10T10:14:42.992702+0000","last_fullsized":"2026-03-10T10:14:42.992702+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:57:04.171484+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"60'19","reported_seq":66,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281182+0000","last_change":"2026-03-10T10:14:14.958879+0000","last_active":"2026-03-10T10:14:42.281182+0000","last_peered":"2026-03-10T10:14:42.281182+0000","last_clean":"2026-03-10T10:14:42.281182+0000","last_became_active":"2026-03-10T10:14:14.958766+0000","last_became_peered":"2026-03-10T10:14:14.958766+0000","last_unstale":"2026-03-10T10:14:42.281182+0000","last_undegraded":"2026-03-10T10:14:42.281182+0000","last_fullsized":"2026-03-10T10:14:42.281182+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:42:35.783489+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992202+0000","last_change":"2026-03-10T10:14:12.939028+0000","last_active":"2026-03-10T10:14:42.992202+0000","last_peered":"2026-03-10T10:14:42.992202+0000","last_clean":"2026-03-10T10:14:42.992202+0000","last_became_active":"2026-03-10T10:14:12.938815+0000","last_became_peered":"2026-03-10T10:14:12.938815+0000","last_unstale":"2026-03-10T10:14:42.992202+0000","last_undegraded":"2026-03-10T10:14:42.992202+0000","last_fullsized":"2026-03-10T10:14:42.992202+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:32:28.599259+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993148+0000","last_change":"2026-03-10T10:14:16.965697+0000","last_active":"2026-03-10T10:14:42.993148+0000","last_peered":"2026-03-10T10:14:42.993148+0000","last_clean":"2026-03-10T10:14:42.993148+0000","last_became_active":"2026-03-10T10:14:16.965499+0000","last_became_peered":"2026-03-10T10:14:16.965499+0000","last_unstale":"2026-03-10T10:14:42.993148+0000","last_undegraded":"2026-03-10T10:14:42.993148+0000","last_fullsized":"2026-03-10T10:14:42.993148+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:01:22.833682+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384739+0000","last_change":"2026-03-10T10:14:18.977453+0000","last_active":"2026-03-10T10:14:42.384739+0000","last_peered":"2026-03-10T10:14:42.384739+0000","last_clean":"2026-03-10T10:14:42.384739+0000","last_became_active":"2026-03-10T10:14:18.977324+0000","last_became_peered":"2026-03-10T10:14:18.977324+0000","last_unstale":"2026-03-10T10:14:42.384739+0000","last_undegraded":"2026-03-10T10:14:42.384739+0000","last_fullsized":"2026-03-10T10:14:42.384739+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:46:31.006057+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","version":"60'18","reported_seq":63,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.280955+0000","last_change":"2026-03-10T10:14:14.944195+0000","last_active":"2026-03-10T10:14:42.280955+0000","last_peered":"2026-03-10T10:14:42.280955+0000","last_clean":"2026-03-10T10:14:42.280955+0000","last_became_active":"2026-03-10T10:14:14.944112+0000","last_became_peered":"2026-03-10T10:14:14.944112+0000","last_unstale":"2026-03-10T10:14:42.280955+0000","last_undegraded":"2026-03-10T10:14:42.280955+0000","last_fullsized":"2026-03-10T10:14:42.280955+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:44:23.197174+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994741+0000","last_change":"2026-03-10T10:14:12.946155+0000","last_active":"2026-03-10T10:14:42.994741+0000","last_peered":"2026-03-10T10:14:42.994741+0000","last_clean":"2026-03-10T10:14:42.994741+0000","last_became_active":"2026-03-10T10:14:12.946062+0000","last_became_peered":"2026-03-10T10:14:12.946062+0000","last_unstale":"2026-03-10T10:14:42.994741+0000","last_undegraded":"2026-03-10T10:14:42.994741+0000","last_fullsized":"2026-03-10T10:14:42.994741+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:00:36.052036+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994782+0000","last_change":"2026-03-10T10:14:16.967851+0000","last_active":"2026-03-10T10:14:42.994782+0000","last_peered":"2026-03-10T10:14:42.994782+0000","last_clean":"2026-03-10T10:14:42.994782+0000","last_became_active":"2026-03-10T10:14:16.967761+0000","last_became_peered":"2026-03-10T10:14:16.967761+0000","last_unstale":"2026-03-10T10:14:42.994782+0000","last_undegraded":"2026-03-10T10:14:42.994782+0000","last_fullsized":"2026-03-10T10:14:42.994782+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:21:26.995530+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.987234+0000","last_change":"2026-03-10T10:14:19.318784+0000","last_active":"2026-03-10T10:14:42.987234+0000","last_peered":"2026-03-10T10:14:42.987234+0000","last_clean":"2026-03-10T10:14:42.987234+0000","last_became_active":"2026-03-10T10:14:19.317929+0000","last_became_peered":"2026-03-10T10:14:19.317929+0000","last_unstale":"2026-03-10T10:14:42.987234+0000","last_undegraded":"2026-03-10T10:14:42.987234+0000","last_fullsized":"2026-03-10T10:14:42.987234+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:55:58.247403+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"60'14","reported_seq":52,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993404+0000","last_change":"2026-03-10T10:14:14.960611+0000","last_active":"2026-03-10T10:14:42.993404+0000","last_peered":"2026-03-10T10:14:42.993404+0000","last_clean":"2026-03-10T10:14:42.993404+0000","last_became_active":"2026-03-10T10:14:14.960268+0000","last_became_peered":"2026-03-10T10:14:14.960268+0000","last_unstale":"2026-03-10T10:14:42.993404+0000","last_undegraded":"2026-03-10T10:14:42.993404+0000","last_fullsized":"2026-03-10T10:14:42.993404+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:50:37.268853+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986774+0000","last_change":"2026-03-10T10:14:12.941625+0000","last_active":"2026-03-10T10:14:42.986774+0000","last_peered":"2026-03-10T10:14:42.986774+0000","last_clean":"2026-03-10T10:14:42.986774+0000","last_became_active":"2026-03-10T10:14:12.941099+0000","last_became_peered":"2026-03-10T10:14:12.941099+0000","last_unstale":"2026-03-10T10:14:42.986774+0000","last_undegraded":"2026-03-10T10:14:42.986774+0000","last_fullsized":"2026-03-10T10:14:42.986774+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:05:02.429935+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992309+0000","last_change":"2026-03-10T10:14:16.956321+0000","last_active":"2026-03-10T10:14:42.992309+0000","last_peered":"2026-03-10T10:14:42.992309+0000","last_clean":"2026-03-10T10:14:42.992309+0000","last_became_active":"2026-03-10T10:14:16.956043+0000","last_became_peered":"2026-03-10T10:14:16.956043+0000","last_unstale":"2026-03-10T10:14:42.992309+0000","last_undegraded":"2026-03-10T10:14:42.992309+0000","last_fullsized":"2026-03-10T10:14:42.992309+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:47:13.633995+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281144+0000","last_change":"2026-03-10T10:14:18.972971+0000","last_active":"2026-03-10T10:14:42.281144+0000","last_peered":"2026-03-10T10:14:42.281144+0000","last_clean":"2026-03-10T10:14:42.281144+0000","last_became_active":"2026-03-10T10:14:18.972868+0000","last_became_peered":"2026-03-10T10:14:18.972868+0000","last_unstale":"2026-03-10T10:14:42.281144+0000","last_undegraded":"2026-03-10T10:14:42.281144+0000","last_fullsized":"2026-03-10T10:14:42.281144+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:05:19.845157+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"60'10","reported_seq":46,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385654+0000","last_change":"2026-03-10T10:14:14.956516+0000","last_active":"2026-03-10T10:14:42.385654+0000","last_peered":"2026-03-10T10:14:42.385654+0000","last_clean":"2026-03-10T10:14:42.385654+0000","last_became_active":"2026-03-10T10:14:14.956186+0000","last_became_peered":"2026-03-10T10:14:14.956186+0000","last_unstale":"2026-03-10T10:14:42.385654+0000","last_undegraded":"2026-03-10T10:14:42.385654+0000","last_fullsized":"2026-03-10T10:14:42.385654+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:59:12.794770+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991734+0000","last_change":"2026-03-10T10:14:12.950942+0000","last_active":"2026-03-10T10:14:42.991734+0000","last_peered":"2026-03-10T10:14:42.991734+0000","last_clean":"2026-03-10T10:14:42.991734+0000","last_became_active":"2026-03-10T10:14:12.950754+0000","last_became_peered":"2026-03-10T10:14:12.950754+0000","last_unstale":"2026-03-10T10:14:42.991734+0000","last_undegraded":"2026-03-10T10:14:42.991734+0000","last_fullsized":"2026-03-10T10:14:42.991734+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:40:30.632758+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"65'39","reported_seq":70,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:44.358091+0000","last_change":"2026-03-10T10:13:52.067583+0000","last_active":"2026-03-10T10:14:44.358091+0000","last_peered":"2026-03-10T10:14:44.358091+0000","last_clean":"2026-03-10T10:14:44.358091+0000","last_became_active":"2026-03-10T10:13:51.757788+0000","last_became_peered":"2026-03-10T10:13:51.757788+0000","last_unstale":"2026-03-10T10:14:44.358091+0000","last_undegraded":"2026-03-10T10:14:44.358091+0000","last_fullsized":"2026-03-10T10:14:44.358091+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:11:04.132205+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:11:04.132205+0000","last_clean_scrub_stamp":"2026-03-10T10:11:04.132205+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:59:42.231015+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986685+0000","last_change":"2026-03-10T10:14:16.962305+0000","last_active":"2026-03-10T10:14:42.986685+0000","last_peered":"2026-03-10T10:14:42.986685+0000","last_clean":"2026-03-10T10:14:42.986685+0000","last_became_active":"2026-03-10T10:14:16.962179+0000","last_became_peered":"2026-03-10T10:14:16.962179+0000","last_unstale":"2026-03-10T10:14:42.986685+0000","last_undegraded":"2026-03-10T10:14:42.986685+0000","last_fullsized":"2026-03-10T10:14:42.986685+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:22:08.289700+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991169+0000","last_change":"2026-03-10T10:14:18.978101+0000","last_active":"2026-03-10T10:14:42.991169+0000","last_peered":"2026-03-10T10:14:42.991169+0000","last_clean":"2026-03-10T10:14:42.991169+0000","last_became_active":"2026-03-10T10:14:18.977737+0000","last_became_peered":"2026-03-10T10:14:18.977737+0000","last_unstale":"2026-03-10T10:14:42.991169+0000","last_undegraded":"2026-03-10T10:14:42.991169+0000","last_fullsized":"2026-03-10T10:14:42.991169+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:07:05.313991+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"60'17","reported_seq":59,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986230+0000","last_change":"2026-03-10T10:14:14.945052+0000","last_active":"2026-03-10T10:14:42.986230+0000","last_peered":"2026-03-10T10:14:42.986230+0000","last_clean":"2026-03-10T10:14:42.986230+0000","last_became_active":"2026-03-10T10:14:14.944793+0000","last_became_peered":"2026-03-10T10:14:14.944793+0000","last_unstale":"2026-03-10T10:14:42.986230+0000","last_undegraded":"2026-03-10T10:14:42.986230+0000","last_fullsized":"2026-03-10T10:14:42.986230+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:39:17.289692+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994685+0000","last_change":"2026-03-10T10:14:12.937665+0000","last_active":"2026-03-10T10:14:42.994685+0000","last_peered":"2026-03-10T10:14:42.994685+0000","last_clean":"2026-03-10T10:14:42.994685+0000","last_became_active":"2026-03-10T10:14:12.937571+0000","last_became_peered":"2026-03-10T10:14:12.937571+0000","last_unstale":"2026-03-10T10:14:42.994685+0000","last_undegraded":"2026-03-10T10:14:42.994685+0000","last_fullsized":"2026-03-10T10:14:42.994685+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:01:05.664905+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994696+0000","last_change":"2026-03-10T10:14:16.967783+0000","last_active":"2026-03-10T10:14:42.994696+0000","last_peered":"2026-03-10T10:14:42.994696+0000","last_clean":"2026-03-10T10:14:42.994696+0000","last_became_active":"2026-03-10T10:14:16.967592+0000","last_became_peered":"2026-03-10T10:14:16.967592+0000","last_unstale":"2026-03-10T10:14:42.994696+0000","last_undegraded":"2026-03-10T10:14:42.994696+0000","last_fullsized":"2026-03-10T10:14:42.994696+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:56:12.053751+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"60'1","reported_seq":24,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986093+0000","last_change":"2026-03-10T10:14:18.969721+0000","last_active":"2026-03-10T10:14:42.986093+0000","last_peered":"2026-03-10T10:14:42.986093+0000","last_clean":"2026-03-10T10:14:42.986093+0000","last_became_active":"2026-03-10T10:14:18.969642+0000","last_became_peered":"2026-03-10T10:14:18.969642+0000","last_unstale":"2026-03-10T10:14:42.986093+0000","last_undegraded":"2026-03-10T10:14:42.986093+0000","last_fullsized":"2026-03-10T10:14:42.986093+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:41:55.774753+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"60'10","reported_seq":46,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992229+0000","last_change":"2026-03-10T10:14:14.949789+0000","last_active":"2026-03-10T10:14:42.992229+0000","last_peered":"2026-03-10T10:14:42.992229+0000","last_clean":"2026-03-10T10:14:42.992229+0000","last_became_active":"2026-03-10T10:14:14.949715+0000","last_became_peered":"2026-03-10T10:14:14.949715+0000","last_unstale":"2026-03-10T10:14:42.992229+0000","last_undegraded":"2026-03-10T10:14:42.992229+0000","last_fullsized":"2026-03-10T10:14:42.992229+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:51:04.661751+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281258+0000","last_change":"2026-03-10T10:14:12.943045+0000","last_active":"2026-03-10T10:14:42.281258+0000","last_peered":"2026-03-10T10:14:42.281258+0000","last_clean":"2026-03-10T10:14:42.281258+0000","last_became_active":"2026-03-10T10:14:12.942958+0000","last_became_peered":"2026-03-10T10:14:12.942958+0000","last_unstale":"2026-03-10T10:14:42.281258+0000","last_undegraded":"2026-03-10T10:14:42.281258+0000","last_fullsized":"2026-03-10T10:14:42.281258+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:58:58.899346+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.995041+0000","last_change":"2026-03-10T10:14:16.973647+0000","last_active":"2026-03-10T10:14:42.995041+0000","last_peered":"2026-03-10T10:14:42.995041+0000","last_clean":"2026-03-10T10:14:42.995041+0000","last_became_active":"2026-03-10T10:14:16.973522+0000","last_became_peered":"2026-03-10T10:14:16.973522+0000","last_unstale":"2026-03-10T10:14:42.995041+0000","last_undegraded":"2026-03-10T10:14:42.995041+0000","last_fullsized":"2026-03-10T10:14:42.995041+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:07:54.494874+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992874+0000","last_change":"2026-03-10T10:14:18.961920+0000","last_active":"2026-03-10T10:14:42.992874+0000","last_peered":"2026-03-10T10:14:42.992874+0000","last_clean":"2026-03-10T10:14:42.992874+0000","last_became_active":"2026-03-10T10:14:18.961205+0000","last_became_peered":"2026-03-10T10:14:18.961205+0000","last_unstale":"2026-03-10T10:14:42.992874+0000","last_undegraded":"2026-03-10T10:14:42.992874+0000","last_fullsized":"2026-03-10T10:14:42.992874+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:16:05.935516+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","version":"60'15","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986847+0000","last_change":"2026-03-10T10:14:14.951822+0000","last_active":"2026-03-10T10:14:42.986847+0000","last_peered":"2026-03-10T10:14:42.986847+0000","last_clean":"2026-03-10T10:14:42.986847+0000","last_became_active":"2026-03-10T10:14:14.951229+0000","last_became_peered":"2026-03-10T10:14:14.951229+0000","last_unstale":"2026-03-10T10:14:42.986847+0000","last_undegraded":"2026-03-10T10:14:42.986847+0000","last_fullsized":"2026-03-10T10:14:42.986847+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:34:31.646212+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994641+0000","last_change":"2026-03-10T10:14:12.946410+0000","last_active":"2026-03-10T10:14:42.994641+0000","last_peered":"2026-03-10T10:14:42.994641+0000","last_clean":"2026-03-10T10:14:42.994641+0000","last_became_active":"2026-03-10T10:14:12.946291+0000","last_became_peered":"2026-03-10T10:14:12.946291+0000","last_unstale":"2026-03-10T10:14:42.994641+0000","last_undegraded":"2026-03-10T10:14:42.994641+0000","last_fullsized":"2026-03-10T10:14:42.994641+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:16:03.035546+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"60'11","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.991218+0000","last_change":"2026-03-10T10:14:16.957980+0000","last_active":"2026-03-10T10:15:27.991218+0000","last_peered":"2026-03-10T10:15:27.991218+0000","last_clean":"2026-03-10T10:15:27.991218+0000","last_became_active":"2026-03-10T10:14:16.957669+0000","last_became_peered":"2026-03-10T10:14:16.957669+0000","last_unstale":"2026-03-10T10:15:27.991218+0000","last_undegraded":"2026-03-10T10:15:27.991218+0000","last_fullsized":"2026-03-10T10:15:27.991218+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:28:49.140275+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991447+0000","last_change":"2026-03-10T10:14:19.319997+0000","last_active":"2026-03-10T10:14:42.991447+0000","last_peered":"2026-03-10T10:14:42.991447+0000","last_clean":"2026-03-10T10:14:42.991447+0000","last_became_active":"2026-03-10T10:14:19.319913+0000","last_became_peered":"2026-03-10T10:14:19.319913+0000","last_unstale":"2026-03-10T10:14:42.991447+0000","last_undegraded":"2026-03-10T10:14:42.991447+0000","last_fullsized":"2026-03-10T10:14:42.991447+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:16:08.646739+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"60'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986928+0000","last_change":"2026-03-10T10:14:14.951764+0000","last_active":"2026-03-10T10:14:42.986928+0000","last_peered":"2026-03-10T10:14:42.986928+0000","last_clean":"2026-03-10T10:14:42.986928+0000","last_became_active":"2026-03-10T10:14:14.951106+0000","last_became_peered":"2026-03-10T10:14:14.951106+0000","last_unstale":"2026-03-10T10:14:42.986928+0000","last_undegraded":"2026-03-10T10:14:42.986928+0000","last_fullsized":"2026-03-10T10:14:42.986928+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:50:19.320521+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"53'2","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282020+0000","last_change":"2026-03-10T10:14:12.946050+0000","last_active":"2026-03-10T10:14:42.282020+0000","last_peered":"2026-03-10T10:14:42.282020+0000","last_clean":"2026-03-10T10:14:42.282020+0000","last_became_active":"2026-03-10T10:14:12.945736+0000","last_became_peered":"2026-03-10T10:14:12.945736+0000","last_unstale":"2026-03-10T10:14:42.282020+0000","last_undegraded":"2026-03-10T10:14:42.282020+0000","last_fullsized":"2026-03-10T10:14:42.282020+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T14:32:51.576554+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.8","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994992+0000","last_change":"2026-03-10T10:14:16.975977+0000","last_active":"2026-03-10T10:14:42.994992+0000","last_peered":"2026-03-10T10:14:42.994992+0000","last_clean":"2026-03-10T10:14:42.994992+0000","last_became_active":"2026-03-10T10:14:16.975898+0000","last_became_peered":"2026-03-10T10:14:16.975898+0000","last_unstale":"2026-03-10T10:14:42.994992+0000","last_undegraded":"2026-03-10T10:14:42.994992+0000","last_fullsized":"2026-03-10T10:14:42.994992+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:48:03.299964+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384756+0000","last_change":"2026-03-10T10:14:18.975880+0000","last_active":"2026-03-10T10:14:42.384756+0000","last_peered":"2026-03-10T10:14:42.384756+0000","last_clean":"2026-03-10T10:14:42.384756+0000","last_became_active":"2026-03-10T10:14:18.975790+0000","last_became_peered":"2026-03-10T10:14:18.975790+0000","last_unstale":"2026-03-10T10:14:42.384756+0000","last_undegraded":"2026-03-10T10:14:42.384756+0000","last_fullsized":"2026-03-10T10:14:42.384756+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:09:10.136046+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"60'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986175+0000","last_change":"2026-03-10T10:14:14.951923+0000","last_active":"2026-03-10T10:14:42.986175+0000","last_peered":"2026-03-10T10:14:42.986175+0000","last_clean":"2026-03-10T10:14:42.986175+0000","last_became_active":"2026-03-10T10:14:14.951543+0000","last_became_peered":"2026-03-10T10:14:14.951543+0000","last_unstale":"2026-03-10T10:14:42.986175+0000","last_undegraded":"2026-03-10T10:14:42.986175+0000","last_fullsized":"2026-03-10T10:14:42.986175+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:23:09.745001+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.10","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994616+0000","last_change":"2026-03-10T10:14:12.932000+0000","last_active":"2026-03-10T10:14:42.994616+0000","last_peered":"2026-03-10T10:14:42.994616+0000","last_clean":"2026-03-10T10:14:42.994616+0000","last_became_active":"2026-03-10T10:14:12.931852+0000","last_became_peered":"2026-03-10T10:14:12.931852+0000","last_unstale":"2026-03-10T10:14:42.994616+0000","last_undegraded":"2026-03-10T10:14:42.994616+0000","last_fullsized":"2026-03-10T10:14:42.994616+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:44:40.121048+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384850+0000","last_change":"2026-03-10T10:14:16.953076+0000","last_active":"2026-03-10T10:14:42.384850+0000","last_peered":"2026-03-10T10:14:42.384850+0000","last_clean":"2026-03-10T10:14:42.384850+0000","last_became_active":"2026-03-10T10:14:16.952909+0000","last_became_peered":"2026-03-10T10:14:16.952909+0000","last_unstale":"2026-03-10T10:14:42.384850+0000","last_undegraded":"2026-03-10T10:14:42.384850+0000","last_fullsized":"2026-03-10T10:14:42.384850+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:00:34.156358+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.994464+0000","last_change":"2026-03-10T10:14:18.960071+0000","last_active":"2026-03-10T10:14:42.994464+0000","last_peered":"2026-03-10T10:14:42.994464+0000","last_clean":"2026-03-10T10:14:42.994464+0000","last_became_active":"2026-03-10T10:14:18.959929+0000","last_became_peered":"2026-03-10T10:14:18.959929+0000","last_unstale":"2026-03-10T10:14:42.994464+0000","last_undegraded":"2026-03-10T10:14:42.994464+0000","last_fullsized":"2026-03-10T10:14:42.994464+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:45:06.742064+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","version":"60'4","reported_seq":37,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992138+0000","last_change":"2026-03-10T10:14:14.945697+0000","last_active":"2026-03-10T10:14:42.992138+0000","last_peered":"2026-03-10T10:14:42.992138+0000","last_clean":"2026-03-10T10:14:42.992138+0000","last_became_active":"2026-03-10T10:14:14.945588+0000","last_became_peered":"2026-03-10T10:14:14.945588+0000","last_unstale":"2026-03-10T10:14:42.992138+0000","last_undegraded":"2026-03-10T10:14:42.992138+0000","last_fullsized":"2026-03-10T10:14:42.992138+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:54:00.384443+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992114+0000","last_change":"2026-03-10T10:14:12.926065+0000","last_active":"2026-03-10T10:14:42.992114+0000","last_peered":"2026-03-10T10:14:42.992114+0000","last_clean":"2026-03-10T10:14:42.992114+0000","last_became_active":"2026-03-10T10:14:12.925929+0000","last_became_peered":"2026-03-10T10:14:12.925929+0000","last_unstale":"2026-03-10T10:14:42.992114+0000","last_undegraded":"2026-03-10T10:14:42.992114+0000","last_fullsized":"2026-03-10T10:14:42.992114+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:44:36.158559+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992399+0000","last_change":"2026-03-10T10:14:16.971939+0000","last_active":"2026-03-10T10:14:42.992399+0000","last_peered":"2026-03-10T10:14:42.992399+0000","last_clean":"2026-03-10T10:14:42.992399+0000","last_became_active":"2026-03-10T10:14:16.971858+0000","last_became_peered":"2026-03-10T10:14:16.971858+0000","last_unstale":"2026-03-10T10:14:42.992399+0000","last_undegraded":"2026-03-10T10:14:42.992399+0000","last_fullsized":"2026-03-10T10:14:42.992399+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:35:59.126994+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.987281+0000","last_change":"2026-03-10T10:14:19.318956+0000","last_active":"2026-03-10T10:14:42.987281+0000","last_peered":"2026-03-10T10:14:42.987281+0000","last_clean":"2026-03-10T10:14:42.987281+0000","last_became_active":"2026-03-10T10:14:19.316531+0000","last_became_peered":"2026-03-10T10:14:19.316531+0000","last_unstale":"2026-03-10T10:14:42.987281+0000","last_undegraded":"2026-03-10T10:14:42.987281+0000","last_fullsized":"2026-03-10T10:14:42.987281+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:00:22.998681+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"60'11","reported_seq":50,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986277+0000","last_change":"2026-03-10T10:14:14.951872+0000","last_active":"2026-03-10T10:14:42.986277+0000","last_peered":"2026-03-10T10:14:42.986277+0000","last_clean":"2026-03-10T10:14:42.986277+0000","last_became_active":"2026-03-10T10:14:14.951441+0000","last_became_peered":"2026-03-10T10:14:14.951441+0000","last_unstale":"2026-03-10T10:14:42.986277+0000","last_undegraded":"2026-03-10T10:14:42.986277+0000","last_fullsized":"2026-03-10T10:14:42.986277+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:15:02.819585+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992056+0000","last_change":"2026-03-10T10:14:12.950885+0000","last_active":"2026-03-10T10:14:42.992056+0000","last_peered":"2026-03-10T10:14:42.992056+0000","last_clean":"2026-03-10T10:14:42.992056+0000","last_became_active":"2026-03-10T10:14:12.950733+0000","last_became_peered":"2026-03-10T10:14:12.950733+0000","last_unstale":"2026-03-10T10:14:42.992056+0000","last_undegraded":"2026-03-10T10:14:42.992056+0000","last_fullsized":"2026-03-10T10:14:42.992056+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:50:38.523969+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.15","version":"60'11","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.991209+0000","last_change":"2026-03-10T10:14:16.965142+0000","last_active":"2026-03-10T10:15:27.991209+0000","last_peered":"2026-03-10T10:15:27.991209+0000","last_clean":"2026-03-10T10:15:27.991209+0000","last_became_active":"2026-03-10T10:14:16.964919+0000","last_became_peered":"2026-03-10T10:14:16.964919+0000","last_unstale":"2026-03-10T10:15:27.991209+0000","last_undegraded":"2026-03-10T10:15:27.991209+0000","last_fullsized":"2026-03-10T10:15:27.991209+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:49:19.188262+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992828+0000","last_change":"2026-03-10T10:14:18.967988+0000","last_active":"2026-03-10T10:14:42.992828+0000","last_peered":"2026-03-10T10:14:42.992828+0000","last_clean":"2026-03-10T10:14:42.992828+0000","last_became_active":"2026-03-10T10:14:18.967911+0000","last_became_peered":"2026-03-10T10:14:18.967911+0000","last_unstale":"2026-03-10T10:14:42.992828+0000","last_undegraded":"2026-03-10T10:14:42.992828+0000","last_fullsized":"2026-03-10T10:14:42.992828+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:12:07.776130+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"60'9","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993210+0000","last_change":"2026-03-10T10:14:14.953102+0000","last_active":"2026-03-10T10:14:42.993210+0000","last_peered":"2026-03-10T10:14:42.993210+0000","last_clean":"2026-03-10T10:14:42.993210+0000","last_became_active":"2026-03-10T10:14:14.952739+0000","last_became_peered":"2026-03-10T10:14:14.952739+0000","last_unstale":"2026-03-10T10:14:42.993210+0000","last_undegraded":"2026-03-10T10:14:42.993210+0000","last_fullsized":"2026-03-10T10:14:42.993210+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:01:05.398407+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.13","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993188+0000","last_change":"2026-03-10T10:14:12.949349+0000","last_active":"2026-03-10T10:14:42.993188+0000","last_peered":"2026-03-10T10:14:42.993188+0000","last_clean":"2026-03-10T10:14:42.993188+0000","last_became_active":"2026-03-10T10:14:12.948871+0000","last_became_peered":"2026-03-10T10:14:12.948871+0000","last_unstale":"2026-03-10T10:14:42.993188+0000","last_undegraded":"2026-03-10T10:14:42.993188+0000","last_fullsized":"2026-03-10T10:14:42.993188+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:10:13.751894+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"60'11","reported_seq":54,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.993151+0000","last_change":"2026-03-10T10:14:16.969268+0000","last_active":"2026-03-10T10:15:27.993151+0000","last_peered":"2026-03-10T10:15:27.993151+0000","last_clean":"2026-03-10T10:15:27.993151+0000","last_became_active":"2026-03-10T10:14:16.969157+0000","last_became_peered":"2026-03-10T10:14:16.969157+0000","last_unstale":"2026-03-10T10:15:27.993151+0000","last_undegraded":"2026-03-10T10:15:27.993151+0000","last_fullsized":"2026-03-10T10:15:27.993151+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:26:02.431594+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281808+0000","last_change":"2026-03-10T10:14:18.972554+0000","last_active":"2026-03-10T10:14:42.281808+0000","last_peered":"2026-03-10T10:14:42.281808+0000","last_clean":"2026-03-10T10:14:42.281808+0000","last_became_active":"2026-03-10T10:14:18.972078+0000","last_became_peered":"2026-03-10T10:14:18.972078+0000","last_unstale":"2026-03-10T10:14:42.281808+0000","last_undegraded":"2026-03-10T10:14:42.281808+0000","last_fullsized":"2026-03-10T10:14:42.281808+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:42:54.122210+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.15","version":"60'9","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986172+0000","last_change":"2026-03-10T10:14:14.951975+0000","last_active":"2026-03-10T10:14:42.986172+0000","last_peered":"2026-03-10T10:14:42.986172+0000","last_clean":"2026-03-10T10:14:42.986172+0000","last_became_active":"2026-03-10T10:14:14.951638+0000","last_became_peered":"2026-03-10T10:14:14.951638+0000","last_unstale":"2026-03-10T10:14:42.986172+0000","last_undegraded":"2026-03-10T10:14:42.986172+0000","last_fullsized":"2026-03-10T10:14:42.986172+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:18:03.689555+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992072+0000","last_change":"2026-03-10T10:14:12.944676+0000","last_active":"2026-03-10T10:14:42.992072+0000","last_peered":"2026-03-10T10:14:42.992072+0000","last_clean":"2026-03-10T10:14:42.992072+0000","last_became_active":"2026-03-10T10:14:12.944571+0000","last_became_peered":"2026-03-10T10:14:12.944571+0000","last_unstale":"2026-03-10T10:14:42.992072+0000","last_undegraded":"2026-03-10T10:14:42.992072+0000","last_fullsized":"2026-03-10T10:14:42.992072+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:49:51.094210+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385225+0000","last_change":"2026-03-10T10:14:16.953131+0000","last_active":"2026-03-10T10:14:42.385225+0000","last_peered":"2026-03-10T10:14:42.385225+0000","last_clean":"2026-03-10T10:14:42.385225+0000","last_became_active":"2026-03-10T10:14:16.953004+0000","last_became_peered":"2026-03-10T10:14:16.953004+0000","last_unstale":"2026-03-10T10:14:42.385225+0000","last_undegraded":"2026-03-10T10:14:42.385225+0000","last_fullsized":"2026-03-10T10:14:42.385225+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:23:31.712596+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992769+0000","last_change":"2026-03-10T10:14:18.973547+0000","last_active":"2026-03-10T10:14:42.992769+0000","last_peered":"2026-03-10T10:14:42.992769+0000","last_clean":"2026-03-10T10:14:42.992769+0000","last_became_active":"2026-03-10T10:14:18.972495+0000","last_became_peered":"2026-03-10T10:14:18.972495+0000","last_unstale":"2026-03-10T10:14:42.992769+0000","last_undegraded":"2026-03-10T10:14:42.992769+0000","last_fullsized":"2026-03-10T10:14:42.992769+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:41:42.468683+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"60'10","reported_seq":45,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.280750+0000","last_change":"2026-03-10T10:14:14.948291+0000","last_active":"2026-03-10T10:14:42.280750+0000","last_peered":"2026-03-10T10:14:42.280750+0000","last_clean":"2026-03-10T10:14:42.280750+0000","last_became_active":"2026-03-10T10:14:14.948202+0000","last_became_peered":"2026-03-10T10:14:14.948202+0000","last_unstale":"2026-03-10T10:14:42.280750+0000","last_undegraded":"2026-03-10T10:14:42.280750+0000","last_fullsized":"2026-03-10T10:14:42.280750+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:16:18.954084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281093+0000","last_change":"2026-03-10T10:14:12.940058+0000","last_active":"2026-03-10T10:14:42.281093+0000","last_peered":"2026-03-10T10:14:42.281093+0000","last_clean":"2026-03-10T10:14:42.281093+0000","last_became_active":"2026-03-10T10:14:12.939930+0000","last_became_peered":"2026-03-10T10:14:12.939930+0000","last_unstale":"2026-03-10T10:14:42.281093+0000","last_undegraded":"2026-03-10T10:14:42.281093+0000","last_fullsized":"2026-03-10T10:14:42.281093+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:22:47.235890+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.12","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281119+0000","last_change":"2026-03-10T10:14:16.980497+0000","last_active":"2026-03-10T10:14:42.281119+0000","last_peered":"2026-03-10T10:14:42.281119+0000","last_clean":"2026-03-10T10:14:42.281119+0000","last_became_active":"2026-03-10T10:14:16.980383+0000","last_became_peered":"2026-03-10T10:14:16.980383+0000","last_unstale":"2026-03-10T10:14:42.281119+0000","last_undegraded":"2026-03-10T10:14:42.281119+0000","last_fullsized":"2026-03-10T10:14:42.281119+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:51:20.407631+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384787+0000","last_change":"2026-03-10T10:14:18.977284+0000","last_active":"2026-03-10T10:14:42.384787+0000","last_peered":"2026-03-10T10:14:42.384787+0000","last_clean":"2026-03-10T10:14:42.384787+0000","last_became_active":"2026-03-10T10:14:18.976997+0000","last_became_peered":"2026-03-10T10:14:18.976997+0000","last_unstale":"2026-03-10T10:14:42.384787+0000","last_undegraded":"2026-03-10T10:14:42.384787+0000","last_fullsized":"2026-03-10T10:14:42.384787+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:42:40.284612+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"60'6","reported_seq":40,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993582+0000","last_change":"2026-03-10T10:14:14.950503+0000","last_active":"2026-03-10T10:14:42.993582+0000","last_peered":"2026-03-10T10:14:42.993582+0000","last_clean":"2026-03-10T10:14:42.993582+0000","last_became_active":"2026-03-10T10:14:14.949413+0000","last_became_peered":"2026-03-10T10:14:14.949413+0000","last_unstale":"2026-03-10T10:14:42.993582+0000","last_undegraded":"2026-03-10T10:14:42.993582+0000","last_fullsized":"2026-03-10T10:14:42.993582+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:34:11.235583+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992014+0000","last_change":"2026-03-10T10:14:12.939019+0000","last_active":"2026-03-10T10:14:42.992014+0000","last_peered":"2026-03-10T10:14:42.992014+0000","last_clean":"2026-03-10T10:14:42.992014+0000","last_became_active":"2026-03-10T10:14:12.938918+0000","last_became_peered":"2026-03-10T10:14:12.938918+0000","last_unstale":"2026-03-10T10:14:42.992014+0000","last_undegraded":"2026-03-10T10:14:42.992014+0000","last_fullsized":"2026-03-10T10:14:42.992014+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:41:41.170089+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991764+0000","last_change":"2026-03-10T10:14:16.953276+0000","last_active":"2026-03-10T10:14:42.991764+0000","last_peered":"2026-03-10T10:14:42.991764+0000","last_clean":"2026-03-10T10:14:42.991764+0000","last_became_active":"2026-03-10T10:14:16.953179+0000","last_became_peered":"2026-03-10T10:14:16.953179+0000","last_unstale":"2026-03-10T10:14:42.991764+0000","last_undegraded":"2026-03-10T10:14:42.991764+0000","last_fullsized":"2026-03-10T10:14:42.991764+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:54:47.477688+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986024+0000","last_change":"2026-03-10T10:14:18.964714+0000","last_active":"2026-03-10T10:14:42.986024+0000","last_peered":"2026-03-10T10:14:42.986024+0000","last_clean":"2026-03-10T10:14:42.986024+0000","last_became_active":"2026-03-10T10:14:18.964413+0000","last_became_peered":"2026-03-10T10:14:18.964413+0000","last_unstale":"2026-03-10T10:14:42.986024+0000","last_undegraded":"2026-03-10T10:14:42.986024+0000","last_fullsized":"2026-03-10T10:14:42.986024+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:46:26.335677+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","version":"60'9","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991800+0000","last_change":"2026-03-10T10:14:14.953172+0000","last_active":"2026-03-10T10:14:42.991800+0000","last_peered":"2026-03-10T10:14:42.991800+0000","last_clean":"2026-03-10T10:14:42.991800+0000","last_became_active":"2026-03-10T10:14:14.953016+0000","last_became_peered":"2026-03-10T10:14:14.953016+0000","last_unstale":"2026-03-10T10:14:42.991800+0000","last_undegraded":"2026-03-10T10:14:42.991800+0000","last_fullsized":"2026-03-10T10:14:42.991800+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:41:32.775380+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992003+0000","last_change":"2026-03-10T10:14:12.939818+0000","last_active":"2026-03-10T10:14:42.992003+0000","last_peered":"2026-03-10T10:14:42.992003+0000","last_clean":"2026-03-10T10:14:42.992003+0000","last_became_active":"2026-03-10T10:14:12.939654+0000","last_became_peered":"2026-03-10T10:14:12.939654+0000","last_unstale":"2026-03-10T10:14:42.992003+0000","last_undegraded":"2026-03-10T10:14:42.992003+0000","last_fullsized":"2026-03-10T10:14:42.992003+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:31:09.237927+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.987070+0000","last_change":"2026-03-10T10:14:16.958911+0000","last_active":"2026-03-10T10:14:42.987070+0000","last_peered":"2026-03-10T10:14:42.987070+0000","last_clean":"2026-03-10T10:14:42.987070+0000","last_became_active":"2026-03-10T10:14:16.958832+0000","last_became_peered":"2026-03-10T10:14:16.958832+0000","last_unstale":"2026-03-10T10:14:42.987070+0000","last_undegraded":"2026-03-10T10:14:42.987070+0000","last_fullsized":"2026-03-10T10:14:42.987070+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:53:01.679616+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.13","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.384881+0000","last_change":"2026-03-10T10:14:19.316329+0000","last_active":"2026-03-10T10:14:42.384881+0000","last_peered":"2026-03-10T10:14:42.384881+0000","last_clean":"2026-03-10T10:14:42.384881+0000","last_became_active":"2026-03-10T10:14:19.316018+0000","last_became_peered":"2026-03-10T10:14:19.316018+0000","last_unstale":"2026-03-10T10:14:42.384881+0000","last_undegraded":"2026-03-10T10:14:42.384881+0000","last_fullsized":"2026-03-10T10:14:42.384881+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:02:08.182524+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"60'1","reported_seq":25,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.986389+0000","last_change":"2026-03-10T10:14:18.971822+0000","last_active":"2026-03-10T10:14:42.986389+0000","last_peered":"2026-03-10T10:14:42.986389+0000","last_clean":"2026-03-10T10:14:42.986389+0000","last_became_active":"2026-03-10T10:14:18.971595+0000","last_became_peered":"2026-03-10T10:14:18.971595+0000","last_unstale":"2026-03-10T10:14:42.986389+0000","last_undegraded":"2026-03-10T10:14:42.986389+0000","last_fullsized":"2026-03-10T10:14:42.986389+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:07:27.535419+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"60'15","reported_seq":56,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281508+0000","last_change":"2026-03-10T10:14:14.949777+0000","last_active":"2026-03-10T10:14:42.281508+0000","last_peered":"2026-03-10T10:14:42.281508+0000","last_clean":"2026-03-10T10:14:42.281508+0000","last_became_active":"2026-03-10T10:14:14.949201+0000","last_became_peered":"2026-03-10T10:14:14.949201+0000","last_unstale":"2026-03-10T10:14:42.281508+0000","last_undegraded":"2026-03-10T10:14:42.281508+0000","last_fullsized":"2026-03-10T10:14:42.281508+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T15:19:35.165985+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.18","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.992178+0000","last_change":"2026-03-10T10:14:12.948563+0000","last_active":"2026-03-10T10:14:42.992178+0000","last_peered":"2026-03-10T10:14:42.992178+0000","last_clean":"2026-03-10T10:14:42.992178+0000","last_became_active":"2026-03-10T10:14:12.948398+0000","last_became_peered":"2026-03-10T10:14:12.948398+0000","last_unstale":"2026-03-10T10:14:42.992178+0000","last_undegraded":"2026-03-10T10:14:42.992178+0000","last_fullsized":"2026-03-10T10:14:42.992178+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:52:12.263521+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"60'11","reported_seq":57,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:15:27.991588+0000","last_change":"2026-03-10T10:14:16.959146+0000","last_active":"2026-03-10T10:15:27.991588+0000","last_peered":"2026-03-10T10:15:27.991588+0000","last_clean":"2026-03-10T10:15:27.991588+0000","last_became_active":"2026-03-10T10:14:16.958966+0000","last_became_peered":"2026-03-10T10:14:16.958966+0000","last_unstale":"2026-03-10T10:15:27.991588+0000","last_undegraded":"2026-03-10T10:15:27.991588+0000","last_fullsized":"2026-03-10T10:15:27.991588+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T13:53:10.085138+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282102+0000","last_change":"2026-03-10T10:14:18.974109+0000","last_active":"2026-03-10T10:14:42.282102+0000","last_peered":"2026-03-10T10:14:42.282102+0000","last_clean":"2026-03-10T10:14:42.282102+0000","last_became_active":"2026-03-10T10:14:18.974041+0000","last_became_peered":"2026-03-10T10:14:18.974041+0000","last_unstale":"2026-03-10T10:14:42.282102+0000","last_undegraded":"2026-03-10T10:14:42.282102+0000","last_fullsized":"2026-03-10T10:14:42.282102+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:51:01.760692+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","version":"60'9","reported_seq":47,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385763+0000","last_change":"2026-03-10T10:14:14.953631+0000","last_active":"2026-03-10T10:14:42.385763+0000","last_peered":"2026-03-10T10:14:42.385763+0000","last_clean":"2026-03-10T10:14:42.385763+0000","last_became_active":"2026-03-10T10:14:14.951122+0000","last_became_peered":"2026-03-10T10:14:14.951122+0000","last_unstale":"2026-03-10T10:14:42.385763+0000","last_undegraded":"2026-03-10T10:14:42.385763+0000","last_fullsized":"2026-03-10T10:14:42.385763+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:31:34.325912+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"53'1","reported_seq":36,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385744+0000","last_change":"2026-03-10T10:14:12.944848+0000","last_active":"2026-03-10T10:14:42.385744+0000","last_peered":"2026-03-10T10:14:42.385744+0000","last_clean":"2026-03-10T10:14:42.385744+0000","last_became_active":"2026-03-10T10:14:12.943948+0000","last_became_peered":"2026-03-10T10:14:12.943948+0000","last_unstale":"2026-03-10T10:14:42.385744+0000","last_undegraded":"2026-03-10T10:14:42.385744+0000","last_fullsized":"2026-03-10T10:14:42.385744+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T21:51:19.515363+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993486+0000","last_change":"2026-03-10T10:14:16.965759+0000","last_active":"2026-03-10T10:14:42.993486+0000","last_peered":"2026-03-10T10:14:42.993486+0000","last_clean":"2026-03-10T10:14:42.993486+0000","last_became_active":"2026-03-10T10:14:16.965615+0000","last_became_peered":"2026-03-10T10:14:16.965615+0000","last_unstale":"2026-03-10T10:14:42.993486+0000","last_undegraded":"2026-03-10T10:14:42.993486+0000","last_fullsized":"2026-03-10T10:14:42.993486+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:54:41.778025+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":22,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.281683+0000","last_change":"2026-03-10T10:14:19.315389+0000","last_active":"2026-03-10T10:14:42.281683+0000","last_peered":"2026-03-10T10:14:42.281683+0000","last_clean":"2026-03-10T10:14:42.281683+0000","last_became_active":"2026-03-10T10:14:19.315279+0000","last_became_peered":"2026-03-10T10:14:19.315279+0000","last_unstale":"2026-03-10T10:14:42.281683+0000","last_undegraded":"2026-03-10T10:14:42.281683+0000","last_fullsized":"2026-03-10T10:14:42.281683+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T20:09:35.750325+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.991945+0000","last_change":"2026-03-10T10:14:12.942253+0000","last_active":"2026-03-10T10:14:42.991945+0000","last_peered":"2026-03-10T10:14:42.991945+0000","last_clean":"2026-03-10T10:14:42.991945+0000","last_became_active":"2026-03-10T10:14:12.942074+0000","last_became_peered":"2026-03-10T10:14:12.942074+0000","last_unstale":"2026-03-10T10:14:42.991945+0000","last_undegraded":"2026-03-10T10:14:42.991945+0000","last_fullsized":"2026-03-10T10:14:42.991945+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T12:57:53.020154+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"60'5","reported_seq":41,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.993635+0000","last_change":"2026-03-10T10:14:14.960701+0000","last_active":"2026-03-10T10:14:42.993635+0000","last_peered":"2026-03-10T10:14:42.993635+0000","last_clean":"2026-03-10T10:14:42.993635+0000","last_became_active":"2026-03-10T10:14:14.960493+0000","last_became_peered":"2026-03-10T10:14:14.960493+0000","last_unstale":"2026-03-10T10:14:42.993635+0000","last_undegraded":"2026-03-10T10:14:42.993635+0000","last_fullsized":"2026-03-10T10:14:42.993635+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:12:56.098048+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":27,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282185+0000","last_change":"2026-03-10T10:14:16.955414+0000","last_active":"2026-03-10T10:14:42.282185+0000","last_peered":"2026-03-10T10:14:42.282185+0000","last_clean":"2026-03-10T10:14:42.282185+0000","last_became_active":"2026-03-10T10:14:16.954995+0000","last_became_peered":"2026-03-10T10:14:16.954995+0000","last_unstale":"2026-03-10T10:14:42.282185+0000","last_undegraded":"2026-03-10T10:14:42.282185+0000","last_fullsized":"2026-03-10T10:14:42.282185+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T18:38:22.812402+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":23,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385440+0000","last_change":"2026-03-10T10:14:19.316915+0000","last_active":"2026-03-10T10:14:42.385440+0000","last_peered":"2026-03-10T10:14:42.385440+0000","last_clean":"2026-03-10T10:14:42.385440+0000","last_became_active":"2026-03-10T10:14:19.316153+0000","last_became_peered":"2026-03-10T10:14:19.316153+0000","last_unstale":"2026-03-10T10:14:42.385440+0000","last_undegraded":"2026-03-10T10:14:42.385440+0000","last_fullsized":"2026-03-10T10:14:42.385440+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:17.935957+0000","last_clean_scrub_stamp":"2026-03-10T10:14:17.935957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:35:53.622709+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":35,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.385482+0000","last_change":"2026-03-10T10:14:12.950790+0000","last_active":"2026-03-10T10:14:42.385482+0000","last_peered":"2026-03-10T10:14:42.385482+0000","last_clean":"2026-03-10T10:14:42.385482+0000","last_became_active":"2026-03-10T10:14:12.950571+0000","last_became_peered":"2026-03-10T10:14:12.950571+0000","last_unstale":"2026-03-10T10:14:42.385482+0000","last_undegraded":"2026-03-10T10:14:42.385482+0000","last_fullsized":"2026-03-10T10:14:42.385482+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:11.911259+0000","last_clean_scrub_stamp":"2026-03-10T10:14:11.911259+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T17:38:24.753287+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","version":"60'9","reported_seq":46,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282081+0000","last_change":"2026-03-10T10:14:14.947375+0000","last_active":"2026-03-10T10:14:42.282081+0000","last_peered":"2026-03-10T10:14:42.282081+0000","last_clean":"2026-03-10T10:14:42.282081+0000","last_became_active":"2026-03-10T10:14:14.942646+0000","last_became_peered":"2026-03-10T10:14:14.942646+0000","last_unstale":"2026-03-10T10:14:42.282081+0000","last_undegraded":"2026-03-10T10:14:42.282081+0000","last_fullsized":"2026-03-10T10:14:42.282081+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:13.920894+0000","last_clean_scrub_stamp":"2026-03-10T10:14:13.920894+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T16:04:14.957909+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":26,"reported_epoch":65,"state":"active+clean","last_fresh":"2026-03-10T10:14:42.282092+0000","last_change":"2026-03-10T10:14:16.962239+0000","last_active":"2026-03-10T10:14:42.282092+0000","last_peered":"2026-03-10T10:14:42.282092+0000","last_clean":"2026-03-10T10:14:42.282092+0000","last_became_active":"2026-03-10T10:14:16.962160+0000","last_became_peered":"2026-03-10T10:14:16.962160+0000","last_unstale":"2026-03-10T10:14:42.282092+0000","last_undegraded":"2026-03-10T10:14:42.282092+0000","last_fullsized":"2026-03-10T10:14:42.282092+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T10:14:15.929979+0000","last_clean_scrub_stamp":"2026-03-10T10:14:15.929979+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T10:44:29.176451+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},
{"poolid":4,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":71,"num_read_kb":66,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"int
ernal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":49,"seq":210453397527,"num_pgs":59,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27956,"kb_used_data":1124,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939468,"statfs":{"total":21470642176,"available":21442015232,"internally_reserved":0,"allocated":1150976,"data_stored":713180,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":43,"seq":184683593757,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27916,"kb_used_data":1084,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939508,"statfs":{"total":21470642176,"available":21442056192,"internally_reserved":0,"allocated":1110016,"data_stored":710396,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":37,"seq":158913789988,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27484,"kb_used_data":640,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939940,"statfs":{"total":21470642176,"available":21442498560,"internally_reserved":0,"allocated":655360,"data_stored":251954,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up
_from":31,"seq":133143986218,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27516,"kb_used_data":676,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939908,"statfs":{"total":21470642176,"available":21442465792,"internally_reserved":0,"allocated":692224,"data_stored":253054,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149745,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27480,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939944,"statfs":{"total":21470642176,"available":21442502656,"internally_reserved":0,"allocated":651264,"data_stored":251516,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411384,"num_pgs":39,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27488,"kb_used_data":644,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939936,"statfs":{"total":21470642176,"available":21442494464,"internally_reserved":0,"allocated":659456,"data_stored":252058,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574910,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27492,"kb_used_data":648,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939932,"statfs":{"total":21470642176,"available":21442490368,"internally_reserved":0,"allocated":663552,"data_stored":251447,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738437,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27936,"kb_used_data":1104,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939488,"statfs":{"total":21470642176,"available":21442035712,"internally_reserved":0,"allocated":1130496,"data_stored":712044,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_s
tat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1131,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":436,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":1521,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"in
ternally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0
,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T10:15:34.949 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T10:15:34.949 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T10:15:34.949 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T10:15:34.950 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph health --format=json 2026-03-10T10:15:36.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:36 vm07 bash[23367]: cluster 2026-03-10T10:15:34.302799+0000 mgr.y (mgr.24422) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:36.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:36 vm07 bash[23367]: cluster 2026-03-10T10:15:34.302799+0000 mgr.y (mgr.24422) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:36.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:36 vm07 bash[23367]: audit 2026-03-10T10:15:34.881711+0000 mgr.y (mgr.24422) 67 : audit [DBG] from='client.14613 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:36.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:36 vm07 bash[23367]: audit 2026-03-10T10:15:34.881711+0000 mgr.y (mgr.24422) 67 : audit [DBG] from='client.14613 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:36 vm04 bash[28289]: cluster 2026-03-10T10:15:34.302799+0000 mgr.y (mgr.24422) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:36 vm04 bash[28289]: cluster 2026-03-10T10:15:34.302799+0000 mgr.y (mgr.24422) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:36 vm04 bash[28289]: audit 2026-03-10T10:15:34.881711+0000 mgr.y (mgr.24422) 67 : audit [DBG] from='client.14613 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:36 vm04 bash[28289]: audit 2026-03-10T10:15:34.881711+0000 mgr.y (mgr.24422) 67 : audit [DBG] from='client.14613 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:36 vm04 bash[20742]: cluster 2026-03-10T10:15:34.302799+0000 mgr.y (mgr.24422) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:36 vm04 bash[20742]: cluster 2026-03-10T10:15:34.302799+0000 mgr.y (mgr.24422) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:36 vm04 bash[20742]: audit 2026-03-10T10:15:34.881711+0000 mgr.y (mgr.24422) 67 : audit [DBG] from='client.14613 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 
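wait_until_healthy then polls the `ceph health --format=json` command issued above until the cluster reports HEALTH_OK. A rough sketch of that loop, assuming a run_ceph() helper that executes `ceph` inside the cephadm shell (the timeout and interval values are illustrative, not the teuthology defaults):

    import json
    import time

    def wait_until_healthy(run_ceph, timeout=300.0, interval=3.0):
        """run_ceph(args) -> bytes: runs `ceph <args>` via the cephadm shell."""
        deadline = time.monotonic() + timeout
        while True:
            health = json.loads(run_ceph(["health", "--format=json"]))
            if health["status"] == "HEALTH_OK":
                # Matches the {"status":"HEALTH_OK","checks":{},"mutes":[]} reply logged below.
                return
            if time.monotonic() > deadline:
                raise TimeoutError("cluster still unhealthy: " + health["status"])
            time.sleep(interval)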
2026-03-10T10:15:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:36 vm04 bash[20742]: audit 2026-03-10T10:15:34.881711+0000 mgr.y (mgr.24422) 67 : audit [DBG] from='client.14613 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T10:15:38.515 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:15:38 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:15:38.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:38 vm07 bash[23367]: cluster 2026-03-10T10:15:36.303097+0000 mgr.y (mgr.24422) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:38.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:38 vm07 bash[23367]: cluster 2026-03-10T10:15:36.303097+0000 mgr.y (mgr.24422) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:38.656 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:15:38.668 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:38 vm04 bash[28289]: cluster 2026-03-10T10:15:36.303097+0000 mgr.y (mgr.24422) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:38.668 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:38 vm04 bash[28289]: cluster 2026-03-10T10:15:36.303097+0000 mgr.y (mgr.24422) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:38.669 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:38 vm04 bash[20742]: cluster 2026-03-10T10:15:36.303097+0000 mgr.y (mgr.24422) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:38.669 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:38 vm04 bash[20742]: cluster 2026-03-10T10:15:36.303097+0000 mgr.y (mgr.24422) 68 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:38.802 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.797+0000 7f36037e9640 1 -- 192.168.123.104:0/1592782392 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f35fc108c50 msgr2=0x7f35fc109030 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:38.802 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.797+0000 7f36037e9640 1 --2- 192.168.123.104:0/1592782392 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f35fc108c50 0x7f35fc109030 secure :-1 s=READY pgs=167 cs=0 l=1 rev1=1 crypto rx=0x7f35f0009a30 tx=0x7f35f002f220 comp rx=0 tx=0).stop 2026-03-10T10:15:38.802 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.797+0000 7f36037e9640 1 -- 192.168.123.104:0/1592782392 shutdown_connections 2026-03-10T10:15:38.802 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.797+0000 7f36037e9640 1 --2- 192.168.123.104:0/1592782392 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f35fc1014a0 0x7f35fc10f930 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:38.802 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.797+0000 7f36037e9640 1 --2- 
192.168.123.104:0/1592782392 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f35fc100ae0 0x7f35fc100f60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:38.802 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.797+0000 7f36037e9640 1 --2- 192.168.123.104:0/1592782392 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f35fc108c50 0x7f35fc109030 unknown :-1 s=CLOSED pgs=167 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:38.802 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.797+0000 7f36037e9640 1 -- 192.168.123.104:0/1592782392 >> 192.168.123.104:0/1592782392 conn(0x7f35fc0fc910 msgr2=0x7f35fc0fed30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:38.802 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.797+0000 7f36037e9640 1 -- 192.168.123.104:0/1592782392 shutdown_connections 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 -- 192.168.123.104:0/1592782392 wait complete. 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 Processor -- start 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 -- start start 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f35fc100ae0 0x7f35fc19a720 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f35fc1014a0 0x7f35fc19ac60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f35fc108c50 0x7f35fc19eff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f35fc112300 con 0x7f35fc1014a0 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f35fc112180 con 0x7f35fc108c50 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f35fc112480 con 0x7f35fc100ae0 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f360155e640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f35fc100ae0 0x7f35fc19a720 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f360155e640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f35fc100ae0 0x7f35fc19a720 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:35138/0 (socket says 192.168.123.104:35138) 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f360155e640 1 -- 192.168.123.104:0/2013657821 learned_addr learned my addr 192.168.123.104:0/2013657821 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f3601d5f640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f35fc108c50 0x7f35fc19eff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f3600d5d640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f35fc1014a0 0x7f35fc19ac60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:38.803 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f360155e640 1 -- 192.168.123.104:0/2013657821 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f35fc108c50 msgr2=0x7f35fc19eff0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:38.804 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f360155e640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f35fc108c50 0x7f35fc19eff0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:38.804 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f360155e640 1 -- 192.168.123.104:0/2013657821 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f35fc1014a0 msgr2=0x7f35fc19ac60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:38.804 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f360155e640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f35fc1014a0 0x7f35fc19ac60 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:38.804 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f360155e640 1 -- 192.168.123.104:0/2013657821 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f35fc19f6d0 con 0x7f35fc100ae0 2026-03-10T10:15:38.804 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f3600d5d640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f35fc1014a0 0x7f35fc19ac60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 2026-03-10T10:15:38.804 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f3601d5f640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f35fc108c50 0x7f35fc19eff0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
2026-03-10T10:15:38.804 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f360155e640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f35fc100ae0 0x7f35fc19a720 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f35f002f730 tx=0x7f35f00385b0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:38.804 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f35ea7fc640 1 -- 192.168.123.104:0/2013657821 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f35f0004050 con 0x7f35fc100ae0 2026-03-10T10:15:38.804 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 -- 192.168.123.104:0/2013657821 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f35fc19f960 con 0x7f35fc100ae0 2026-03-10T10:15:38.804 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 -- 192.168.123.104:0/2013657821 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f35fc1024e0 con 0x7f35fc100ae0 2026-03-10T10:15:38.805 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.801+0000 7f36037e9640 1 -- 192.168.123.104:0/2013657821 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f35fc101f00 con 0x7f35fc100ae0 2026-03-10T10:15:38.807 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.805+0000 7f35ea7fc640 1 -- 192.168.123.104:0/2013657821 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f35f00047c0 con 0x7f35fc100ae0 2026-03-10T10:15:38.807 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.805+0000 7f35ea7fc640 1 -- 192.168.123.104:0/2013657821 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f35f0038ba0 con 0x7f35fc100ae0 2026-03-10T10:15:38.808 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.805+0000 7f35ea7fc640 1 -- 192.168.123.104:0/2013657821 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f35f00041f0 con 0x7f35fc100ae0 2026-03-10T10:15:38.808 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.805+0000 7f35ea7fc640 1 --2- 192.168.123.104:0/2013657821 >> v2:192.168.123.104:6800/3326026257 conn(0x7f35d0077680 0x7f35d0079b40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:15:38.808 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.805+0000 7f3600d5d640 1 --2- 192.168.123.104:0/2013657821 >> v2:192.168.123.104:6800/3326026257 conn(0x7f35d0077680 0x7f35d0079b40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:15:38.809 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.805+0000 7f3600d5d640 1 --2- 192.168.123.104:0/2013657821 >> v2:192.168.123.104:6800/3326026257 conn(0x7f35d0077680 0x7f35d0079b40 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f35fc108230 tx=0x7f35ec005e90 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:15:38.809 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.805+0000 7f35ea7fc640 1 -- 192.168.123.104:0/2013657821 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(65..65 src has 1..65) 
==== 6181+0+0 (secure 0 0 0) 0x7f35f00bf070 con 0x7f35fc100ae0 2026-03-10T10:15:38.809 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.805+0000 7f35ea7fc640 1 -- 192.168.123.104:0/2013657821 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f35f0048050 con 0x7f35fc100ae0 2026-03-10T10:15:38.914 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.909+0000 7f36037e9640 1 -- 192.168.123.104:0/2013657821 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "health", "format": "json"} v 0) -- 0x7f35fc19ba80 con 0x7f35fc100ae0 2026-03-10T10:15:38.914 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.909+0000 7f35ea7fc640 1 -- 192.168.123.104:0/2013657821 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "health", "format": "json"}]=0 v0) ==== 72+0+46 (secure 0 0 0) 0x7f35f0004c50 con 0x7f35fc100ae0 2026-03-10T10:15:38.914 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-10T10:15:38.915 INFO:teuthology.orchestra.run.vm04.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T10:15:38.916 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 -- 192.168.123.104:0/2013657821 >> v2:192.168.123.104:6800/3326026257 conn(0x7f35d0077680 msgr2=0x7f35d0079b40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:38.916 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 --2- 192.168.123.104:0/2013657821 >> v2:192.168.123.104:6800/3326026257 conn(0x7f35d0077680 0x7f35d0079b40 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f35fc108230 tx=0x7f35ec005e90 comp rx=0 tx=0).stop 2026-03-10T10:15:38.916 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 -- 192.168.123.104:0/2013657821 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f35fc100ae0 msgr2=0x7f35fc19a720 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:15:38.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f35fc100ae0 0x7f35fc19a720 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f35f002f730 tx=0x7f35f00385b0 comp rx=0 tx=0).stop 2026-03-10T10:15:38.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 -- 192.168.123.104:0/2013657821 shutdown_connections 2026-03-10T10:15:38.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 --2- 192.168.123.104:0/2013657821 >> v2:192.168.123.104:6800/3326026257 conn(0x7f35d0077680 0x7f35d0079b40 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:38.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f35fc108c50 0x7f35fc19eff0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:38.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f35fc1014a0 0x7f35fc19ac60 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:38.917 
INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 --2- 192.168.123.104:0/2013657821 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f35fc100ae0 0x7f35fc19a720 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:15:38.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 -- 192.168.123.104:0/2013657821 >> 192.168.123.104:0/2013657821 conn(0x7f35fc0fc910 msgr2=0x7f35fc0fe070 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:15:38.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 -- 192.168.123.104:0/2013657821 shutdown_connections 2026-03-10T10:15:38.917 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:15:38.913+0000 7f36037e9640 1 -- 192.168.123.104:0/2013657821 wait complete. 2026-03-10T10:15:38.962 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T10:15:38.962 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T10:15:38.962 INFO:teuthology.run_tasks:Running task workunit... 2026-03-10T10:15:38.966 INFO:tasks.workunit:Pulling workunits from ref 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T10:15:38.966 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-10T10:15:38.967 DEBUG:teuthology.orchestra.run.vm04:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-10T10:15:38.971 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T10:15:38.971 INFO:teuthology.orchestra.run.vm04.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-10T10:15:38.971 DEBUG:teuthology.orchestra.run.vm04:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-10T10:15:39.017 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-10T10:15:39.017 DEBUG:teuthology.orchestra.run.vm04:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-10T10:15:39.061 INFO:tasks.workunit:timeout=3h 2026-03-10T10:15:39.061 INFO:tasks.workunit:cleanup=True 2026-03-10T10:15:39.061 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b 2026-03-10T10:15:39.108 INFO:tasks.workunit.client.0.vm04.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-10T10:15:39.667 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:39 vm04 bash[28289]: audit 2026-03-10T10:15:38.143336+0000 mgr.y (mgr.24422) 69 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:39.667 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:39 vm04 bash[28289]: audit 2026-03-10T10:15:38.143336+0000 mgr.y (mgr.24422) 69 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:39.667 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:39 vm04 bash[28289]: audit 2026-03-10T10:15:38.915627+0000 mon.c (mon.2) 30 : audit [DBG] from='client.? 192.168.123.104:0/2013657821' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T10:15:39.667 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:39 vm04 bash[28289]: audit 2026-03-10T10:15:38.915627+0000 mon.c (mon.2) 30 : audit [DBG] from='client.? 
192.168.123.104:0/2013657821' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T10:15:39.667 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:39 vm04 bash[20742]: audit 2026-03-10T10:15:38.143336+0000 mgr.y (mgr.24422) 69 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:39.668 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:39 vm04 bash[20742]: audit 2026-03-10T10:15:38.143336+0000 mgr.y (mgr.24422) 69 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:39.668 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:39 vm04 bash[20742]: audit 2026-03-10T10:15:38.915627+0000 mon.c (mon.2) 30 : audit [DBG] from='client.? 192.168.123.104:0/2013657821' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T10:15:39.668 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:39 vm04 bash[20742]: audit 2026-03-10T10:15:38.915627+0000 mon.c (mon.2) 30 : audit [DBG] from='client.? 192.168.123.104:0/2013657821' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T10:15:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:39 vm07 bash[23367]: audit 2026-03-10T10:15:38.143336+0000 mgr.y (mgr.24422) 69 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:39 vm07 bash[23367]: audit 2026-03-10T10:15:38.143336+0000 mgr.y (mgr.24422) 69 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:39 vm07 bash[23367]: audit 2026-03-10T10:15:38.915627+0000 mon.c (mon.2) 30 : audit [DBG] from='client.? 192.168.123.104:0/2013657821' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T10:15:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:39 vm07 bash[23367]: audit 2026-03-10T10:15:38.915627+0000 mon.c (mon.2) 30 : audit [DBG] from='client.? 
192.168.123.104:0/2013657821' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T10:15:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:40 vm04 bash[28289]: cluster 2026-03-10T10:15:38.303368+0000 mgr.y (mgr.24422) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:40 vm04 bash[28289]: cluster 2026-03-10T10:15:38.303368+0000 mgr.y (mgr.24422) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:40 vm04 bash[20742]: cluster 2026-03-10T10:15:38.303368+0000 mgr.y (mgr.24422) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:40 vm04 bash[20742]: cluster 2026-03-10T10:15:38.303368+0000 mgr.y (mgr.24422) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:40.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:40 vm07 bash[23367]: cluster 2026-03-10T10:15:38.303368+0000 mgr.y (mgr.24422) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:40 vm07 bash[23367]: cluster 2026-03-10T10:15:38.303368+0000 mgr.y (mgr.24422) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:41.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:41 vm04 bash[28289]: cluster 2026-03-10T10:15:40.303902+0000 mgr.y (mgr.24422) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:41.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:41 vm04 bash[28289]: cluster 2026-03-10T10:15:40.303902+0000 mgr.y (mgr.24422) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:41 vm04 bash[20742]: cluster 2026-03-10T10:15:40.303902+0000 mgr.y (mgr.24422) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:41 vm04 bash[20742]: cluster 2026-03-10T10:15:40.303902+0000 mgr.y (mgr.24422) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:41.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:41 vm07 bash[23367]: cluster 2026-03-10T10:15:40.303902+0000 mgr.y (mgr.24422) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:41.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:41 vm07 bash[23367]: cluster 2026-03-10T10:15:40.303902+0000 mgr.y (mgr.24422) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:42.703 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:42 vm04 bash[20742]: audit 2026-03-10T10:15:42.380390+0000 mon.a (mon.0) 828 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:42 vm04 bash[20742]: audit 2026-03-10T10:15:42.380390+0000 mon.a (mon.0) 828 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:42 vm04 bash[28289]: audit 2026-03-10T10:15:42.380390+0000 mon.a (mon.0) 828 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:42 vm04 bash[28289]: audit 2026-03-10T10:15:42.380390+0000 mon.a (mon.0) 828 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:42.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:42 vm07 bash[23367]: audit 2026-03-10T10:15:42.380390+0000 mon.a (mon.0) 828 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:42.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:42 vm07 bash[23367]: audit 2026-03-10T10:15:42.380390+0000 mon.a (mon.0) 828 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:43.444 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:43 vm04 bash[20742]: cluster 2026-03-10T10:15:42.304138+0000 mgr.y (mgr.24422) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:43.444 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:43 vm04 bash[20742]: cluster 2026-03-10T10:15:42.304138+0000 mgr.y (mgr.24422) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:43.444 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:15:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:15:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:43 vm04 bash[28289]: cluster 2026-03-10T10:15:42.304138+0000 mgr.y (mgr.24422) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:43.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:43 vm04 bash[28289]: cluster 2026-03-10T10:15:42.304138+0000 mgr.y (mgr.24422) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:43.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:43 vm07 bash[23367]: cluster 2026-03-10T10:15:42.304138+0000 mgr.y (mgr.24422) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:43.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:43 vm07 bash[23367]: cluster 2026-03-10T10:15:42.304138+0000 mgr.y (mgr.24422) 72 : cluster [DBG] pgmap v33: 132 pgs: 132 
active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:45 vm07 bash[23367]: cluster 2026-03-10T10:15:44.304637+0000 mgr.y (mgr.24422) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:45 vm07 bash[23367]: cluster 2026-03-10T10:15:44.304637+0000 mgr.y (mgr.24422) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:45 vm04 bash[28289]: cluster 2026-03-10T10:15:44.304637+0000 mgr.y (mgr.24422) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:45 vm04 bash[28289]: cluster 2026-03-10T10:15:44.304637+0000 mgr.y (mgr.24422) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:45 vm04 bash[20742]: cluster 2026-03-10T10:15:44.304637+0000 mgr.y (mgr.24422) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:45 vm04 bash[20742]: cluster 2026-03-10T10:15:44.304637+0000 mgr.y (mgr.24422) 73 : cluster [DBG] pgmap v34: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:47.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:47 vm07 bash[23367]: cluster 2026-03-10T10:15:46.304967+0000 mgr.y (mgr.24422) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:47.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:47 vm07 bash[23367]: cluster 2026-03-10T10:15:46.304967+0000 mgr.y (mgr.24422) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:47 vm04 bash[20742]: cluster 2026-03-10T10:15:46.304967+0000 mgr.y (mgr.24422) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:47 vm04 bash[20742]: cluster 2026-03-10T10:15:46.304967+0000 mgr.y (mgr.24422) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:47 vm04 bash[28289]: cluster 2026-03-10T10:15:46.304967+0000 mgr.y (mgr.24422) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:47 vm04 bash[28289]: cluster 2026-03-10T10:15:46.304967+0000 mgr.y (mgr.24422) 74 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:48.506 
INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:15:48 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:15:48.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:48 vm07 bash[23367]: audit 2026-03-10T10:15:48.153822+0000 mgr.y (mgr.24422) 75 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:48.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:48 vm07 bash[23367]: audit 2026-03-10T10:15:48.153822+0000 mgr.y (mgr.24422) 75 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:48 vm04 bash[20742]: audit 2026-03-10T10:15:48.153822+0000 mgr.y (mgr.24422) 75 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:48 vm04 bash[20742]: audit 2026-03-10T10:15:48.153822+0000 mgr.y (mgr.24422) 75 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:48 vm04 bash[28289]: audit 2026-03-10T10:15:48.153822+0000 mgr.y (mgr.24422) 75 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:48 vm04 bash[28289]: audit 2026-03-10T10:15:48.153822+0000 mgr.y (mgr.24422) 75 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:49 vm07 bash[23367]: cluster 2026-03-10T10:15:48.305258+0000 mgr.y (mgr.24422) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:49 vm07 bash[23367]: cluster 2026-03-10T10:15:48.305258+0000 mgr.y (mgr.24422) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:49 vm04 bash[20742]: cluster 2026-03-10T10:15:48.305258+0000 mgr.y (mgr.24422) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:49 vm04 bash[20742]: cluster 2026-03-10T10:15:48.305258+0000 mgr.y (mgr.24422) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:49 vm04 bash[28289]: cluster 2026-03-10T10:15:48.305258+0000 mgr.y (mgr.24422) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:49 vm04 bash[28289]: cluster 2026-03-10T10:15:48.305258+0000 mgr.y (mgr.24422) 76 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
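
The repeating "cluster [DBG] pgmap vNN" entries above are the mgr's periodic PG digests relayed into each monitor's journal (each entry is captured twice per mon in this log); the same summary is available on demand from "ceph pg stat". The interleaved "GET /metrics HTTP/1.1" 503 responses show Prometheus on 192.168.123.107 polling mgr.y's exporter before it is serving samples. A quick way to check both by hand from a cluster node (a sketch, not part of the run; 9283 is the prometheus mgr module's default port, and vm04 is simply the host carrying mgr.y here):

    # PG summary, equivalent to the pgmap digest lines in the cluster log
    ceph pg stat
    # confirm the prometheus module is among the enabled mgr modules
    ceph mgr module ls
    # show the URLs the mgr advertises for its services (prometheus, dashboard, ...)
    ceph mgr services
    # expect "HTTP/1.1 200 OK" once the exporter is ready; 503 like above otherwise
    curl -si http://vm04:9283/metrics | head -n 1
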
2026-03-10T10:15:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:51 vm07 bash[23367]: cluster 2026-03-10T10:15:50.305696+0000 mgr.y (mgr.24422) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:51 vm07 bash[23367]: cluster 2026-03-10T10:15:50.305696+0000 mgr.y (mgr.24422) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:52.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:51 vm04 bash[28289]: cluster 2026-03-10T10:15:50.305696+0000 mgr.y (mgr.24422) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:52.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:51 vm04 bash[28289]: cluster 2026-03-10T10:15:50.305696+0000 mgr.y (mgr.24422) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:52.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:51 vm04 bash[20742]: cluster 2026-03-10T10:15:50.305696+0000 mgr.y (mgr.24422) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:52.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:51 vm04 bash[20742]: cluster 2026-03-10T10:15:50.305696+0000 mgr.y (mgr.24422) 77 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:15:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:15:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:15:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:53 vm07 bash[23367]: cluster 2026-03-10T10:15:52.305964+0000 mgr.y (mgr.24422) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:53 vm07 bash[23367]: cluster 2026-03-10T10:15:52.305964+0000 mgr.y (mgr.24422) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:53 vm04 bash[20742]: cluster 2026-03-10T10:15:52.305964+0000 mgr.y (mgr.24422) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:53 vm04 bash[20742]: cluster 2026-03-10T10:15:52.305964+0000 mgr.y (mgr.24422) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:53 vm04 bash[28289]: cluster 2026-03-10T10:15:52.305964+0000 mgr.y (mgr.24422) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:53 vm04 bash[28289]: cluster 2026-03-10T10:15:52.305964+0000 mgr.y (mgr.24422) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 
KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:55 vm04 bash[20742]: cluster 2026-03-10T10:15:54.306451+0000 mgr.y (mgr.24422) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:55 vm04 bash[20742]: cluster 2026-03-10T10:15:54.306451+0000 mgr.y (mgr.24422) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:55 vm04 bash[28289]: cluster 2026-03-10T10:15:54.306451+0000 mgr.y (mgr.24422) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:55 vm04 bash[28289]: cluster 2026-03-10T10:15:54.306451+0000 mgr.y (mgr.24422) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:55 vm07 bash[23367]: cluster 2026-03-10T10:15:54.306451+0000 mgr.y (mgr.24422) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:55 vm07 bash[23367]: cluster 2026-03-10T10:15:54.306451+0000 mgr.y (mgr.24422) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:15:57.015 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:15:56 vm07 bash[50688]: logger=infra.usagestats t=2026-03-10T10:15:56.604155564Z level=info msg="Usage stats are ready to report" 2026-03-10T10:15:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:57 vm04 bash[20742]: cluster 2026-03-10T10:15:56.306709+0000 mgr.y (mgr.24422) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:57 vm04 bash[20742]: cluster 2026-03-10T10:15:56.306709+0000 mgr.y (mgr.24422) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:57 vm04 bash[20742]: audit 2026-03-10T10:15:57.385949+0000 mon.a (mon.0) 829 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:57 vm04 bash[20742]: audit 2026-03-10T10:15:57.385949+0000 mon.a (mon.0) 829 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:57 vm04 bash[28289]: cluster 2026-03-10T10:15:56.306709+0000 mgr.y (mgr.24422) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:57 vm04 bash[28289]: cluster 
2026-03-10T10:15:56.306709+0000 mgr.y (mgr.24422) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:57 vm04 bash[28289]: audit 2026-03-10T10:15:57.385949+0000 mon.a (mon.0) 829 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:57 vm04 bash[28289]: audit 2026-03-10T10:15:57.385949+0000 mon.a (mon.0) 829 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:57 vm07 bash[23367]: cluster 2026-03-10T10:15:56.306709+0000 mgr.y (mgr.24422) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:57 vm07 bash[23367]: cluster 2026-03-10T10:15:56.306709+0000 mgr.y (mgr.24422) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:57 vm07 bash[23367]: audit 2026-03-10T10:15:57.385949+0000 mon.a (mon.0) 829 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:57 vm07 bash[23367]: audit 2026-03-10T10:15:57.385949+0000 mon.a (mon.0) 829 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:15:58.439 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:15:58 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:15:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:58 vm04 bash[20742]: audit 2026-03-10T10:15:58.161670+0000 mgr.y (mgr.24422) 81 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:58 vm04 bash[20742]: audit 2026-03-10T10:15:58.161670+0000 mgr.y (mgr.24422) 81 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:58 vm04 bash[28289]: audit 2026-03-10T10:15:58.161670+0000 mgr.y (mgr.24422) 81 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:58 vm04 bash[28289]: audit 2026-03-10T10:15:58.161670+0000 mgr.y (mgr.24422) 81 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:58 vm07 bash[23367]: audit 2026-03-10T10:15:58.161670+0000 mgr.y (mgr.24422) 81 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:58.766 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:58 vm07 bash[23367]: audit 2026-03-10T10:15:58.161670+0000 mgr.y (mgr.24422) 81 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:15:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:59 vm04 bash[20742]: cluster 2026-03-10T10:15:58.307017+0000 mgr.y (mgr.24422) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:15:59 vm04 bash[20742]: cluster 2026-03-10T10:15:58.307017+0000 mgr.y (mgr.24422) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:59 vm04 bash[28289]: cluster 2026-03-10T10:15:58.307017+0000 mgr.y (mgr.24422) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:15:59 vm04 bash[28289]: cluster 2026-03-10T10:15:58.307017+0000 mgr.y (mgr.24422) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:59 vm07 bash[23367]: cluster 2026-03-10T10:15:58.307017+0000 mgr.y (mgr.24422) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:15:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:15:59 vm07 bash[23367]: cluster 2026-03-10T10:15:58.307017+0000 mgr.y (mgr.24422) 82 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:01 vm04 bash[20742]: cluster 2026-03-10T10:16:00.307433+0000 mgr.y (mgr.24422) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:01 vm04 bash[20742]: cluster 2026-03-10T10:16:00.307433+0000 mgr.y (mgr.24422) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:01 vm04 bash[28289]: cluster 2026-03-10T10:16:00.307433+0000 mgr.y (mgr.24422) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:01 vm04 bash[28289]: cluster 2026-03-10T10:16:00.307433+0000 mgr.y (mgr.24422) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:01 vm07 bash[23367]: cluster 2026-03-10T10:16:00.307433+0000 mgr.y (mgr.24422) 83 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:01 vm07 bash[23367]: cluster 2026-03-10T10:16:00.307433+0000 mgr.y (mgr.24422) 83 : 
cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:16:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:16:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:16:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:03 vm07 bash[23367]: cluster 2026-03-10T10:16:02.307699+0000 mgr.y (mgr.24422) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:03 vm07 bash[23367]: cluster 2026-03-10T10:16:02.307699+0000 mgr.y (mgr.24422) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:03 vm04 bash[20742]: cluster 2026-03-10T10:16:02.307699+0000 mgr.y (mgr.24422) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:03 vm04 bash[20742]: cluster 2026-03-10T10:16:02.307699+0000 mgr.y (mgr.24422) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:03 vm04 bash[28289]: cluster 2026-03-10T10:16:02.307699+0000 mgr.y (mgr.24422) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:03 vm04 bash[28289]: cluster 2026-03-10T10:16:02.307699+0000 mgr.y (mgr.24422) 84 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:05.764 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:05 vm04 bash[20742]: cluster 2026-03-10T10:16:04.308121+0000 mgr.y (mgr.24422) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:05.764 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:05 vm04 bash[20742]: cluster 2026-03-10T10:16:04.308121+0000 mgr.y (mgr.24422) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:05.764 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:05 vm04 bash[28289]: cluster 2026-03-10T10:16:04.308121+0000 mgr.y (mgr.24422) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:05.764 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:05 vm04 bash[28289]: cluster 2026-03-10T10:16:04.308121+0000 mgr.y (mgr.24422) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:05.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:05 vm07 bash[23367]: cluster 2026-03-10T10:16:04.308121+0000 mgr.y (mgr.24422) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:05 vm07 
bash[23367]: cluster 2026-03-10T10:16:04.308121+0000 mgr.y (mgr.24422) 85 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:07 vm07 bash[23367]: cluster 2026-03-10T10:16:06.308340+0000 mgr.y (mgr.24422) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:07 vm07 bash[23367]: cluster 2026-03-10T10:16:06.308340+0000 mgr.y (mgr.24422) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:07 vm04 bash[20742]: cluster 2026-03-10T10:16:06.308340+0000 mgr.y (mgr.24422) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:07 vm04 bash[20742]: cluster 2026-03-10T10:16:06.308340+0000 mgr.y (mgr.24422) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:07 vm04 bash[28289]: cluster 2026-03-10T10:16:06.308340+0000 mgr.y (mgr.24422) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:07 vm04 bash[28289]: cluster 2026-03-10T10:16:06.308340+0000 mgr.y (mgr.24422) 86 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:08.477 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:16:08 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:16:08.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:08 vm07 bash[23367]: audit 2026-03-10T10:16:08.167635+0000 mgr.y (mgr.24422) 87 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:08.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:08 vm07 bash[23367]: audit 2026-03-10T10:16:08.167635+0000 mgr.y (mgr.24422) 87 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:08 vm04 bash[20742]: audit 2026-03-10T10:16:08.167635+0000 mgr.y (mgr.24422) 87 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:08 vm04 bash[20742]: audit 2026-03-10T10:16:08.167635+0000 mgr.y (mgr.24422) 87 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:08 vm04 bash[28289]: audit 2026-03-10T10:16:08.167635+0000 mgr.y (mgr.24422) 87 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 
10:16:08 vm04 bash[28289]: audit 2026-03-10T10:16:08.167635+0000 mgr.y (mgr.24422) 87 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:09 vm07 bash[23367]: cluster 2026-03-10T10:16:08.308936+0000 mgr.y (mgr.24422) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:09 vm07 bash[23367]: cluster 2026-03-10T10:16:08.308936+0000 mgr.y (mgr.24422) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:09.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:09 vm04 bash[20742]: cluster 2026-03-10T10:16:08.308936+0000 mgr.y (mgr.24422) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:09.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:09 vm04 bash[20742]: cluster 2026-03-10T10:16:08.308936+0000 mgr.y (mgr.24422) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:09.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:09 vm04 bash[28289]: cluster 2026-03-10T10:16:08.308936+0000 mgr.y (mgr.24422) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:09.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:09 vm04 bash[28289]: cluster 2026-03-10T10:16:08.308936+0000 mgr.y (mgr.24422) 88 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:11 vm04 bash[20742]: cluster 2026-03-10T10:16:10.309304+0000 mgr.y (mgr.24422) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:11 vm04 bash[20742]: cluster 2026-03-10T10:16:10.309304+0000 mgr.y (mgr.24422) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:11 vm04 bash[28289]: cluster 2026-03-10T10:16:10.309304+0000 mgr.y (mgr.24422) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:11 vm04 bash[28289]: cluster 2026-03-10T10:16:10.309304+0000 mgr.y (mgr.24422) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:11 vm07 bash[23367]: cluster 2026-03-10T10:16:10.309304+0000 mgr.y (mgr.24422) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:11 vm07 bash[23367]: cluster 2026-03-10T10:16:10.309304+0000 mgr.y (mgr.24422) 89 : cluster [DBG] pgmap v47: 132 pgs: 132 
active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:13.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:12 vm04 bash[20742]: audit 2026-03-10T10:16:12.397735+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:13.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:12 vm04 bash[20742]: audit 2026-03-10T10:16:12.397735+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:13.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:16:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:16:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:16:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:12 vm04 bash[28289]: audit 2026-03-10T10:16:12.397735+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:12 vm04 bash[28289]: audit 2026-03-10T10:16:12.397735+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:12 vm07 bash[23367]: audit 2026-03-10T10:16:12.397735+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:12 vm07 bash[23367]: audit 2026-03-10T10:16:12.397735+0000 mon.a (mon.0) 830 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:13 vm04 bash[20742]: cluster 2026-03-10T10:16:12.309553+0000 mgr.y (mgr.24422) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:13 vm04 bash[20742]: cluster 2026-03-10T10:16:12.309553+0000 mgr.y (mgr.24422) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:13 vm04 bash[28289]: cluster 2026-03-10T10:16:12.309553+0000 mgr.y (mgr.24422) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:13 vm04 bash[28289]: cluster 2026-03-10T10:16:12.309553+0000 mgr.y (mgr.24422) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:13 vm07 bash[23367]: cluster 2026-03-10T10:16:12.309553+0000 mgr.y (mgr.24422) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:13 vm07 
bash[23367]: cluster 2026-03-10T10:16:12.309553+0000 mgr.y (mgr.24422) 90 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:15 vm04 bash[20742]: cluster 2026-03-10T10:16:14.309937+0000 mgr.y (mgr.24422) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:15 vm04 bash[20742]: cluster 2026-03-10T10:16:14.309937+0000 mgr.y (mgr.24422) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:15 vm04 bash[28289]: cluster 2026-03-10T10:16:14.309937+0000 mgr.y (mgr.24422) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:15 vm04 bash[28289]: cluster 2026-03-10T10:16:14.309937+0000 mgr.y (mgr.24422) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:16.265 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:15 vm07 bash[23367]: cluster 2026-03-10T10:16:14.309937+0000 mgr.y (mgr.24422) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:15 vm07 bash[23367]: cluster 2026-03-10T10:16:14.309937+0000 mgr.y (mgr.24422) 91 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:16 vm04 bash[20742]: audit 2026-03-10T10:16:16.192499+0000 mon.a (mon.0) 831 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:16 vm04 bash[20742]: audit 2026-03-10T10:16:16.192499+0000 mon.a (mon.0) 831 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:16 vm04 bash[28289]: audit 2026-03-10T10:16:16.192499+0000 mon.a (mon.0) 831 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:16 vm04 bash[28289]: audit 2026-03-10T10:16:16.192499+0000 mon.a (mon.0) 831 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:16 vm07 bash[23367]: audit 2026-03-10T10:16:16.192499+0000 mon.a (mon.0) 831 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:16 vm07 bash[23367]: audit 2026-03-10T10:16:16.192499+0000 mon.a (mon.0) 831 : audit [DBG] from='mgr.24422 
192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:16:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:18 vm04 bash[20742]: cluster 2026-03-10T10:16:16.310188+0000 mgr.y (mgr.24422) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:18 vm04 bash[20742]: cluster 2026-03-10T10:16:16.310188+0000 mgr.y (mgr.24422) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:18 vm04 bash[28289]: cluster 2026-03-10T10:16:16.310188+0000 mgr.y (mgr.24422) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:18 vm04 bash[28289]: cluster 2026-03-10T10:16:16.310188+0000 mgr.y (mgr.24422) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:18.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:16:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:16:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:18 vm07 bash[23367]: cluster 2026-03-10T10:16:16.310188+0000 mgr.y (mgr.24422) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:18 vm07 bash[23367]: cluster 2026-03-10T10:16:16.310188+0000 mgr.y (mgr.24422) 92 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:16:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:19 vm04 bash[20742]: audit 2026-03-10T10:16:18.175683+0000 mgr.y (mgr.24422) 93 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:19 vm04 bash[20742]: audit 2026-03-10T10:16:18.175683+0000 mgr.y (mgr.24422) 93 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:19 vm04 bash[28289]: audit 2026-03-10T10:16:18.175683+0000 mgr.y (mgr.24422) 93 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:19 vm04 bash[28289]: audit 2026-03-10T10:16:18.175683+0000 mgr.y (mgr.24422) 93 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:19.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:19 vm07 bash[23367]: audit 2026-03-10T10:16:18.175683+0000 mgr.y (mgr.24422) 93 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:19.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:19 vm07 bash[23367]: audit 2026-03-10T10:16:18.175683+0000 mgr.y (mgr.24422) 93 : audit [DBG] 
from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr:Note: switching to '75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b'. 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr:state without impacting any branches by switching back to a branch. 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr:do so (now or later) by using -c with the switch command. Example: 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr: git switch -c <new-branch-name> 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr:Or undo this operation with: 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr: git switch - 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr: 2026-03-10T10:16:19.854 INFO:tasks.workunit.client.0.vm04.stderr:HEAD is now at 75a68fd8ca3 qa/suites/orch/cephadm/osds: drop nvme_loop task 2026-03-10T10:16:19.860 DEBUG:teuthology.orchestra.run.vm04:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-10T10:16:19.906 INFO:tasks.workunit.client.0.vm04.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-10T10:16:19.907 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T10:16:19.907 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-10T10:16:19.944 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-10T10:16:19.974 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-10T10:16:20.005 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-10T10:16:20.006 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T10:16:20.006 INFO:tasks.workunit.client.0.vm04.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-10T10:16:20.028 INFO:tasks.workunit.client.0.vm04.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-10T10:16:20.031 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-10T10:16:20.031 DEBUG:teuthology.orchestra.run.vm04:> dd
if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-10T10:16:20.078 INFO:tasks.workunit:Running workunits matching rados/test.sh on client.0... 2026-03-10T10:16:20.078 INFO:tasks.workunit:Running workunit rados/test.sh... 2026-03-10T10:16:20.078 DEBUG:teuthology.orchestra.run.vm04:workunit test rados/test.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ parallel=1 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' '' = --serial ']' 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ crimson=0 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' '' = --crimson ']' 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ color= 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -t 1 ']' 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ trap cleanup EXIT ERR HUP INT QUIT 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ GTEST_OUTPUT_DIR=/home/ubuntu/cephtest/archive/unit_test_xml_report 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ mkdir -p /home/ubuntu/cephtest/archive/unit_test_xml_report 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ declare -A pids 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T10:16:20.127 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_aio 2026-03-10T10:16:20.130 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_aio' 2026-03-10T10:16:20.131 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_aio 2026-03-10T10:16:20.131 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}' 2026-03-10T10:16:20.135 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_aio 2026-03-10T10:16:20.135 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59249 2026-03-10T10:16:20.135 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_aio on pid 59249' 2026-03-10T10:16:20.135 INFO:tasks.workunit.client.0.vm04.stdout:test api_aio on pid 59249 2026-03-10T10:16:20.135 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59249 2026-03-10T10:16:20.135 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T10:16:20.136 
INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T10:16:20.136 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2>&1 | tee ceph_test_rados_api_aio.log | sed "s/^/ api_aio: /"' 2026-03-10T10:16:20.136 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_aio_pp 2026-03-10T10:16:20.136 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_aio_pp' 2026-03-10T10:16:20.136 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']' 2026-03-10T10:16:20.136 INFO:tasks.workunit.client.0.vm04.stderr:+ return 2026-03-10T10:16:20.136 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in 2026-03-10T10:16:20.136 INFO:tasks.workunit.client.0.vm04.stderr:+ return 2026-03-10T10:16:20.136 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_aio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio.xml 2026-03-10T10:16:20.137 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_aio.log 2026-03-10T10:16:20.137 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_aio: /' 2026-03-10T10:16:20.137 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_aio_pp 2026-03-10T10:16:20.138 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}' 2026-03-10T10:16:20.144 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_aio_pp 2026-03-10T10:16:20.144 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59257 2026-03-10T10:16:20.144 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_aio_pp on pid 59257' 2026-03-10T10:16:20.144 INFO:tasks.workunit.client.0.vm04.stdout:test api_aio_pp on pid 59257 2026-03-10T10:16:20.145 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59257 2026-03-10T10:16:20.145 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T10:16:20.145 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T10:16:20.145 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2>&1 | tee ceph_test_rados_api_aio_pp.log | sed "s/^/ api_aio_pp: /"' 2026-03-10T10:16:20.145 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_io 2026-03-10T10:16:20.145 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']' 2026-03-10T10:16:20.145 INFO:tasks.workunit.client.0.vm04.stderr:+ return 2026-03-10T10:16:20.145 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in 2026-03-10T10:16:20.145 INFO:tasks.workunit.client.0.vm04.stderr:+ return 2026-03-10T10:16:20.146 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_io' 2026-03-10T10:16:20.146 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_aio_pp.log 2026-03-10T10:16:20.146 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_aio_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_aio_pp.xml 2026-03-10T10:16:20.146 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}' 2026-03-10T10:16:20.149 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_io 2026-03-10T10:16:20.149 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_io 
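
The "+" lines above are bash xtrace from qa/workunits/rados/test.sh, which the workunit task launched a few entries earlier under adjust-ulimits, ceph-coverage and a 3h timeout, with CEPH_REF/CEPH_ROOT pointing at the suite clone and CEPH_CLI_TEST_DUP_COMMAND=1 set (which exercises the ceph CLI's duplicate-command handling). The trace interleaves several children, but the launcher itself is one loop: for each test it forks a "bash -o pipefail -exc" pipeline that writes gtest XML into the archive, tees raw output to a per-test log, prefixes each line with the padded test name, and records the child PID. Reassembled from the trace, it is roughly the following (a reconstruction for readability, not the verbatim script):

    parallel=1
    GTEST_OUTPUT_DIR=/home/ubuntu/cephtest/archive/unit_test_xml_report
    mkdir -p "$GTEST_OUTPUT_DIR"
    declare -A pids
    for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock \
             api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots \
             api_snapshots_pp api_stat api_stat_pp api_watch_notify \
             api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp \
             api_c_write_operations api_c_read_operations list_parallel \
             open_pools_parallel delete_pools_parallel; do
        if [ $parallel -eq 1 ]; then
            r=$(printf %25s "$f")              # right-padded name for aligned output
            ff=$(echo $r | awk '{print $1}')   # trimmed name, used for file names
            # detach the test; the xml report lands in the archive, raw output is
            # teed to a per-test log, and every line gets the test-name prefix
            bash -o pipefail -exc "ceph_test_rados_$ff \
                --gtest_output=xml:$GTEST_OUTPUT_DIR/$ff.xml 2>&1 \
                | tee ceph_test_rados_$ff.log | sed \"s/^/$r: /\"" &
            pid=$!
            echo "test $f on pid $pid"
            pids[$f]=$pid
        fi
    done

The -o pipefail is what keeps a failing ceph_test_rados_* binary visible in the pipeline's exit status, which the trailing tee and sed stages would otherwise mask with their own exit 0.
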
2026-03-10T10:16:20.149 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59265 2026-03-10T10:16:20.149 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_io on pid 59265' 2026-03-10T10:16:20.149 INFO:tasks.workunit.client.0.vm04.stdout:test api_io on pid 59265 2026-03-10T10:16:20.149 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59265 2026-03-10T10:16:20.150 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T10:16:20.150 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T10:16:20.150 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_aio_pp: /' 2026-03-10T10:16:20.153 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_io_pp 2026-03-10T10:16:20.153 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_io_pp' 2026-03-10T10:16:20.154 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2>&1 | tee ceph_test_rados_api_io.log | sed "s/^/ api_io: /"' 2026-03-10T10:16:20.154 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']' 2026-03-10T10:16:20.154 INFO:tasks.workunit.client.0.vm04.stderr:+ return 2026-03-10T10:16:20.154 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in 2026-03-10T10:16:20.154 INFO:tasks.workunit.client.0.vm04.stderr:+ return 2026-03-10T10:16:20.156 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}' 2026-03-10T10:16:20.156 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_io_pp 2026-03-10T10:16:20.157 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_io --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io.xml 2026-03-10T10:16:20.159 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_io_pp 2026-03-10T10:16:20.160 INFO:tasks.workunit.client.0.vm04.stdout:test api_io_pp on pid 59277 2026-03-10T10:16:20.160 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59277 2026-03-10T10:16:20.160 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_io_pp on pid 59277' 2026-03-10T10:16:20.160 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59277 2026-03-10T10:16:20.160 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T10:16:20.160 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T10:16:20.161 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_io.log 2026-03-10T10:16:20.161 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_io: /' 2026-03-10T10:16:20.164 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_asio 2026-03-10T10:16:20.164 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_asio' 2026-03-10T10:16:20.164 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2>&1 | tee 
ceph_test_rados_api_io_pp.log | sed "s/^/ api_io_pp: /"' 2026-03-10T10:16:20.165 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']' 2026-03-10T10:16:20.165 INFO:tasks.workunit.client.0.vm04.stderr:+ return 2026-03-10T10:16:20.165 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in 2026-03-10T10:16:20.165 INFO:tasks.workunit.client.0.vm04.stderr:+ return 2026-03-10T10:16:20.166 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_io_pp: /' 2026-03-10T10:16:20.167 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_io_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_io_pp.xml 2026-03-10T10:16:20.170 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_io_pp.log 2026-03-10T10:16:20.177 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_asio 2026-03-10T10:16:20.177 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}' 2026-03-10T10:16:20.188 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_asio 2026-03-10T10:16:20.188 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59329 2026-03-10T10:16:20.188 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_asio on pid 59329' 2026-03-10T10:16:20.188 INFO:tasks.workunit.client.0.vm04.stdout:test api_asio on pid 59329 2026-03-10T10:16:20.188 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59329 2026-03-10T10:16:20.188 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel 2026-03-10T10:16:20.188 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']' 2026-03-10T10:16:20.189 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2>&1 | tee ceph_test_rados_api_asio.log | sed "s/^/ api_asio: /"' 2026-03-10T10:16:20.191 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']' 2026-03-10T10:16:20.191 INFO:tasks.workunit.client.0.vm04.stderr:+ return 2026-03-10T10:16:20.191 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in 2026-03-10T10:16:20.191 INFO:tasks.workunit.client.0.vm04.stderr:+ return 2026-03-10T10:16:20.192 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_list 2026-03-10T10:16:20.192 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_list' 2026-03-10T10:16:20.193 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_list 2026-03-10T10:16:20.193 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_asio --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_asio.xml 2026-03-10T10:16:20.195 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_asio: /' 2026-03-10T10:16:20.196 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_asio.log 2026-03-10T10:16:20.197 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}' 2026-03-10T10:16:20.201 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_list 2026-03-10T10:16:20.201 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59355 2026-03-10T10:16:20.201 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_list on pid 59355' 2026-03-10T10:16:20.201 INFO:tasks.workunit.client.0.vm04.stdout:test api_list on pid 59355 2026-03-10T10:16:20.201 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59355 
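
Each iteration above only records the child's PID (pids[$f]=...); nothing is judged pass or fail at launch time. This excerpt ends before the collection phase, but given the "declare -A pids" and the "trap cleanup EXIT ERR HUP INT QUIT" at the top of the trace, the reaping side amounts to waiting on each recorded PID and folding the exit statuses into one result, along the lines of (again a sketch, not the verbatim script):

    ret=0
    for t in "${!pids[@]}"; do
        if ! wait "${pids[$t]}"; then
            # details live in ceph_test_rados_$t.log and the per-test XML report
            echo "$t failed"
            ret=1
        fi
    done
    exit $ret
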
2026-03-10T10:16:20.201 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.201 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.204 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_lock
2026-03-10T10:16:20.204 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_lock'
2026-03-10T10:16:20.204 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.205 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml 2>&1 | tee ceph_test_rados_api_list.log | sed "s/^/ api_list: /"'
2026-03-10T10:16:20.205 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.205 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_lock
2026-03-10T10:16:20.206 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_lock
2026-03-10T10:16:20.206 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59362
2026-03-10T10:16:20.206 INFO:tasks.workunit.client.0.vm04.stdout:test api_lock on pid 59362
2026-03-10T10:16:20.206 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_lock on pid 59362'
2026-03-10T10:16:20.206 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59362
2026-03-10T10:16:20.206 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.206 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.206 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml 2>&1 | tee ceph_test_rados_api_lock.log | sed "s/^/ api_lock: /"'
2026-03-10T10:16:20.207 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.207 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.207 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.207 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.207 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.207 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.207 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.207 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_lock: /'
2026-03-10T10:16:20.208 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_lock --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock.xml
2026-03-10T10:16:20.209 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_list: /'
2026-03-10T10:16:20.209 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_list.log
2026-03-10T10:16:20.210 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_list --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_list.xml
2026-03-10T10:16:20.213 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_lock_pp
2026-03-10T10:16:20.213 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_lock_pp'
2026-03-10T10:16:20.213 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_lock.log
2026-03-10T10:16:20.221 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.222 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_lock_pp
2026-03-10T10:16:20.223 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_lock_pp
2026-03-10T10:16:20.223 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59389
2026-03-10T10:16:20.223 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_lock_pp on pid 59389'
2026-03-10T10:16:20.223 INFO:tasks.workunit.client.0.vm04.stdout:test api_lock_pp on pid 59389
2026-03-10T10:16:20.223 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59389
2026-03-10T10:16:20.223 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.223 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.226 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_misc
2026-03-10T10:16:20.226 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_misc'
2026-03-10T10:16:20.226 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml 2>&1 | tee ceph_test_rados_api_lock_pp.log | sed "s/^/ api_lock_pp: /"'
2026-03-10T10:16:20.229 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.229 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.229 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.230 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.236 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_lock_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_lock_pp.xml
2026-03-10T10:16:20.237 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_lock_pp.log
2026-03-10T10:16:20.237 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_lock_pp: /'
2026-03-10T10:16:20.237 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.238 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_misc
2026-03-10T10:16:20.238 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_misc
2026-03-10T10:16:20.238 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59421
2026-03-10T10:16:20.238 INFO:tasks.workunit.client.0.vm04.stdout:test api_misc on pid 59421
2026-03-10T10:16:20.238 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_misc on pid 59421'
2026-03-10T10:16:20.239 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59421
2026-03-10T10:16:20.239 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.239 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.239 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml 2>&1 | tee ceph_test_rados_api_misc.log | sed "s/^/ api_misc: /"'
2026-03-10T10:16:20.239 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.239 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.239 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.239 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.239 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_misc: /'
2026-03-10T10:16:20.240 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_misc --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc.xml
2026-03-10T10:16:20.249 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_misc_pp
2026-03-10T10:16:20.250 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_misc_pp'
2026-03-10T10:16:20.250 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_misc.log
2026-03-10T10:16:20.260 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.261 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_misc_pp
2026-03-10T10:16:20.262 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_misc_pp
2026-03-10T10:16:20.262 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59459
2026-03-10T10:16:20.262 INFO:tasks.workunit.client.0.vm04.stdout:test api_misc_pp on pid 59459
2026-03-10T10:16:20.262 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_misc_pp on pid 59459'
2026-03-10T10:16:20.262 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59459
2026-03-10T10:16:20.262 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.262 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.263 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml 2>&1 | tee ceph_test_rados_api_misc_pp.log | sed "s/^/ api_misc_pp: /"'
2026-03-10T10:16:20.264 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_tier_pp
2026-03-10T10:16:20.265 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_tier_pp'
2026-03-10T10:16:20.266 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.268 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_tier_pp
2026-03-10T10:16:20.270 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.270 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.270 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.270 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.270 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_misc_pp: /'
2026-03-10T10:16:20.271 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_tier_pp
2026-03-10T10:16:20.271 INFO:tasks.workunit.client.0.vm04.stdout:test api_tier_pp on pid 59487
2026-03-10T10:16:20.271 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59487
2026-03-10T10:16:20.271 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_tier_pp on pid 59487'
2026-03-10T10:16:20.271 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59487
2026-03-10T10:16:20.271 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.271 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.271 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_misc_pp.log
2026-03-10T10:16:20.272 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_misc_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_misc_pp.xml
2026-03-10T10:16:20.273 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml 2>&1 | tee ceph_test_rados_api_tier_pp.log | sed "s/^/ api_tier_pp: /"'
2026-03-10T10:16:20.273 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_pool
2026-03-10T10:16:20.274 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_pool'
2026-03-10T10:16:20.275 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.275 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.275 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.275 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.278 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_pool
2026-03-10T10:16:20.278 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.279 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_tier_pp: /'
2026-03-10T10:16:20.282 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_tier_pp.log
2026-03-10T10:16:20.282 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_pool
2026-03-10T10:16:20.282 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_tier_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_tier_pp.xml
2026-03-10T10:16:20.282 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59501
2026-03-10T10:16:20.282 INFO:tasks.workunit.client.0.vm04.stdout:test api_pool on pid 59501
2026-03-10T10:16:20.283 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_pool on pid 59501'
2026-03-10T10:16:20.283 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59501
2026-03-10T10:16:20.283 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.283 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.283 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_snapshots
2026-03-10T10:16:20.283 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_snapshots'
2026-03-10T10:16:20.283 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml 2>&1 | tee ceph_test_rados_api_pool.log | sed "s/^/ api_pool: /"'
2026-03-10T10:16:20.284 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.284 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.284 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.284 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.284 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_pool: /'
2026-03-10T10:16:20.285 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_pool.log
2026-03-10T10:16:20.285 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_pool --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_pool.xml
2026-03-10T10:16:20.289 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.290 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_snapshots
2026-03-10T10:16:20.293 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_snapshots
2026-03-10T10:16:20.293 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59514
2026-03-10T10:16:20.293 INFO:tasks.workunit.client.0.vm04.stdout:test api_snapshots on pid 59514
2026-03-10T10:16:20.293 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_snapshots on pid 59514'
2026-03-10T10:16:20.293 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59514
2026-03-10T10:16:20.293 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.293 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.296 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_snapshots_pp
2026-03-10T10:16:20.297 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_snapshots_pp'
2026-03-10T10:16:20.301 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.301 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml 2>&1 | tee ceph_test_rados_api_snapshots.log | sed "s/^/ api_snapshots: /"'
2026-03-10T10:16:20.302 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.302 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.302 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.302 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.304 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_snapshots_pp
2026-03-10T10:16:20.305 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_snapshots_pp
2026-03-10T10:16:20.305 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59538
2026-03-10T10:16:20.305 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_snapshots_pp on pid 59538'
2026-03-10T10:16:20.305 INFO:tasks.workunit.client.0.vm04.stdout:test api_snapshots_pp on pid 59538
2026-03-10T10:16:20.305 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59538
2026-03-10T10:16:20.305 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.305 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.305 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_stat
2026-03-10T10:16:20.305 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_stat'
2026-03-10T10:16:20.305 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml 2>&1 | tee ceph_test_rados_api_snapshots_pp.log | sed "s/^/ api_snapshots_pp: /"'
2026-03-10T10:16:20.306 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.306 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.306 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.306 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.307 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_snapshots_pp: /'
2026-03-10T10:16:20.307 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_snapshots_pp.log
2026-03-10T10:16:20.308 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_snapshots_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots_pp.xml
2026-03-10T10:16:20.309 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_snapshots.log
2026-03-10T10:16:20.310 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_snapshots: /'
2026-03-10T10:16:20.310 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_snapshots --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_snapshots.xml
2026-03-10T10:16:20.310 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.312 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_stat
2026-03-10T10:16:20.314 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_stat
2026-03-10T10:16:20.314 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59546
2026-03-10T10:16:20.314 INFO:tasks.workunit.client.0.vm04.stdout:test api_stat on pid 59546
2026-03-10T10:16:20.314 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_stat on pid 59546'
2026-03-10T10:16:20.314 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59546
2026-03-10T10:16:20.314 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.314 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.326 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_stat_pp
2026-03-10T10:16:20.326 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_stat_pp'
2026-03-10T10:16:20.329 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml 2>&1 | tee ceph_test_rados_api_stat.log | sed "s/^/ api_stat: /"'
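
The interleaved trace above is the rados/test.sh workunit (here from the tt-squid suite branch) launching every ceph_test_rados_* suite in the background at once. `bash -o pipefail -exc '...'` runs each one-liner with errexit, xtrace, and pipefail, so the `tee | sed` pipeline cannot mask a failing test binary's exit status; `tee` keeps a raw per-suite log, and `sed` prefixes every output line with the suite name so the concurrent streams stay attributable. A minimal sketch of the loop the trace implies, with names taken from the trace itself; the `&` backgrounding and `$!` pid capture are inferred, since `set -x` does not print them:

    parallel=1                          # the trace's "'[' 1 -eq 1 ']'" test
    declare -A pids
    for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list \
             api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool \
             api_snapshots api_snapshots_pp api_stat api_stat_pp \
             api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp \
             api_service api_service_pp api_c_write_operations \
             api_c_read_operations list_parallel open_pools_parallel \
             delete_pools_parallel
    do
      if [ $parallel -eq 1 ]; then
        r=$(printf %25s $f)             # right-aligned label for the sed prefix
        ff=$(echo $f | awk '{print $1}')
        bash -o pipefail -exc "ceph_test_rados_$f \
            --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/$f.xml \
            2>&1 | tee ceph_test_rados_$f.log | sed \"s/^/$r: /\"" &
        pid=$!                          # inferred: xtrace does not show $!
        echo "test $f on pid $pid"
        pids[$f]=$pid                   # kept so each suite can be waited on later
      fi
    done

Note that `printf %25s` right-aligns the label in a 25-column field; the flattening of this log has collapsed that padding to a single space in lines like `r=' api_list'`.
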
2026-03-10T10:16:20.331 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.331 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.331 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.331 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.333 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_stat --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat.xml
2026-03-10T10:16:20.334 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_stat.log
2026-03-10T10:16:20.335 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.335 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_stat: /'
2026-03-10T10:16:20.336 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_stat_pp
2026-03-10T10:16:20.337 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_stat_pp
2026-03-10T10:16:20.337 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59596
2026-03-10T10:16:20.337 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_stat_pp on pid 59596'
2026-03-10T10:16:20.337 INFO:tasks.workunit.client.0.vm04.stdout:test api_stat_pp on pid 59596
2026-03-10T10:16:20.337 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59596
2026-03-10T10:16:20.337 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.337 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.337 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_watch_notify
2026-03-10T10:16:20.337 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_watch_notify'
2026-03-10T10:16:20.337 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml 2>&1 | tee ceph_test_rados_api_stat_pp.log | sed "s/^/ api_stat_pp: /"'
2026-03-10T10:16:20.338 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.338 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.338 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.338 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.338 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_stat_pp: /'
2026-03-10T10:16:20.339 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_stat_pp.log
2026-03-10T10:16:20.339 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.344 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_stat_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_stat_pp.xml
2026-03-10T10:16:20.344 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_watch_notify
2026-03-10T10:16:20.347 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_watch_notify
2026-03-10T10:16:20.347 INFO:tasks.workunit.client.0.vm04.stdout:test api_watch_notify on pid 59613
2026-03-10T10:16:20.347 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59613
2026-03-10T10:16:20.347 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_watch_notify on pid 59613'
2026-03-10T10:16:20.347 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59613
2026-03-10T10:16:20.347 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.347 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.350 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_watch_notify_pp
2026-03-10T10:16:20.351 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_watch_notify_pp'
2026-03-10T10:16:20.351 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml 2>&1 | tee ceph_test_rados_api_watch_notify.log | sed "s/^/ api_watch_notify: /"'
2026-03-10T10:16:20.352 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.352 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.352 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.352 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.364 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_watch_notify: /'
2026-03-10T10:16:20.364 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_watch_notify.log
2026-03-10T10:16:20.365 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_watch_notify --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify.xml
2026-03-10T10:16:20.366 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.368 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_watch_notify_pp
2026-03-10T10:16:20.372 INFO:tasks.workunit.client.0.vm04.stdout:test api_watch_notify_pp on pid 59658
2026-03-10T10:16:20.372 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_watch_notify_pp
2026-03-10T10:16:20.372 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59658
2026-03-10T10:16:20.372 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_watch_notify_pp on pid 59658'
2026-03-10T10:16:20.372 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59658
2026-03-10T10:16:20.372 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.372 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.380 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_cmd
2026-03-10T10:16:20.380 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_cmd'
2026-03-10T10:16:20.380 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml 2>&1 | tee ceph_test_rados_api_watch_notify_pp.log | sed "s/^/ api_watch_notify_pp: /"'
2026-03-10T10:16:20.381 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.381 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.381 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.381 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.381 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_watch_notify_pp.log
2026-03-10T10:16:20.382 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_watch_notify_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_watch_notify_pp.xml
2026-03-10T10:16:20.384 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_cmd
2026-03-10T10:16:20.386 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.392 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_watch_notify_pp: /'
2026-03-10T10:16:20.393 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_cmd
2026-03-10T10:16:20.393 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59683
2026-03-10T10:16:20.393 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_cmd on pid 59683'
2026-03-10T10:16:20.393 INFO:tasks.workunit.client.0.vm04.stdout:test api_cmd on pid 59683
2026-03-10T10:16:20.393 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59683
2026-03-10T10:16:20.393 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.393 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.395 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_cmd_pp
2026-03-10T10:16:20.395 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_cmd_pp'
2026-03-10T10:16:20.398 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml 2>&1 | tee ceph_test_rados_api_cmd.log | sed "s/^/ api_cmd: /"'
2026-03-10T10:16:20.398 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.398 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.399 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.399 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.399 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.399 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_cmd_pp
2026-03-10T10:16:20.399 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_cmd: /'
2026-03-10T10:16:20.402 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_cmd.log
2026-03-10T10:16:20.403 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_cmd_pp
2026-03-10T10:16:20.403 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59712
2026-03-10T10:16:20.403 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_cmd_pp on pid 59712'
2026-03-10T10:16:20.403 INFO:tasks.workunit.client.0.vm04.stdout:test api_cmd_pp on pid 59712
2026-03-10T10:16:20.403 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59712
2026-03-10T10:16:20.403 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.403 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.403 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_cmd --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd.xml
2026-03-10T10:16:20.407 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml 2>&1 | tee ceph_test_rados_api_cmd_pp.log | sed "s/^/ api_cmd_pp: /"'
2026-03-10T10:16:20.408 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_service
2026-03-10T10:16:20.409 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.409 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.409 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.409 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.409 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_service'
2026-03-10T10:16:20.409 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.410 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_service
2026-03-10T10:16:20.410 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_service
2026-03-10T10:16:20.410 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59727
2026-03-10T10:16:20.411 INFO:tasks.workunit.client.0.vm04.stdout:test api_service on pid 59727
2026-03-10T10:16:20.411 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_service on pid 59727'
2026-03-10T10:16:20.411 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59727
2026-03-10T10:16:20.411 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.411 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.411 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_service_pp
2026-03-10T10:16:20.411 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_service_pp'
2026-03-10T10:16:20.411 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml 2>&1 | tee ceph_test_rados_api_service.log | sed "s/^/ api_service: /"'
2026-03-10T10:16:20.411 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.411 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.412 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.412 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.412 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_service: /'
2026-03-10T10:16:20.412 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_service.log
2026-03-10T10:16:20.413 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.413 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_service_pp
2026-03-10T10:16:20.413 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_service --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service.xml
2026-03-10T10:16:20.415 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_service_pp
2026-03-10T10:16:20.415 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59739
2026-03-10T10:16:20.415 INFO:tasks.workunit.client.0.vm04.stdout:test api_service_pp on pid 59739
2026-03-10T10:16:20.415 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_service_pp on pid 59739'
2026-03-10T10:16:20.415 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59739
2026-03-10T10:16:20.415 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.415 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.418 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_cmd_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_cmd_pp.xml
2026-03-10T10:16:20.419 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_cmd_pp.log
2026-03-10T10:16:20.420 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_cmd_pp: /'
2026-03-10T10:16:20.426 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_c_write_operations
2026-03-10T10:16:20.427 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml 2>&1 | tee ceph_test_rados_api_service_pp.log | sed "s/^/ api_service_pp: /"'
2026-03-10T10:16:20.427 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_c_write_operations'
2026-03-10T10:16:20.427 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.428 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.428 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.429 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.434 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_service_pp: /'
2026-03-10T10:16:20.435 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_c_write_operations
2026-03-10T10:16:20.435 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_service_pp --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_service_pp.xml
2026-03-10T10:16:20.435 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_service_pp.log
2026-03-10T10:16:20.439 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.441 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_c_write_operations
2026-03-10T10:16:20.441 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59761
2026-03-10T10:16:20.441 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_c_write_operations on pid 59761'
2026-03-10T10:16:20.441 INFO:tasks.workunit.client.0.vm04.stdout:test api_c_write_operations on pid 59761
2026-03-10T10:16:20.441 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59761
2026-03-10T10:16:20.441 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.441 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.443 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml 2>&1 | tee ceph_test_rados_api_c_write_operations.log | sed "s/^/ api_c_write_operations: /"'
2026-03-10T10:16:20.444 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.444 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.444 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.444 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.444 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s api_c_read_operations
2026-03-10T10:16:20.445 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' api_c_read_operations'
2026-03-10T10:16:20.449 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_c_write_operations: /'
2026-03-10T10:16:20.449 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_c_write_operations.log
2026-03-10T10:16:20.449 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_c_write_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_write_operations.xml
2026-03-10T10:16:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:20 vm04 bash[20742]: cluster 2026-03-10T10:16:18.310500+0000 mgr.y (mgr.24422) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:16:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:20 vm04 bash[20742]: cluster 2026-03-10T10:16:18.310500+0000 mgr.y (mgr.24422) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:16:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:20 vm04 bash[28289]: cluster 2026-03-10T10:16:18.310500+0000 mgr.y (mgr.24422) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:16:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:20 vm04 bash[28289]: cluster 2026-03-10T10:16:18.310500+0000 mgr.y (mgr.24422) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:16:20.467 INFO:tasks.workunit.client.0.vm04.stderr:++ echo api_c_read_operations
2026-03-10T10:16:20.467 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.480 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=api_c_read_operations
2026-03-10T10:16:20.480 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59812
2026-03-10T10:16:20.480 INFO:tasks.workunit.client.0.vm04.stdout:test api_c_read_operations on pid 59812
2026-03-10T10:16:20.480 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test api_c_read_operations on pid 59812'
2026-03-10T10:16:20.481 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59812
2026-03-10T10:16:20.481 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.481 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.486 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s list_parallel
2026-03-10T10:16:20.486 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' list_parallel'
2026-03-10T10:16:20.488 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml 2>&1 | tee ceph_test_rados_api_c_read_operations.log | sed "s/^/ api_c_read_operations: /"'
2026-03-10T10:16:20.489 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.489 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.489 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.489 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.496 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_api_c_read_operations --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/api_c_read_operations.xml
2026-03-10T10:16:20.496 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_api_c_read_operations.log
2026-03-10T10:16:20.496 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ api_c_read_operations: /'
2026-03-10T10:16:20.514 INFO:tasks.workunit.client.0.vm04.stderr:++ echo list_parallel
2026-03-10T10:16:20.514 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:20 vm07 bash[23367]: cluster 2026-03-10T10:16:18.310500+0000 mgr.y (mgr.24422) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:16:20.515 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:20 vm07 bash[23367]: cluster 2026-03-10T10:16:18.310500+0000 mgr.y (mgr.24422) 94 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:16:20.516 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=list_parallel
2026-03-10T10:16:20.516 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59903
2026-03-10T10:16:20.516 INFO:tasks.workunit.client.0.vm04.stdout:test list_parallel on pid 59903
2026-03-10T10:16:20.516 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test list_parallel on pid 59903'
2026-03-10T10:16:20.516 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59903
2026-03-10T10:16:20.516 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.516 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.516 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s open_pools_parallel
2026-03-10T10:16:20.516 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' open_pools_parallel'
2026-03-10T10:16:20.522 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml 2>&1 | tee ceph_test_rados_list_parallel.log | sed "s/^/ list_parallel: /"'
2026-03-10T10:16:20.523 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.523 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.523 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.523 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.523 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_list_parallel.log
2026-03-10T10:16:20.524 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ list_parallel: /'
2026-03-10T10:16:20.524 INFO:tasks.workunit.client.0.vm04.stderr:++ echo open_pools_parallel
2026-03-10T10:16:20.527 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.529 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_list_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/list_parallel.xml
2026-03-10T10:16:20.530 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=open_pools_parallel
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stdout:test open_pools_parallel on pid 59923
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59923
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test open_pools_parallel on pid 59923'
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59923
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in api_aio api_aio_pp api_io api_io_pp api_asio api_list api_lock api_lock_pp api_misc api_misc_pp api_tier_pp api_pool api_snapshots api_snapshots_pp api_stat api_stat_pp api_watch_notify api_watch_notify_pp api_cmd api_cmd_pp api_service api_service_pp api_c_write_operations api_c_read_operations list_parallel open_pools_parallel delete_pools_parallel
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml 2>&1 | tee ceph_test_rados_open_pools_parallel.log | sed "s/^/ open_pools_parallel: /"'
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.531 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.532 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ open_pools_parallel: /'
2026-03-10T10:16:20.533 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_open_pools_parallel.log
2026-03-10T10:16:20.534 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_open_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/open_pools_parallel.xml
2026-03-10T10:16:20.537 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s delete_pools_parallel
2026-03-10T10:16:20.537 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' delete_pools_parallel'
2026-03-10T10:16:20.547 INFO:tasks.workunit.client.0.vm04.stderr:++ echo delete_pools_parallel
2026-03-10T10:16:20.547 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.548 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=delete_pools_parallel
2026-03-10T10:16:20.549 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59945
2026-03-10T10:16:20.549 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test delete_pools_parallel on pid 59945'
2026-03-10T10:16:20.549 INFO:tasks.workunit.client.0.vm04.stdout:test delete_pools_parallel on pid 59945
2026-03-10T10:16:20.549 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=59945
2026-03-10T10:16:20.549 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.549 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.555 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml 2>&1 | tee ceph_test_rados_delete_pools_parallel.log | sed "s/^/ delete_pools_parallel: /"'
2026-03-10T10:16:20.555 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.555 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.555 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.555 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.556 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_rados_delete_pools_parallel --gtest_output=xml:/home/ubuntu/cephtest/archive/unit_test_xml_report/delete_pools_parallel.xml
2026-03-10T10:16:20.558 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s cls
2026-03-10T10:16:20.559 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' cls'
2026-03-10T10:16:20.561 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_rados_delete_pools_parallel.log
2026-03-10T10:16:20.562 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ delete_pools_parallel: /'
2026-03-10T10:16:20.573 INFO:tasks.workunit.client.0.vm04.stderr:++ echo cls
2026-03-10T10:16:20.573 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.580 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=cls
2026-03-10T10:16:20.580 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60012
2026-03-10T10:16:20.580 INFO:tasks.workunit.client.0.vm04.stdout:test cls on pid 60012
2026-03-10T10:16:20.581 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test cls on pid 60012'
2026-03-10T10:16:20.581 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60012
2026-03-10T10:16:20.581 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.581 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.583 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s cmd
2026-03-10T10:16:20.584 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' cmd'
2026-03-10T10:16:20.584 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cls 2>&1 | tee ceph_test_neorados_cls.log | sed "s/^/ cls: /"'
2026-03-10T10:16:20.585 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.585 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.585 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.585 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.585 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ cls: /'
2026-03-10T10:16:20.589 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.591 INFO:tasks.workunit.client.0.vm04.stderr:++ echo cmd
2026-03-10T10:16:20.591 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=cmd
2026-03-10T10:16:20.592 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60029
2026-03-10T10:16:20.592 INFO:tasks.workunit.client.0.vm04.stdout:test cmd on pid 60029
2026-03-10T10:16:20.592 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test cmd on pid 60029'
2026-03-10T10:16:20.592 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60029
2026-03-10T10:16:20.592 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.592 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.592 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s handler_error
2026-03-10T10:16:20.593 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' handler_error'
2026-03-10T10:16:20.593 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_cls
2026-03-10T10:16:20.594 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_cls.log
2026-03-10T10:16:20.595 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_cmd 2>&1 | tee ceph_test_neorados_cmd.log | sed "s/^/ cmd: /"'
2026-03-10T10:16:20.596 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.599 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.599 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.599 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.601 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ cmd: /'
2026-03-10T10:16:20.601 INFO:tasks.workunit.client.0.vm04.stderr:++ echo handler_error
2026-03-10T10:16:20.601 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.602 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_cmd.log
2026-03-10T10:16:20.603 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_cmd
2026-03-10T10:16:20.607 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=handler_error
2026-03-10T10:16:20.607 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60063
2026-03-10T10:16:20.607 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test handler_error on pid 60063'
2026-03-10T10:16:20.607 INFO:tasks.workunit.client.0.vm04.stdout:test handler_error on pid 60063
2026-03-10T10:16:20.607 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60063
2026-03-10T10:16:20.607 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.607 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.615 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s io
2026-03-10T10:16:20.615 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_handler_error 2>&1 | tee ceph_test_neorados_handler_error.log | sed "s/^/ handler_error: /"'
2026-03-10T10:16:20.616 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.616 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' io'
2026-03-10T10:16:20.617 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.617 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.617 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.618 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ handler_error: /'
2026-03-10T10:16:20.619 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_handler_error.log
2026-03-10T10:16:20.620 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_handler_error
2026-03-10T10:16:20.630 INFO:tasks.workunit.client.0.vm04.stderr:++ echo io
2026-03-10T10:16:20.630 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.632 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=io
2026-03-10T10:16:20.633 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60094
2026-03-10T10:16:20.633 INFO:tasks.workunit.client.0.vm04.stdout:test io on pid 60094
2026-03-10T10:16:20.633 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test io on pid 60094'
2026-03-10T10:16:20.633 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60094
2026-03-10T10:16:20.633 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.633 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.636 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_io 2>&1 | tee ceph_test_neorados_io.log | sed "s/^/ io: /"'
2026-03-10T10:16:20.638 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.638 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.638 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.638 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.638 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ io: /'
2026-03-10T10:16:20.638 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_io.log
2026-03-10T10:16:20.639 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s ec_io
2026-03-10T10:16:20.640 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_io
2026-03-10T10:16:20.640 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' ec_io'
2026-03-10T10:16:20.649 INFO:tasks.workunit.client.0.vm04.stderr:++ echo ec_io
2026-03-10T10:16:20.649 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.651 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=ec_io
2026-03-10T10:16:20.651 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60116
2026-03-10T10:16:20.651 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test ec_io on pid 60116'
2026-03-10T10:16:20.651 INFO:tasks.workunit.client.0.vm04.stdout:test ec_io on pid 60116
2026-03-10T10:16:20.651 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60116
2026-03-10T10:16:20.651 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.651 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.651 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_io 2>&1 | tee ceph_test_neorados_ec_io.log | sed "s/^/ ec_io: /"'
2026-03-10T10:16:20.653 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.653 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.653 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.653 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.654 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ ec_io: /'
2026-03-10T10:16:20.654 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_ec_io.log
2026-03-10T10:16:20.655 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_ec_io
2026-03-10T10:16:20.655 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s list
2026-03-10T10:16:20.655 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' list'
2026-03-10T10:16:20.656 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.657 INFO:tasks.workunit.client.0.vm04.stderr:++ echo list
2026-03-10T10:16:20.658 INFO:tasks.workunit.client.0.vm04.stdout:test list on pid 60133
2026-03-10T10:16:20.658 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=list
2026-03-10T10:16:20.658 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60133
2026-03-10T10:16:20.658 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test list on pid 60133'
2026-03-10T10:16:20.658 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60133
2026-03-10T10:16:20.658 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.658 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.658 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_list 2>&1 | tee ceph_test_neorados_list.log | sed "s/^/ list: /"'
2026-03-10T10:16:20.658 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s ec_list
2026-03-10T10:16:20.660 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' ec_list'
2026-03-10T10:16:20.660 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.660 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.660 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.661 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.661 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ list: /'
2026-03-10T10:16:20.663 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_list.log
2026-03-10T10:16:20.663 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_list
2026-03-10T10:16:20.671 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.673 INFO:tasks.workunit.client.0.vm04.stderr:++ echo ec_list
2026-03-10T10:16:20.679 INFO:tasks.workunit.client.0.vm04.stdout:test ec_list on pid 60165
2026-03-10T10:16:20.679 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=ec_list
2026-03-10T10:16:20.679 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60165
2026-03-10T10:16:20.679 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test ec_list on pid 60165'
2026-03-10T10:16:20.679 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60165
2026-03-10T10:16:20.679 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.679 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.682 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_ec_list 2>&1 | tee ceph_test_neorados_ec_list.log | sed "s/^/ ec_list: /"'
2026-03-10T10:16:20.684 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s misc
2026-03-10T10:16:20.684 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' misc'
2026-03-10T10:16:20.685 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.685 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.685 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.685 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.687 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_ec_list.log
2026-03-10T10:16:20.687 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ ec_list: /'
2026-03-10T10:16:20.689 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_ec_list
2026-03-10T10:16:20.692 INFO:tasks.workunit.client.0.vm04.stderr:++ echo misc
2026-03-10T10:16:20.692 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.696 INFO:tasks.workunit.client.0.vm04.stdout:test misc on pid 60194
2026-03-10T10:16:20.696 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=misc
2026-03-10T10:16:20.696 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60194
2026-03-10T10:16:20.696 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test misc on pid 60194'
2026-03-10T10:16:20.696 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60194
2026-03-10T10:16:20.696 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.696 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.696 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s pool
2026-03-10T10:16:20.696 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' pool'
2026-03-10T10:16:20.696 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_misc 2>&1 | tee ceph_test_neorados_misc.log | sed "s/^/ misc: /"'
2026-03-10T10:16:20.697 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.697 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.697 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.697 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.697 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ misc: /'
2026-03-10T10:16:20.698 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_misc.log
2026-03-10T10:16:20.699 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_misc
2026-03-10T10:16:20.705 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.707 INFO:tasks.workunit.client.0.vm04.stderr:++ echo pool
2026-03-10T10:16:20.708 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=pool
2026-03-10T10:16:20.708 INFO:tasks.workunit.client.0.vm04.stdout:test pool on pid 60209
2026-03-10T10:16:20.708 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60209
2026-03-10T10:16:20.708 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test pool on pid 60209'
2026-03-10T10:16:20.708 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60209
2026-03-10T10:16:20.708 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.708 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.708 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_pool 2>&1 | tee ceph_test_neorados_pool.log | sed "s/^/ pool: /"'
2026-03-10T10:16:20.710 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s read_operations
2026-03-10T10:16:20.712 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' read_operations'
2026-03-10T10:16:20.712 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.712 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.712 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.712 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.718 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_pool.log
2026-03-10T10:16:20.718 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ pool: /'
2026-03-10T10:16:20.718 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_pool
2026-03-10T10:16:20.718 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.719 INFO:tasks.workunit.client.0.vm04.stderr:++ echo read_operations
2026-03-10T10:16:20.720 INFO:tasks.workunit.client.0.vm04.stdout:test read_operations on pid 60239
2026-03-10T10:16:20.720 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=read_operations
2026-03-10T10:16:20.720 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60239
2026-03-10T10:16:20.720 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test read_operations on pid 60239'
2026-03-10T10:16:20.720 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60239
2026-03-10T10:16:20.720 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.720 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.721 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s snapshots
2026-03-10T10:16:20.721 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' snapshots'
2026-03-10T10:16:20.721 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.722 INFO:tasks.workunit.client.0.vm04.stderr:++ echo snapshots
2026-03-10T10:16:20.722 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=snapshots
2026-03-10T10:16:20.722 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60244
2026-03-10T10:16:20.722 INFO:tasks.workunit.client.0.vm04.stdout:test snapshots on pid 60244
2026-03-10T10:16:20.722 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test snapshots on pid 60244'
2026-03-10T10:16:20.723 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60244
2026-03-10T10:16:20.723 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.723 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.723 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_read_operations 2>&1 | tee ceph_test_neorados_read_operations.log | sed "s/^/ read_operations: /"'
2026-03-10T10:16:20.723 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s watch_notify
2026-03-10T10:16:20.723 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' watch_notify'
2026-03-10T10:16:20.723 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_snapshots 2>&1 | tee ceph_test_neorados_snapshots.log | sed "s/^/ snapshots: /"'
2026-03-10T10:16:20.724 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.724 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.724 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.724 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.724 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ snapshots: /'
2026-03-10T10:16:20.725 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.726 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_snapshots.log
2026-03-10T10:16:20.726 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.726 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.726 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.726 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.726 INFO:tasks.workunit.client.0.vm04.stderr:++ echo watch_notify
2026-03-10T10:16:20.727 INFO:tasks.workunit.client.0.vm04.stdout:test watch_notify on pid 60255
2026-03-10T10:16:20.727 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=watch_notify
2026-03-10T10:16:20.727 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60255
2026-03-10T10:16:20.727 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test watch_notify on pid 60255'
2026-03-10T10:16:20.728 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60255
2026-03-10T10:16:20.728 INFO:tasks.workunit.client.0.vm04.stderr:+ for f in cls cmd handler_error io ec_io list ec_list misc pool read_operations snapshots watch_notify write_operations
2026-03-10T10:16:20.728 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.728 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_snapshots
2026-03-10T10:16:20.729 INFO:tasks.workunit.client.0.vm04.stderr:++ printf %25s write_operations
2026-03-10T10:16:20.729 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ read_operations: /'
2026-03-10T10:16:20.730 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_read_operations.log
2026-03-10T10:16:20.730 INFO:tasks.workunit.client.0.vm04.stderr:+ r=' write_operations'
2026-03-10T10:16:20.733 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_read_operations
2026-03-10T10:16:20.733 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_watch_notify 2>&1 | tee ceph_test_neorados_watch_notify.log | sed "s/^/ watch_notify: /"'
2026-03-10T10:16:20.734 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.734 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.734 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.734 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.736 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ watch_notify: /'
2026-03-10T10:16:20.737 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_watch_notify.log
2026-03-10T10:16:20.738 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_watch_notify
2026-03-10T10:16:20.738 INFO:tasks.workunit.client.0.vm04.stderr:++ awk '{print $1}'
2026-03-10T10:16:20.741 INFO:tasks.workunit.client.0.vm04.stderr:++ echo write_operations
2026-03-10T10:16:20.743 INFO:tasks.workunit.client.0.vm04.stdout:test write_operations on pid 60269
2026-03-10T10:16:20.743 INFO:tasks.workunit.client.0.vm04.stderr:+ ff=write_operations
2026-03-10T10:16:20.743 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60269
2026-03-10T10:16:20.743 INFO:tasks.workunit.client.0.vm04.stderr:+ echo 'test write_operations on pid 60269'
2026-03-10T10:16:20.743 INFO:tasks.workunit.client.0.vm04.stderr:+ pids[$f]=60269
2026-03-10T10:16:20.743 INFO:tasks.workunit.client.0.vm04.stderr:+ ret=0
2026-03-10T10:16:20.743 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' 1 -eq 1 ']'
2026-03-10T10:16:20.743 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:16:20.743 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59487
2026-03-10T10:16:20.743 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59487
2026-03-10T10:16:20.750 INFO:tasks.workunit.client.0.vm04.stderr:+ bash -o pipefail -exc 'ceph_test_neorados_write_operations 2>&1 | tee ceph_test_neorados_write_operations.log | sed "s/^/ write_operations: /"'
2026-03-10T10:16:20.751 INFO:tasks.workunit.client.0.vm04.stderr:+ '[' -z '' ']'
2026-03-10T10:16:20.751 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.751 INFO:tasks.workunit.client.0.vm04.stderr:+ case $- in
2026-03-10T10:16:20.752 INFO:tasks.workunit.client.0.vm04.stderr:+ return
2026-03-10T10:16:20.754 INFO:tasks.workunit.client.0.vm04.stderr:+ sed 's/^/ write_operations: /'
2026-03-10T10:16:20.758 INFO:tasks.workunit.client.0.vm04.stderr:+ tee ceph_test_neorados_write_operations.log
2026-03-10T10:16:20.759 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph_test_neorados_write_operations
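The trace above is the workunit's parallel launcher: with parallel mode on ('[' 1 -eq 1 ']'), each ceph_test_neorados_* gtest binary is started in its own background shell, its output is duplicated to a per-test log with tee and tagged with a per-test sed prefix, and the child pid is recorded before the script loops over the pid map and waits on each test. A minimal sketch of that pattern, reconstructed from the set -x output (variable names and the failure message are stand-ins, not the qa script verbatim):

    declare -A pids
    for f in cls cmd handler_error io ec_io list ec_list misc pool \
             read_operations snapshots watch_notify write_operations; do
      r=$(printf %25s "$f")   # right-aligned tag so interleaved output stays attributable
      # pipefail keeps the test binary's exit status visible through the tee|sed pipeline
      bash -o pipefail -exc \
        "ceph_test_neorados_$f 2>&1 | tee ceph_test_neorados_$f.log | sed \"s/^/$r: /\"" &
      pids[$f]=$!
      echo "test $f on pid ${pids[$f]}"
    done
    ret=0
    for t in "${!pids[@]}"; do
      wait "${pids[$t]}" || { echo "$t failed"; ret=1; }   # remember any failure
    done
    exit $ret

The bash -o pipefail -exc wrapper matters here: without pipefail, a crashing test binary would be masked by the zero exit status of the trailing sed, and the wait in the collection loop would report success.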
2026-03-10T10:16:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.411916+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.413236+0000 mon.b (mon.1) 32 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.413877+0000 mon.a (mon.0) 832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.415147+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.415677+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.417595+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.469328+0000 mon.c (mon.2) 31 : audit [DBG] from='client.? 192.168.123.104:0/2180440549' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.691531+0000 mon.c (mon.2) 32 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm04-60121-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.691954+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm04-60121-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.716100+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm04-60174-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:21 vm04 bash[20742]: audit 2026-03-10T10:16:20.720903+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm04-60174-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.411916+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.413236+0000 mon.b (mon.1) 32 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.413877+0000 mon.a (mon.0) 832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.415147+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.415677+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.417595+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.469328+0000 mon.c (mon.2) 31 : audit [DBG] from='client.? 192.168.123.104:0/2180440549' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.691531+0000 mon.c (mon.2) 32 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm04-60121-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.691954+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm04-60121-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.716100+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm04-60174-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:21 vm04 bash[28289]: audit 2026-03-10T10:16:20.720903+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm04-60174-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
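The audit records above show each erasure-code test case removing and recreating its own EC profile and CRUSH rule; the same batch appears in each monitor's journal (mon.a and mon.c on vm04, mon.b on vm07) because the cluster audit log is propagated to all monitors. The JSON cmd payloads correspond to these CLI invocations (profile and rule names are the per-test examples from the log):

    ceph osd erasure-code-profile rm testprofile-LibRadosWatchNotifyECPP_vm04-59675-1
    ceph osd crush rule rm LibRadosWatchNotifyECPP_vm04-59675-1
    ceph osd erasure-code-profile set testprofile-LibRadosWatchNotifyECPP_vm04-59675-1 \
        k=2 m=1 crush-failure-domain=osd

With k=2 m=1, each object is striped into two data chunks plus one coding chunk, so the pool tolerates the loss of a single OSD; crush-failure-domain=osd places the three chunks on distinct OSDs rather than distinct hosts, which keeps the profile usable on this small cluster (the osdmap below reports 8 OSDs across the two test nodes).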
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.411916+0000 mon.b (mon.1) 31 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.413236+0000 mon.b (mon.1) 32 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.413877+0000 mon.a (mon.0) 832 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.415147+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.415677+0000 mon.b (mon.1) 33 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.417595+0000 mon.a (mon.0) 834 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.469328+0000 mon.c (mon.2) 31 : audit [DBG] from='client.? 192.168.123.104:0/2180440549' entity='client.admin' cmd=[{"prefix":"quorum_status"}]: dispatch
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.691531+0000 mon.c (mon.2) 32 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm04-60121-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.691954+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm04-60121-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.716100+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm04-60174-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:21 vm07 bash[23367]: audit 2026-03-10T10:16:20.720903+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm04-60174-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
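In the 10:16:22 batch that follows, the leader (mon.a) reports the earlier profile-set commands as 'finished', and the tests move on to creating erasure-coded pools against the per-test profiles plus 'osd pool application enable' calls that tag each test pool with the rados application (an untagged pool would otherwise raise the POOL_APP_NOT_ENABLED health warning). The CLI equivalents of those payloads, with pool and profile names again taken from the log:

    ceph osd pool create ListObjectsvm04-60174-1 8 8 erasure testprofile-ListObjectsvm04-60174-1
    ceph osd pool application enable LibRadosIo_vm04-59274-1 rados --yes-i-really-mean-it

The 8 8 pair is the pg_num/pgp_num from the audit payload, and --yes-i-really-mean-it matches the "yes_i_really_mean_it": true field the tests pass to bypass the monitor's safety check.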
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: cluster 2026-03-10T10:16:20.310810+0000 mgr.y (mgr.24422) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.153566+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.153683+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm04-60121-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.153760+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm04-60174-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.181262+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.181512+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.186718+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.104:0/2556399022' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.189796+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.104:0/2994003743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.189900+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.104:0/3733426727' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: cluster 2026-03-10T10:16:21.229652+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.240224+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.245100+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.104:0/3007984408' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.245533+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.104:0/685286540' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.245918+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.246372+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.249852+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.104:0/3514642772' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.250229+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.104:0/1019460894' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.252808+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.104:0/1296779503' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.253324+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.104:0/2230152973' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.253792+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.104:0/1819213432' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.254446+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.104:0/1040227268' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.292549+0000 mon.a (mon.0) 841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.292908+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 192.168.123.104:0/653623571' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm04-59259-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.292962+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.104:0/692604628' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm04-59290-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.293003+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 192.168.123.104:0/514098834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm04-59409-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.293046+0000 mon.a (mon.0) 845 : audit [INF] from='client.? 192.168.123.104:0/3753958607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm04-59541-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.293086+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.104:0/3335916557' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59729-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.293129+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.104:0/1657916283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.296315+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.296506+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.296554+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.296641+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.296683+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.296724+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.297865+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.298029+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.298090+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.298155+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.298216+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.298279+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.298338+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.298400+0000 mon.a (mon.0) 861 : audit [INF] from='client.?
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.298400+0000 mon.a (mon.0) 861 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.298462+0000 mon.a (mon.0) 862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.298462+0000 mon.a (mon.0) 862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.608054+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.608054+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.617344+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.617344+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.675829+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.675829+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.681179+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.681179+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.976774+0000 mon.a (mon.0) 867 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.976774+0000 mon.a (mon.0) 867 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.977242+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.977242+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.982768+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:22 vm04 bash[28289]: audit 2026-03-10T10:16:21.982768+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: cluster 2026-03-10T10:16:20.310810+0000 mgr.y (mgr.24422) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: cluster 2026-03-10T10:16:20.310810+0000 mgr.y (mgr.24422) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.153566+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.153566+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.153683+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm04-60121-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.153683+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm04-60121-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.153760+0000 mon.a (mon.0) 839 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm04-60174-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.153760+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm04-60174-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.181262+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.181262+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.181512+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.181512+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.186718+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.104:0/2556399022' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.186718+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.104:0/2556399022' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.189796+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 
192.168.123.104:0/2994003743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.189796+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.104:0/2994003743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.189900+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.104:0/3733426727' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.189900+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.104:0/3733426727' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: cluster 2026-03-10T10:16:21.229652+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: cluster 2026-03-10T10:16:21.229652+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.240224+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.240224+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.245100+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.104:0/3007984408' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.245100+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.104:0/3007984408' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.245533+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 
192.168.123.104:0/685286540' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.245533+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.104:0/685286540' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.245918+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.245918+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.246372+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.246372+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.249852+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.104:0/3514642772' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.249852+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.104:0/3514642772' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.250229+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.104:0/1019460894' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.250229+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 
192.168.123.104:0/1019460894' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.252808+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.104:0/1296779503' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.252808+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.104:0/1296779503' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.253324+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.104:0/2230152973' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.253324+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.104:0/2230152973' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.253792+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.104:0/1819213432' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.253792+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.104:0/1819213432' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.254446+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.104:0/1040227268' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.254446+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.104:0/1040227268' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.292549+0000 mon.a (mon.0) 841 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.292549+0000 mon.a (mon.0) 841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.292908+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 192.168.123.104:0/653623571' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm04-59259-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.292908+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 192.168.123.104:0/653623571' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm04-59259-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.292962+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.104:0/692604628' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm04-59290-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.292962+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.104:0/692604628' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm04-59290-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.293003+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 192.168.123.104:0/514098834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm04-59409-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.293003+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 192.168.123.104:0/514098834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm04-59409-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.293046+0000 mon.a (mon.0) 845 : audit [INF] from='client.? 192.168.123.104:0/3753958607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm04-59541-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.293046+0000 mon.a (mon.0) 845 : audit [INF] from='client.? 
192.168.123.104:0/3753958607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm04-59541-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.293086+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.104:0/3335916557' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59729-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.293086+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.104:0/3335916557' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59729-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.293129+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.104:0/1657916283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.293129+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.104:0/1657916283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296315+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296315+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296506+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296506+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296554+0000 mon.a (mon.0) 850 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296554+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296641+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296641+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296683+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296683+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296724+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.296724+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.297865+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.297865+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298029+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298029+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298090+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298090+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298155+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298155+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298216+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298216+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298279+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298279+0000 mon.a (mon.0) 859 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298338+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298338+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298400+0000 mon.a (mon.0) 861 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298400+0000 mon.a (mon.0) 861 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298462+0000 mon.a (mon.0) 862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.298462+0000 mon.a (mon.0) 862 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.608054+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.608054+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.617344+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.617344+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.675829+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.675829+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.681179+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.681179+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.976774+0000 mon.a (mon.0) 867 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.976774+0000 mon.a (mon.0) 867 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.977242+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.977242+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.982768+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.708 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:22 vm04 bash[20742]: audit 2026-03-10T10:16:21.982768+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24422 
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: cluster 2026-03-10T10:16:20.310810+0000 mgr.y (mgr.24422) 95 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.153566+0000 mon.a (mon.0) 837 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.153683+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritevm04-60121-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.153760+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsvm04-60174-1", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.181262+0000 mon.b (mon.1) 35 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]: dispatch
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.181512+0000 mon.b (mon.1) 36 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.186718+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.104:0/2556399022' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.189796+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.104:0/2994003743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.189900+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.104:0/3733426727' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: cluster 2026-03-10T10:16:21.229652+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in
2026-03-10T10:16:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.240224+0000 mon.c (mon.2) 33 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.245100+0000 mon.c (mon.2) 34 : audit [INF] from='client.? 192.168.123.104:0/3007984408' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.245533+0000 mon.c (mon.2) 35 : audit [INF] from='client.? 192.168.123.104:0/685286540' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.245918+0000 mon.c (mon.2) 36 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.246372+0000 mon.c (mon.2) 37 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.249852+0000 mon.c (mon.2) 38 : audit [INF] from='client.? 192.168.123.104:0/3514642772' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.250229+0000 mon.c (mon.2) 39 : audit [INF] from='client.? 192.168.123.104:0/1019460894' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.252808+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.104:0/1296779503' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.253324+0000 mon.c (mon.2) 41 : audit [INF] from='client.? 192.168.123.104:0/2230152973' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.253792+0000 mon.c (mon.2) 42 : audit [INF] from='client.? 192.168.123.104:0/1819213432' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.254446+0000 mon.c (mon.2) 43 : audit [INF] from='client.? 192.168.123.104:0/1040227268' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.292549+0000 mon.a (mon.0) 841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.292908+0000 mon.a (mon.0) 842 : audit [INF] from='client.? 192.168.123.104:0/653623571' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBigPP_vm04-59259-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.292962+0000 mon.a (mon.0) 843 : audit [INF] from='client.? 192.168.123.104:0/692604628' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm04-59290-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.293003+0000 mon.a (mon.0) 844 : audit [INF] from='client.? 192.168.123.104:0/514098834' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm04-59409-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.293046+0000 mon.a (mon.0) 845 : audit [INF] from='client.? 192.168.123.104:0/3753958607' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm04-59541-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.293086+0000 mon.a (mon.0) 846 : audit [INF] from='client.? 192.168.123.104:0/3335916557' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59729-1","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.293129+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 
192.168.123.104:0/1657916283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.293129+0000 mon.a (mon.0) 847 : audit [INF] from='client.? 192.168.123.104:0/1657916283' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296315+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296315+0000 mon.a (mon.0) 848 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296506+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296506+0000 mon.a (mon.0) 849 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296554+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296554+0000 mon.a (mon.0) 850 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296641+0000 mon.a (mon.0) 851 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296641+0000 mon.a (mon.0) 851 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296683+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296683+0000 mon.a (mon.0) 852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296724+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.296724+0000 mon.a (mon.0) 853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.297865+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.297865+0000 mon.a (mon.0) 854 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298029+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298029+0000 mon.a (mon.0) 855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298090+0000 mon.a (mon.0) 856 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298090+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298155+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298155+0000 mon.a (mon.0) 857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298216+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298216+0000 mon.a (mon.0) 858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298279+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298279+0000 mon.a (mon.0) 859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298338+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298338+0000 mon.a (mon.0) 860 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298400+0000 mon.a (mon.0) 861 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298400+0000 mon.a (mon.0) 861 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298462+0000 mon.a (mon.0) 862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.298462+0000 mon.a (mon.0) 862 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.608054+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.608054+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.617344+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.617344+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.675829+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.675829+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.681179+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.681179+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.976774+0000 mon.a (mon.0) 867 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.976774+0000 mon.a (mon.0) 867 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.977242+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.977242+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.982768+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:22 vm07 bash[23367]: audit 2026-03-10T10:16:21.982768+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [==========] Running 12 tests from 1 test suite. 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [----------] Global test environment set-up. 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [----------] 12 tests from AsioRados 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncReadCallback 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncReadCallback (0 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncReadFuture 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncReadFuture (1 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncReadYield 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncReadYield (0 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteCallback 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncWriteCallback (8 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteFuture 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncWriteFuture (12 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteYield 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncWriteYield (9 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationCallback 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationCallback (1 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationFuture 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationFuture (4 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncReadOperationYield 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncReadOperationYield (0 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationCallback 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationCallback (6 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] 
AsioRados.AsyncWriteOperationFuture 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationFuture (15 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ RUN ] AsioRados.AsyncWriteOperationYield 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ OK ] AsioRados.AsyncWriteOperationYield (18 ms) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [----------] 12 tests from AsioRados (74 ms total) 2026-03-10T10:16:22.938 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: 2026-03-10T10:16:22.939 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [----------] Global test environment tear-down 2026-03-10T10:16:22.939 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [==========] 12 tests from 1 test suite ran. (2709 ms total) 2026-03-10T10:16:22.939 INFO:tasks.workunit.client.0.vm04.stdout: api_asio: [ PASSED ] 12 tests. 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: Running main() from gmock_main.cc 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [==========] Running 3 tests from 1 test suite. 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [----------] Global test environment set-up. 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.MonDescribePP 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [ OK ] LibRadosCmd.MonDescribePP (43 ms) 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.OSDCmdPP 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [ OK ] LibRadosCmd.OSDCmdPP (25 ms) 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [ RUN ] LibRadosCmd.PGCmdPP 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [ OK ] LibRadosCmd.PGCmdPP (2708 ms) 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [----------] 3 tests from LibRadosCmd (2776 ms total) 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [----------] Global test environment tear-down 2026-03-10T10:16:23.237 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [==========] 3 tests from 1 test suite ran. (2776 ms total) 2026-03-10T10:16:23.238 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd_pp: [ PASSED ] 3 tests. 2026-03-10T10:16:23.272 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: Running main() from gmock_main.cc 2026-03-10T10:16:23.272 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: [==========] Running 1 test from 1 test suite. 2026-03-10T10:16:23.272 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: [----------] Global test environment set-up. 
2026-03-10T10:16:23.272 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: [----------] 1 test from neocls_handler_error 2026-03-10T10:16:23.272 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: [ RUN ] neocls_handler_error.test_handler_error 2026-03-10T10:16:23.272 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: [ OK ] neocls_handler_error.test_handler_error (2609 ms) 2026-03-10T10:16:23.273 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: [----------] 1 test from neocls_handler_error (2609 ms total) 2026-03-10T10:16:23.273 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: 2026-03-10T10:16:23.273 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: [----------] Global test environment tear-down 2026-03-10T10:16:23.273 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: [==========] 1 test from 1 test suite ran. (2609 ms total) 2026-03-10T10:16:23.273 INFO:tasks.workunit.client.0.vm04.stdout: handler_error: [ PASSED ] 1 test. 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: Running main() from gmock_main.cc 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: [==========] Running 1 test from 1 test suite. 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: [----------] Global test environment set-up. 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: [----------] 1 test from NeoRadosCls 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: [ RUN ] NeoRadosCls.DNE 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: [ OK ] NeoRadosCls.DNE (2669 ms) 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: [----------] 1 test from NeoRadosCls (2669 ms total) 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: [----------] Global test environment tear-down 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: [==========] 1 test from 1 test suite ran. (2669 ms total) 2026-03-10T10:16:23.289 INFO:tasks.workunit.client.0.vm04.stdout: cls: [ PASSED ] 1 test. 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: Running main() from gmock_main.cc 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [==========] Running 17 tests from 1 test suite. 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [----------] Global test environment set-up. 
2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.NewDelete 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.NewDelete (0 ms) 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.SetOpFlags 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.SetOpFlags (469 ms) 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertExists 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertExists (5 ms) 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.AssertVersion 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.AssertVersion (120 ms) 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpXattr 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpXattr (51 ms) 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Read 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Read (17 ms) 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Checksum 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Checksum (9 ms) 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.RWOrderedRead 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.RWOrderedRead (15 ms) 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ShortRead 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ShortRead (15 ms) 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Exec 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Exec (8 ms) 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.ExecUserBuf 2026-03-10T10:16:23.305 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.ExecUserBuf (10 ms) 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat (7 ms) 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Stat2 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Stat2 (10 ms) 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.Omap 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.Omap 
(16 ms) 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.OmapNuls 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.OmapNuls (16 ms) 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.GetXattrs 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.GetXattrs (18 ms) 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ RUN ] CReadOpsTest.CmpExt 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ OK ] CReadOpsTest.CmpExt (5 ms) 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [----------] 17 tests from CReadOpsTest (792 ms total) 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [----------] Global test environment tear-down 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [==========] 17 tests from 1 test suite ran. (2784 ms total) 2026-03-10T10:16:23.306 INFO:tasks.workunit.client.0.vm04.stdout: api_c_read_operations: [ PASSED ] 17 tests. 2026-03-10T10:16:23.349 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: cluster 2026-03-10T10:16:21.978510+0000 mgr.y (mgr.24422) 96 : cluster [DBG] pgmap v54: 1220 pgs: 1088 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T10:16:23.349 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:16:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:16:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: Running main() from gmock_main.cc 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [==========] Running 4 tests from 1 test suite. 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [----------] Global test environment set-up. 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [----------] 4 tests from LibRadosCmd 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [ RUN ] LibRadosCmd.MonDescribe 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [ OK ] LibRadosCmd.MonDescribe (41 ms) 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [ RUN ] LibRadosCmd.OSDCmd 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [ OK ] LibRadosCmd.OSDCmd (36 ms) 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [ RUN ] LibRadosCmd.PGCmd 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [ OK ] LibRadosCmd.PGCmd (2710 ms) 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [ RUN ] LibRadosCmd.WatchLog 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.161916+0000 mon.a [INF] from='client.? 192.168.123.104:0/653623571' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm04-59259-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.161991+0000 mon.a [INF] from='client.? 
192.168.123.104:0/692604628' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm04-59290-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.162042+0000 mon.a [INF] from='client.? 192.168.123.104:0/514098834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm04-59409-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.162423+0000 mon.a [INF] from='client.? 192.168.123.104:0/3753958607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm04-59541-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.162568+0000 mon.a [INF] from='client.? 192.168.123.104:0/3335916557' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59729-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.162818+0000 mon.a [INF] from='client.? 192.168.123.104:0/1657916283' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163010+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163060+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163106+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163141+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163180+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163211+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163256+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163281+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163320+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163374+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163417+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163456+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.163499+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.679871+0000 mon.a [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:22.679896+0000 mon.a [WRN] Health check failed: 34 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.179486+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.179538+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.179564+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.208531+0000 mon.a [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.260279+0000 mon.b [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.265894+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.368 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.267947+0000 mon.b [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: cluster 2026-03-10T10:16:21.978510+0000 mgr.y (mgr.24422) 96 : cluster [DBG] pgmap v54: 1220 pgs: 1088 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: cluster 2026-03-10T10:16:21.978510+0000 mgr.y (mgr.24422) 96 : cluster [DBG] pgmap v54: 1220 pgs: 1088 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.161916+0000 mon.a (mon.0) 870 : audit [INF] from='client.? 192.168.123.104:0/653623571' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm04-59259-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.161916+0000 mon.a (mon.0) 870 : audit [INF] from='client.? 192.168.123.104:0/653623571' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm04-59259-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.161991+0000 mon.a (mon.0) 871 : audit [INF] from='client.? 192.168.123.104:0/692604628' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm04-59290-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.161991+0000 mon.a (mon.0) 871 : audit [INF] from='client.? 
192.168.123.104:0/692604628' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm04-59290-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.162042+0000 mon.a (mon.0) 872 : audit [INF] from='client.? 192.168.123.104:0/514098834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm04-59409-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.162042+0000 mon.a (mon.0) 872 : audit [INF] from='client.? 192.168.123.104:0/514098834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm04-59409-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.162423+0000 mon.a (mon.0) 873 : audit [INF] from='client.? 192.168.123.104:0/3753958607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm04-59541-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.162423+0000 mon.a (mon.0) 873 : audit [INF] from='client.? 192.168.123.104:0/3753958607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm04-59541-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.162568+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.104:0/3335916557' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59729-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.162568+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.104:0/3335916557' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59729-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.162818+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 192.168.123.104:0/1657916283' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.162818+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 192.168.123.104:0/1657916283' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163010+0000 mon.a (mon.0) 876 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163010+0000 mon.a (mon.0) 876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163060+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163060+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163106+0000 mon.a (mon.0) 878 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163106+0000 mon.a (mon.0) 878 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163141+0000 mon.a (mon.0) 879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163141+0000 mon.a (mon.0) 879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163180+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163180+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163211+0000 mon.a (mon.0) 881 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163211+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163256+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163256+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163281+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163281+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163320+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163320+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163374+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163374+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163417+0000 mon.a (mon.0) 886 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163417+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163456+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163456+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163499+0000 mon.a (mon.0) 888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:22.163499+0000 mon.a (mon.0) 888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: cluster 2026-03-10T10:16:22.210688+0000 mon.a (mon.0) 889 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: cluster 2026-03-10T10:16:22.210688+0000 mon.a (mon.0) 889 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: cluster 2026-03-10T10:16:22.679871+0000 mon.a (mon.0) 890 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: cluster 2026-03-10T10:16:22.679871+0000 mon.a (mon.0) 890 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: cluster 2026-03-10T10:16:22.679896+0000 mon.a (mon.0) 891 : cluster [WRN] Health check failed: 34 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: cluster 2026-03-10T10:16:22.679896+0000 mon.a (mon.0) 891 : cluster [WRN] Health check failed: 34 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.179486+0000 mon.a (mon.0) 892 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.179486+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.179538+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.179538+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.179564+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.179564+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: cluster 2026-03-10T10:16:23.185305+0000 mon.a (mon.0) 895 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: cluster 2026-03-10T10:16:23.185305+0000 mon.a (mon.0) 895 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.208531+0000 mon.a (mon.0) 896 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.208531+0000 mon.a (mon.0) 896 : audit [INF] from='client.? 
192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:23.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.260279+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.260279+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.265894+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.265894+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.267947+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.267947+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.292815+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.292815+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.292982+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:23 vm04 bash[28289]: audit 2026-03-10T10:16:23.292982+0000 mon.a (mon.0) 898 : audit [INF] from='client.? 
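Note: the audit entries above are the mon-side record of the erasure-code setup and teardown that the LibRados EC tests drive through the mon. A minimal sketch of the same sequence done by hand with the ceph CLI follows; the profile, pool, and rule names are placeholders, not the generated test names, and the PG counts simply mirror the pg_num/pgp_num of 8 seen in the audit trail:

    # define a 2+1 erasure-code profile with a per-OSD failure domain
    ceph osd erasure-code-profile set myprofile k=2 m=1 crush-failure-domain=osd
    # create an EC pool (8 PGs) backed by that profile, then tag it for RADOS use
    ceph osd pool create mypool 8 8 erasure myprofile
    ceph osd pool application enable mypool rados
    # teardown, as in the audit trail: EC pool creation auto-creates a crush
    # rule named after the pool, so the rule and profile are removed afterwards
    ceph osd crush rule rm mypool
    ceph osd erasure-code-profile rm myprofile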
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: cluster 2026-03-10T10:16:21.978510+0000 mgr.y (mgr.24422) 96 : cluster [DBG] pgmap v54: 1220 pgs: 1088 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.161916+0000 mon.a (mon.0) 870 : audit [INF] from='client.? 192.168.123.104:0/653623571' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm04-59259-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.161916+0000 mon.a (mon.0) 870 : audit [INF] from='client.? 192.168.123.104:0/653623571' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm04-59259-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.161991+0000 mon.a (mon.0) 871 : audit [INF] from='client.? 192.168.123.104:0/692604628' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm04-59290-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.161991+0000 mon.a (mon.0) 871 : audit [INF] from='client.? 192.168.123.104:0/692604628' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm04-59290-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.162042+0000 mon.a (mon.0) 872 : audit [INF] from='client.? 192.168.123.104:0/514098834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm04-59409-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.162042+0000 mon.a (mon.0) 872 : audit [INF] from='client.? 192.168.123.104:0/514098834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm04-59409-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.162423+0000 mon.a (mon.0) 873 : audit [INF] from='client.? 192.168.123.104:0/3753958607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm04-59541-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.162423+0000 mon.a (mon.0) 873 : audit [INF] from='client.? 
192.168.123.104:0/3753958607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm04-59541-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.162568+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.104:0/3335916557' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59729-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.162568+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.104:0/3335916557' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59729-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.162818+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 192.168.123.104:0/1657916283' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.162818+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 192.168.123.104:0/1657916283' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163010+0000 mon.a (mon.0) 876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163010+0000 mon.a (mon.0) 876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163060+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163060+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163106+0000 mon.a (mon.0) 878 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163106+0000 mon.a (mon.0) 878 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163141+0000 mon.a (mon.0) 879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163141+0000 mon.a (mon.0) 879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163180+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163180+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163211+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163211+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163256+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163256+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163281+0000 mon.a (mon.0) 883 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163281+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163320+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163320+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163374+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163374+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163417+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163417+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163456+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163456+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163499+0000 mon.a (mon.0) 888 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:22.163499+0000 mon.a (mon.0) 888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: cluster 2026-03-10T10:16:22.210688+0000 mon.a (mon.0) 889 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: cluster 2026-03-10T10:16:22.210688+0000 mon.a (mon.0) 889 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-10T10:16:23.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: cluster 2026-03-10T10:16:22.679871+0000 mon.a (mon.0) 890 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: cluster 2026-03-10T10:16:22.679871+0000 mon.a (mon.0) 890 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: cluster 2026-03-10T10:16:22.679896+0000 mon.a (mon.0) 891 : cluster [WRN] Health check failed: 34 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: cluster 2026-03-10T10:16:22.679896+0000 mon.a (mon.0) 891 : cluster [WRN] Health check failed: 34 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.179486+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.179486+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.179538+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.179538+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.179564+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.179564+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: cluster 2026-03-10T10:16:23.185305+0000 mon.a (mon.0) 895 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: cluster 2026-03-10T10:16:23.185305+0000 mon.a (mon.0) 895 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.208531+0000 mon.a (mon.0) 896 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.208531+0000 mon.a (mon.0) 896 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.260279+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.260279+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.265894+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.265894+0000 mon.a (mon.0) 897 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.267947+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.267947+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.292815+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.292815+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.292982+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:23 vm04 bash[20742]: audit 2026-03-10T10:16:23.292982+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:23.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: cluster 2026-03-10T10:16:21.978510+0000 mgr.y (mgr.24422) 96 : cluster [DBG] pgmap v54: 1220 pgs: 1088 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T10:16:23.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: cluster 2026-03-10T10:16:21.978510+0000 mgr.y (mgr.24422) 96 : cluster [DBG] pgmap v54: 1220 pgs: 1088 unknown, 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T10:16:23.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.161916+0000 mon.a (mon.0) 870 : audit [INF] from='client.? 192.168.123.104:0/653623571' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBigPP_vm04-59259-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:23.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.161916+0000 mon.a (mon.0) 870 : audit [INF] from='client.? 
2026-03-10T10:16:23.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.161991+0000 mon.a (mon.0) 871 : audit [INF] from='client.? 192.168.123.104:0/692604628' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIoPP_vm04-59290-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.162042+0000 mon.a (mon.0) 872 : audit [INF] from='client.? 192.168.123.104:0/514098834' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLockPP_vm04-59409-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.162423+0000 mon.a (mon.0) 873 : audit [INF] from='client.? 192.168.123.104:0/3753958607' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsPP_vm04-59541-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.162568+0000 mon.a (mon.0) 874 : audit [INF] from='client.? 192.168.123.104:0/3335916557' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59729-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.162818+0000 mon.a (mon.0) 875 : audit [INF] from='client.? 192.168.123.104:0/1657916283' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59695-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163010+0000 mon.a (mon.0) 876 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "TooBig_vm04-59252-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163060+0000 mon.a (mon.0) 877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosLock_vm04-59364-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163106+0000 mon.a (mon.0) 878 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosIo_vm04-59274-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163141+0000 mon.a (mon.0) 879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosList_vm04-59366-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163180+0000 mon.a (mon.0) 880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163211+0000 mon.a (mon.0) 881 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163256+0000 mon.a (mon.0) 882 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163281+0000 mon.a (mon.0) 883 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosTierPP_vm04-59491-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163320+0000 mon.a (mon.0) 884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshots_vm04-59531-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163374+0000 mon.a (mon.0) 885 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStat_vm04-59578-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163417+0000 mon.a (mon.0) 886 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosStatPP_vm04-59599-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163456+0000 mon.a (mon.0) 887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotify_vm04-59623-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:22.163499+0000 mon.a (mon.0) 888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "CReadOpsTest_vm04-59849-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: cluster 2026-03-10T10:16:22.210688+0000 mon.a (mon.0) 889 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: cluster 2026-03-10T10:16:22.679871+0000 mon.a (mon.0) 890 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: cluster 2026-03-10T10:16:22.679896+0000 mon.a (mon.0) 891 : cluster [WRN] Health check failed: 34 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:23.179486+0000 mon.a (mon.0) 892 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsvm04-60174-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsvm04-60174-1"}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:23.179538+0000 mon.a (mon.0) 893 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritevm04-60121-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritevm04-60121-1"}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:23.179564+0000 mon.a (mon.0) 894 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyECPP_vm04-59675-1", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: cluster 2026-03-10T10:16:23.185305+0000 mon.a (mon.0) 895 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:23.208531+0000 mon.a (mon.0) 896 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:23.260279+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:23.265894+0000 mon.a (mon.0) 897 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:23.267947+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:23.292815+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:23 vm07 bash[23367]: audit 2026-03-10T10:16:23.292982+0000 mon.a (mon.0) 898 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.292815+0000 mon.b [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-co
api_list: [==========] Running 11 tests from 3 test suites.
2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [----------] Global test environment set-up.
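Note: the api_* lines that follow are the workunit's gtest output; qa/workunits/rados/test.sh runs the ceph_test_rados_api_* binaries in parallel and prefixes each output line with a short suite tag (api_list, api_cmd, and so on). As a sketch, a single suite can be replayed by hand on a node that has the Ceph test binaries installed, assuming standard gtest filter syntax:

    # rerun only the LibRadosList tests against the running cluster
    ceph_test_rados_api_list --gtest_filter='LibRadosList.*'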
2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [----------] 7 tests from LibRadosList 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN ] LibRadosList.ListObjects 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ OK ] LibRadosList.ListObjects (596 ms) 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN ] LibRadosList.ListObjectsZeroInName 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ OK ] LibRadosList.ListObjectsZeroInName (53 ms) 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN ] LibRadosList.ListObjectsNS 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: myset foo1,foo2,foo3 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo1 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo2 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo3 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: myset foo1,foo4,foo5 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo4 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo5 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo1 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: myset foo6,foo7 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo7 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo6 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: ns1:foo4 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: ns1:foo5 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: ns2:foo7 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: ns2:foo6 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: ns1:foo1 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: :foo1 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: :foo2 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: :foo3 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ OK ] LibRadosList.ListObjectsNS (130 ms) 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN ] LibRadosList.ListObjectsStart 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 1 0 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 10 0 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 13 0 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 7 0 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 14 0 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 0 0 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 15 0 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 11 0 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 5 0 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 8 0 2026-03-10T10:16:23.805 INFO:tasks.workunit.client.0.vm04.stdout: 
api_list: 6 0 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 3 0 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 4 0 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 12 0 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 9 0 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 2 0 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ OK ] LibRadosList.ListObjectsStart (582 ms) 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN ] LibRadosList.ListObjectsCursor 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: x cursor=MIN 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=1 cursor=13:02547ec2:::1:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=10 cursor=13:52ea6a34:::10:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=13 cursor=13:566253c9:::13:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=7 cursor=13:5c6b0b28:::7:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=14 cursor=13:62a1935d:::14:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=0 cursor=13:6cac518f:::0:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=15 cursor=13:863748b0:::15:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=11 cursor=13:89d3ae78:::11:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=5 cursor=13:b29083e3:::5:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=8 cursor=13:bd63b0f1:::8:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=6 cursor=13:c4fdafeb:::6:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=3 cursor=13:cfc208b3:::3:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=4 cursor=13:d83876eb:::4:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=12 cursor=13:de5d7c5f:::12:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=9 cursor=13:e960b815:::9:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > oid=2 cursor=13:f905c69b:::2:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: FIRST> seek to MIN oid=1 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=1 cursor=13:02547ec2:::1:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:02547ec2:::1:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:02547ec2:::1:head -> 1 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=10 cursor=13:52ea6a34:::10:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:52ea6a34:::10:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:52ea6a34:::10:head -> 10 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=13 cursor=13:566253c9:::13:head 
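The trace above and continuing below is LibRadosList.ListObjectsCursor: it first prints the list-position cursor for every object, then seeks back to saved cursors and checks that the same entry comes back ("cursor()=... expected=...", "entry=... expected="). A minimal sketch of that cursor API, assuming the librados C++ interface, a placeholder pool name, and an arbitrary page size of 4:

    #include <rados/librados.hpp>
    #include <cassert>
    #include <iostream>
    #include <vector>

    int main() {
      librados::Rados cluster;
      assert(cluster.init("admin") == 0);
      assert(cluster.conf_read_file(nullptr) == 0);
      assert(cluster.connect() == 0);

      librados::IoCtx io;
      assert(cluster.ioctx_create("mypool", io) == 0); // placeholder pool name

      // Page through the pool; each returned 'next' cursor can be saved and
      // later passed back as the start point, which is what the repeated
      // "seek to <cursor>" lines in the trace exercise.
      librados::ObjectCursor c = io.object_list_begin();
      const librados::ObjectCursor end = io.object_list_end();
      librados::bufferlist no_filter;
      while (!io.object_list_is_end(c)) {
        std::vector<librados::ObjectItem> page;
        librados::ObjectCursor next;
        int r = io.object_list(c, end, 4 /* page size */, no_filter, &page, &next);
        assert(r >= 0);
        for (const auto &item : page)
          std::cout << item.nspace << ":" << item.oid << "\n";
        c = next; // resuming from a stored cursor re-lists from this exact position
      }
      cluster.shutdown();
      return 0;
    }

A cursor names an exact position in hash order, so re-listing from a stored cursor is repeatable even though objects enumerate out of name order, as the oid sequence 1, 10, 13, 7, 14, 0, ... above shows.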
2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:566253c9:::13:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:566253c9:::13:head -> 13 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=7 cursor=13:5c6b0b28:::7:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:5c6b0b28:::7:head 2026-03-10T10:16:23.806 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:5c6b0b28:::7:head -> 7 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=14 cursor=13:62a1935d:::14:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:62a1935d:::14:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:62a1935d:::14:head -> 14 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=0 cursor=13:6cac518f:::0:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:6cac518f:::0:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:6cac518f:::0:head -> 0 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=15 cursor=13:863748b0:::15:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:863748b0:::15:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:863748b0:::15:head -> 15 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=11 cursor=13:89d3ae78:::11:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:89d3ae78:::11:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:89d3ae78:::11:head -> 11 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=5 cursor=13:b29083e3:::5:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:b29083e3:::5:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:b29083e3:::5:head -> 5 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=8 cursor=13:bd63b0f1:::8:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:bd63b0f1:::8:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:bd63b0f1:::8:head -> 8 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=6 cursor=13:c4fdafeb:::6:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:c4fdafeb:::6:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:c4fdafeb:::6:head -> 6 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=3 cursor=13:cfc208b3:::3:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:cfc208b3:::3:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:cfc208b3:::3:head -> 3 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=4 cursor=13:d83876eb:::4:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:d83876eb:::4:head 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:d83876eb:::4:head -> 4 2026-03-10T10:16:23.854 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=12 
cursor=13:de5d7c5f:::12:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:de5d7c5f:::12:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:de5d7c5f:::12:head -> 12 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=9 cursor=13:e960b815:::9:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:e960b815:::9:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:e960b815:::9:head -> 9 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : oid=2 cursor=13:f905c69b:::2:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:f905c69b:::2:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:f905c69b:::2:head -> 2 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:de5d7c5f:::12:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:de5d7c5f:::12:head expected=13:de5d7c5f:::12:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:de5d7c5f:::12:head -> 12 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=12 expected=12 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:b29083e3:::5:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:b29083e3:::5:head expected=13:b29083e3:::5:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:b29083e3:::5:head -> 5 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=5 expected=5 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:f905c69b:::2:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:f905c69b:::2:head expected=13:f905c69b:::2:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:f905c69b:::2:head -> 2 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=2 expected=2 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:e960b815:::9:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:e960b815:::9:head expected=13:e960b815:::9:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:e960b815:::9:head -> 9 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=9 expected=9 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:89d3ae78:::11:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:89d3ae78:::11:head expected=13:89d3ae78:::11:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:89d3ae78:::11:head -> 11 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=11 expected=11 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:d83876eb:::4:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:d83876eb:::4:head expected=13:d83876eb:::4:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:d83876eb:::4:head -> 4 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: 
api_list: : entry=4 expected=4 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:cfc208b3:::3:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:cfc208b3:::3:head expected=13:cfc208b3:::3:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:cfc208b3:::3:head -> 3 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=3 expected=3 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:c4fdafeb:::6:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:c4fdafeb:::6:head expected=13:c4fdafeb:::6:head 2026-03-10T10:16:23.855 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:c4fdafeb:::6:head -> 6 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=de-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.292982+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.337427+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.358083+0000 mon.b [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.397286+0000 client.admin [INF] onexx 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.397544+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.410740+0000 mon.b [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.449316+0000 mon.b [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.456429+0000 mon.b [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.479290+0000 mon.b [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.479294+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.507564+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.507703+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.507844+0000 mon.b [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.507971+0000 mon.b [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.519365+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.519864+0000 mon.b [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.519991+0000 mon.b [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.520094+0000 mon.b [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.520600+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.520666+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.522530+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.522669+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:23.522742+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.185157+0000 mon.a [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.185191+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.185707+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.185734+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.185758+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.192547+0000 mon.a [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.198698+0000 mon.c [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.199791+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:24.367 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.202947+0000 mon.b [INF] from='client.? 
192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:24.392 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.203130+0000 mon.b [INF] from='client.? 192.168.123.104:0/546846441' enti list_parallel: process_1_[59939]: starting. 2026-03-10T10:16:24.392 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_1_[59939]: creating pool ceph_test_rados_list_parallel.vm04-59913 2026-03-10T10:16:24.392 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_1_[59939]: created object 0... 2026-03-10T10:16:24.392 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_1_[59939]: created object 25... 2026-03-10T10:16:24.392 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_1_[59939]: created object 49... 2026-03-10T10:16:24.392 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_1_[59939]: finishing. 2026-03-10T10:16:24.392 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_1_[59939]: shutting down. 2026-03-10T10:16:24.392 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_2_[59941]: starting. 2026-03-10T10:16:24.392 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_2_[59941]: listing objects. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_2_[59941]: listed object 0... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_2_[59941]: listed object 25... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_2_[59941]: saw 50 objects 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_2_[59941]: shutting down. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_3_[60670]: starting. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_3_[60670]: creating pool ceph_test_rados_list_parallel.vm04-59913 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_3_[60670]: created object 0... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_3_[60670]: created object 25... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_3_[60670]: created object 49... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_3_[60670]: finishing. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_3_[60670]: shutting down. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_4_[60671]: starting. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_4_[60671]: listing objects. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_4_[60671]: listed object 0... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_4_[60671]: listed object 25... 
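ceph_test_rados_list_parallel pairs creators, adders, removers, and listers on one pool. A listing is not a point-in-time snapshot, so a lister racing a writer or remover can legitimately report counts anywhere between the old and new populations: the "saw 46 objects" just below and the "saw 53 objects" later, against batches of 50 objects. The lister role reduced to its core, as a sketch assuming the librados C++ API and a placeholder pool name:

    #include <rados/librados.hpp>
    #include <cassert>
    #include <iostream>

    int main() {
      librados::Rados cluster;
      assert(cluster.init("admin") == 0);
      assert(cluster.conf_read_file(nullptr) == 0);
      assert(cluster.connect() == 0);

      librados::IoCtx io;
      assert(cluster.ioctx_create("mypool", io) == 0); // placeholder pool name

      // One pass over the pool while other processes add and remove objects;
      // the count reflects whatever the enumeration happened to observe.
      size_t seen = 0;
      for (auto it = io.nobjects_begin(); it != io.nobjects_end(); ++it)
        ++seen; // each dereference yields one object's oid/namespace/locator
      std::cout << "saw " << seen << " objects\n";

      cluster.shutdown();
      return 0;
    }
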
2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_4_[60671]: saw 46 objects 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_4_[60671]: shutting down. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_5_[60672]: starting. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_5_[60672]: removed 25 objects... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_5_[60672]: removed half of the objects 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_5_[60672]: removed 50 objects... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_5_[60672]: removed 50 objects 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_5_[60672]: shutting down. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_6_[60990]: starting. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_6_[60990]: creating pool ceph_test_rados_list_parallel.vm04-59913 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_6_[60990]: created object 0... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_6_[60990]: created object 25... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_6_[60990]: created object 49... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_6_[60990]: finishing. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_6_[60990]: shutting down. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_7_[60991]: starting. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_7_[60991]: listing objects. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_7_[60991]: listed object 0... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_7_[60991]: listed object 25... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_7_[60991]: listed object 50... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_7_[60991]: saw 53 objects 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_7_[60991]: shutting down. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_8_[60992]: starting. 
2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_8_[60992]: added 25 objects... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_8_[60992]: added half of the objects 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_8_[60992]: added 50 objects... 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_8_[60992]: added 50 objects 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_8_[60992]: shutting down. 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.393 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_9_[61038]: starting. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_9_[61038]: creating pool ceph_test_rados_list_parallel.vm04-59913 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_9_[61038]: created object 0... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_9_[61038]: created object 25... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_9_[61038]: created object 49... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_9_[61038]: finishing. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_9_[61038]: shutting down. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_10_[61039]: starting. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_10_[61039]: listing objects. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_10_[61039]: listed object 0... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_10_[61039]: listed object 25... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_10_[61039]: listed object 50... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_10_[61039]: listed object 75... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_10_[61039]: saw 98 objects 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_10_[61039]: shutting down. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_13_[61042]: starting. 
2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_13_[61042]: removed 25 objects... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_13_[61042]: removed half of the objects 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_13_[61042]: removed 50 objects... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_13_[61042]: removed 50 objects 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_13_[61042]: shutting down. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_11_[61040]: starting. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_11_[61040]: added 25 objects... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_11_[61040]: added half of the objects 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_11_[61040]: added 50 objects... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_11_[61040]: added 50 objects 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_11_[61040]: shutting down. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_12_[61041]: starting. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_12_[61041]: added 25 objects... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_12_[61041]: added half of the objects 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_12_[61041]: added 50 objects... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_12_[61041]: added 50 objects 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_12_[61041]: shutting down. 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_14_[61178]: starting. 
2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_14_[61178]: creating pool ceph_test_rados_list_parallel.vm04-59913 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_14_[61178]: created object 0... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_14_[61178]: created object 25... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_14_[61178]: created object 49... 2026-03-10T10:16:24.661 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_14_[61178]: finishing. 2026-03-10T10:16:24.662 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_14_[61178]: shutting down. 2026-03-10T10:16:24.662 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.662 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.662 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.662 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:24.662 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_15_[61179]: starting. 2026-03-10T10:16:24.662 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_15_[61179]: listing objects. 2026-03-10T10:16:24.662 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_15_[61179]: listed object 0... 2026-03-10T10:16:24.662 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_15_[61179]: listed object 25... 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.337427+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.337427+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.358083+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.358083+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: cluster 2026-03-10T10:16:23.397286+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: cluster 2026-03-10T10:16:23.397286+0000 client.admin (client.?) 0 : cluster [INF] onexx 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.397544+0000 mon.a (mon.0) 900 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.397544+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.410740+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.410740+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.449316+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.449316+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.456429+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.456429+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.479290+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.479290+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.479294+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.479294+0000 mon.a (mon.0) 901 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.507564+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.507564+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.507703+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.507703+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.507844+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.507844+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.507971+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.507971+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.519365+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.519365+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.519864+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 
192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.519864+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.519991+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.519991+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.520094+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.520094+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.520600+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.520600+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.520666+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.520666+0000 mon.a (mon.0) 906 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.522530+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.522530+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.522669+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.522669+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.522742+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.522742+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.977443+0000 mon.c (mon.2) 44 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:23.977443+0000 mon.c (mon.2) 44 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: cluster 2026-03-10T10:16:23.979608+0000 mgr.y (mgr.24422) 97 : cluster [DBG] pgmap v57: 700 pgs: 4 active, 157 creating+peering, 42 creating+activating, 497 active+clean; 465 KiB data, 243 MiB used, 160 GiB / 160 GiB avail; 1.9 KiB/s rd, 19 KiB/s wr, 39 op/s
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.185157+0000 mon.a (mon.0) 910 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.185191+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.185707+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.185734+0000 mon.a (mon.0) 913 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.185758+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: cluster 2026-03-10T10:16:24.189066+0000 mon.a (mon.0) 915 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.192547+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.198698+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.199791+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.202947+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.203130+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.203285+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.203495+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.212207+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.104:0/2308711789' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.212384+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.104:0/854341866' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.212503+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.104:0/2931547798' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.228616+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.229406+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.235921+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.240894+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.269687+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.293815+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:24 vm04 bash[20742]: audit 2026-03-10T10:16:24.293957+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.337427+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.358083+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: cluster 2026-03-10T10:16:23.397286+0000 client.admin (client.?) 0 : cluster [INF] onexx
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.397544+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.410740+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.449316+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.456429+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.479290+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.479294+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.507564+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.507703+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.507844+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.507971+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.519365+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.519864+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.519991+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.520094+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.520600+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.520666+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.522530+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.522669+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.522742+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:23.977443+0000 mon.c (mon.2) 44 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: cluster 2026-03-10T10:16:23.979608+0000 mgr.y (mgr.24422) 97 : cluster [DBG] pgmap v57: 700 pgs: 4 active, 157 creating+peering, 42 creating+activating, 497 active+clean; 465 KiB data, 243 MiB used, 160 GiB / 160 GiB avail; 1.9 KiB/s rd, 19 KiB/s wr, 39 op/s
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.185157+0000 mon.a (mon.0) 910 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.185191+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.185707+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.185734+0000 mon.a (mon.0) 913 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.185758+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: cluster 2026-03-10T10:16:24.189066+0000 mon.a (mon.0) 915 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.192547+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.198698+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.199791+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.202947+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.203130+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.203285+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.203495+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.212207+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.104:0/2308711789' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.212384+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.104:0/854341866' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.212503+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.104:0/2931547798' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.228616+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.229406+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.235921+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.240894+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.269687+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.293815+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.708 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:24 vm04 bash[28289]: audit 2026-03-10T10:16:24.293957+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.337427+0000 mon.a (mon.0) 899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.358083+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: cluster 2026-03-10T10:16:23.397286+0000 client.admin (client.?) 0 : cluster [INF] onexx
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.397544+0000 mon.a (mon.0) 900 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["onexx"]}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.410740+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.449316+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.456429+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.479290+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.479294+0000 mon.a (mon.0) 901 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.507564+0000 mon.a (mon.0) 902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.507703+0000 mon.a (mon.0) 903 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.507844+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.507971+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.519365+0000 mon.a (mon.0) 904 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.519864+0000 mon.b (mon.1) 50 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.519991+0000 mon.b (mon.1) 51 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.520094+0000 mon.b (mon.1) 52 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.520600+0000 mon.a (mon.0) 905 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.520666+0000 mon.a (mon.0) 906 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.522530+0000 mon.a (mon.0) 907 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.522669+0000 mon.a (mon.0) 908 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.522742+0000 mon.a (mon.0) 909 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:23.977443+0000 mon.c (mon.2) 44 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: cluster 2026-03-10T10:16:23.979608+0000 mgr.y (mgr.24422) 97 : cluster [DBG] pgmap v57: 700 pgs: 4 active, 157 creating+peering, 42 creating+activating, 497 active+clean; 465 KiB data, 243 MiB used, 160 GiB / 160 GiB avail; 1.9 KiB/s rd, 19 KiB/s wr, 39 op/s
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.185157+0000 mon.a (mon.0) 910 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.185191+0000 mon.a (mon.0) 911 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.185707+0000 mon.a (mon.0) 912 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoEC_vm04-59274-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.185734+0000 mon.a (mon.0) 913 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatECPP_vm04-59599-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.185758+0000 mon.a (mon.0) 914 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosStatEC_vm04-59578-7", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: cluster 2026-03-10T10:16:24.189066+0000 mon.a (mon.0) 915 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in
2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.192547+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-2","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.198698+0000 mon.c (mon.2) 45 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]: dispatch
2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.199791+0000 mon.a (mon.0) 917 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.199791+0000 mon.a (mon.0) 917 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.202947+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.202947+0000 mon.b (mon.1) 53 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.203130+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.203130+0000 mon.b (mon.1) 54 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.203285+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.203285+0000 mon.b (mon.1) 55 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.203495+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 
192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.203495+0000 mon.b (mon.1) 56 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.212207+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.104:0/2308711789' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.212207+0000 mon.b (mon.1) 57 : audit [INF] from='client.? 192.168.123.104:0/2308711789' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.212384+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.104:0/854341866' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.212384+0000 mon.b (mon.1) 58 : audit [INF] from='client.? 192.168.123.104:0/854341866' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.212503+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.104:0/2931547798' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.212503+0000 mon.b (mon.1) 59 : audit [INF] from='client.? 192.168.123.104:0/2931547798' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.228616+0000 mon.a (mon.0) 918 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.228616+0000 mon.a (mon.0) 918 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.229406+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.229406+0000 mon.a (mon.0) 919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.235921+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.235921+0000 mon.a (mon.0) 920 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.240894+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.240894+0000 mon.a (mon.0) 921 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.269687+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.269687+0000 mon.a (mon.0) 922 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.293815+0000 mon.a (mon.0) 923 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.293815+0000 mon.a (mon.0) 923 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.293957+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:24.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:24 vm07 bash[23367]: audit 2026-03-10T10:16:24.293957+0000 mon.a (mon.0) 924 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:25.414 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_15_[61179]: listed object 50... 2026-03-10T10:16:25.414 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_15_[61179]: listed object 75... 2026-03-10T10:16:25.414 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_15_[61179]: listed object 100... 2026-03-10T10:16:25.414 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_15_[61179]: listed object 125... 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_15_[61179]: saw 150 objects 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_15_[61179]: shutting down. 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_16_[61180]: starting. 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_16_[61180]: added 25 objects... 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_16_[61180]: added half of the objects 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_16_[61180]: added 50 objects... 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_16_[61180]: added 50 objects 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: process_16_[61180]: shutting down. 
2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******************************* 2026-03-10T10:16:25.415 INFO:tasks.workunit.client.0.vm04.stdout: list_parallel: ******* SUCCESS ********** 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:24.359117+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:24.359117+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:24.368515+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:24.368515+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: cluster 2026-03-10T10:16:24.372994+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: cluster 2026-03-10T10:16:24.372994+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:24.373198+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:24.373198+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:24.978125+0000 mon.c (mon.2) 46 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:24.978125+0000 mon.c (mon.2) 46 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.342822+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 
192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.342822+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.342954+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.342954+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.343057+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.343057+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.343174+0000 mon.a (mon.0) 930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.343174+0000 mon.a (mon.0) 930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.343276+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.343276+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.366366+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 
192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.366366+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.410348+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.410348+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.411110+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.411110+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: cluster 2026-03-10T10:16:25.423482+0000 mon.a (mon.0) 932 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T10:16:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: cluster 2026-03-10T10:16:25.423482+0000 mon.a (mon.0) 932 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T10:16:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.425679+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.425679+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.425893+0000 mon.a (mon.0) 934 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:25 vm07 bash[23367]: audit 2026-03-10T10:16:25.425893+0000 mon.a (mon.0) 934 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:24.359117+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:24.359117+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:24.368515+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:24.368515+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: cluster 2026-03-10T10:16:24.372994+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: cluster 2026-03-10T10:16:24.372994+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:24.373198+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:24.373198+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:24.978125+0000 mon.c (mon.2) 46 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:24.978125+0000 mon.c (mon.2) 46 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.342822+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.342822+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 
192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.342954+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.342954+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.343057+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.343057+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.343174+0000 mon.a (mon.0) 930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:24.359117+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T10:16:25.970 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:24.359117+0000 mon.a (mon.0) 925 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["onexx"]}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:24.368515+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:24.368515+0000 mon.b (mon.1) 60 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: cluster 2026-03-10T10:16:24.372994+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: cluster 2026-03-10T10:16:24.372994+0000 client.admin (client.?) 0 : cluster [INF] twoxx 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:24.373198+0000 mon.a (mon.0) 926 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:24.373198+0000 mon.a (mon.0) 926 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["twoxx"]}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:24.978125+0000 mon.c (mon.2) 46 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:24.978125+0000 mon.c (mon.2) 46 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.342822+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.342822+0000 mon.a (mon.0) 927 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.342954+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.342954+0000 mon.a (mon.0) 928 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.343057+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.343057+0000 mon.a (mon.0) 929 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.343174+0000 mon.a (mon.0) 930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.343174+0000 mon.a (mon.0) 930 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.343276+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.343276+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.366366+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.366366+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.410348+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.410348+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.411110+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.411110+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: cluster 2026-03-10T10:16:25.423482+0000 mon.a (mon.0) 932 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: cluster 2026-03-10T10:16:25.423482+0000 mon.a (mon.0) 932 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.425679+0000 mon.a (mon.0) 933 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.425679+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.425893+0000 mon.a (mon.0) 934 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:25 vm04 bash[28289]: audit 2026-03-10T10:16:25.425893+0000 mon.a (mon.0) 934 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.343174+0000 mon.a (mon.0) 930 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.343276+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.343276+0000 mon.a (mon.0) 931 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.366366+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.366366+0000 mon.c (mon.2) 47 : audit [INF] from='client.? 192.168.123.104:0/3988453270' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.410348+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.410348+0000 mon.b (mon.1) 61 : audit [INF] from='client.? 
192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.411110+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.411110+0000 mon.b (mon.1) 62 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: cluster 2026-03-10T10:16:25.423482+0000 mon.a (mon.0) 932 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: cluster 2026-03-10T10:16:25.423482+0000 mon.a (mon.0) 932 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.425679+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.425679+0000 mon.a (mon.0) 933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.425893+0000 mon.a (mon.0) 934 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:25.971 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:25 vm04 bash[20742]: audit 2026-03-10T10:16:25.425893+0000 mon.a (mon.0) 934 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:26.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.441548+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.441548+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.441671+0000 mon.a (mon.0) 936 : audit [INF] from='client.? 
192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.451563+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.452026+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.469665+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.497848+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.498911+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.499884+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.508158+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.536224+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.540312+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.546380+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.547155+0000 mon.a (mon.0) 941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.559337+0000 mon.a (mon.0) 942 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.563240+0000 mon.a (mon.0) 943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.734846+0000 mon.c (mon.2) 51 : audit [DBG] from='client.? 192.168.123.104:0/1289710247' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:25.978726+0000 mon.c (mon.2) 52 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: cluster 2026-03-10T10:16:25.980196+0000 mgr.y (mgr.24422) 98 : cluster [DBG] pgmap v60: 868 pgs: 416 unknown, 4 active, 37 creating+peering, 42 creating+activating, 369 active+clean; 459 KiB data, 243 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 14 KiB/s wr, 40 op/s
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.386227+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.386266+0000 mon.a (mon.0) 945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]': finished
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.386290+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]': finished
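
The audit cmd=[...] payloads in these entries are monitor commands in their JSON wire form, as submitted by the librados API test clients. For reference, a minimal sketch of issuing the same kind of command through the rados Python binding; the profile name and connection details are illustrative, not taken from this run:

    import json
    import rados

    # Connect as client.admin, like the test clients in the audit entries.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='admin')
    cluster.connect()

    # Same JSON shape as the "osd erasure-code-profile set" dispatches above:
    # k=2, m=1, failure domain osd; the profile name here is made up.
    cmd = json.dumps({
        "prefix": "osd erasure-code-profile set",
        "name": "testprofile-example",
        "profile": ["k=2", "m=1", "crush-failure-domain=osd"],
    })
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    assert ret == 0, outs
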
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.386308+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]': finished
2026-03-10T10:16:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.441548+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.441671+0000 mon.a (mon.0) 936 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.451563+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.452026+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.469665+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.497848+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.498911+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.499884+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.508158+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.536224+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.540312+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.546380+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.547155+0000 mon.a (mon.0) 941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.559337+0000 mon.a (mon.0) 942 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.563240+0000 mon.a (mon.0) 943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.734846+0000 mon.c (mon.2) 51 : audit [DBG] from='client.? 192.168.123.104:0/1289710247' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:25.978726+0000 mon.c (mon.2) 52 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: cluster 2026-03-10T10:16:25.980196+0000 mgr.y (mgr.24422) 98 : cluster [DBG] pgmap v60: 868 pgs: 416 unknown, 4 active, 37 creating+peering, 42 creating+activating, 369 active+clean; 459 KiB data, 243 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 14 KiB/s wr, 40 op/s
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.386227+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished
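
The same command is often audited by more than one monitor (for example, the LibRadosLockEC profile set appears as mon.c 50 and again as mon.a 942): a command that arrives at a peon is typically dispatched there and then forwarded to the leader, which logs it again. The rados binding can also direct a command at a specific monitor; a sketch, reusing the cluster handle from the sketch above:

    import json

    # "cluster" is the connected rados.Rados handle from the earlier sketch.
    # Send a status query straight to mon.c, mirroring the [DBG] dispatches.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json"}), b'', target='c')
    assert ret == 0, outs
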
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.386266+0000 mon.a (mon.0) 945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]': finished
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.386290+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]': finished
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.386308+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]': finished
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.386325+0000 mon.a (mon.0) 948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]': finished
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.386350+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.386372+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]': finished
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.386388+0000 mon.a (mon.0) 951 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.386406+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.386430+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
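
Mutating commands show up twice per monitor in the audit channel: ':dispatch' when received and ':finished' once the resulting map update commits, which is why entries 944-953 report finished for pool creates and profile sets dispatched earlier. A sketch of the corresponding create sequence (illustrative names; the profile must exist before the erasure-coded pool that references it), again using the cluster handle from the first sketch:

    import json

    # Order matters: the profile first, then the pool that references it.
    for payload in (
        {"prefix": "osd erasure-code-profile set",
         "name": "testprofile-example",
         "profile": ["k=2", "m=1", "crush-failure-domain=osd"]},
        {"prefix": "osd pool create", "pool": "ec-example",
         "pool_type": "erasure", "pg_num": 8, "pgp_num": 8,
         "erasure_code_profile": "testprofile-example"},
    ):
        ret, _, outs = cluster.mon_command(json.dumps(payload), b'')
        assert ret == 0, outs
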
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: cluster 2026-03-10T10:16:26.391078+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.395683+0000 mon.a (mon.0) 955 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:26.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.396409+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.396640+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.398176+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.398553+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.406176+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.415459+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.443467+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.456487+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:26 vm04 bash[28289]: audit 2026-03-10T10:16:26.457270+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.386325+0000 mon.a (mon.0) 948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]': finished
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.386350+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.386372+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]': finished
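
The cluster-channel lines (osdmap e71: 8 total, 8 up, 8 in; pgmap v60 with its per-state pg counts) carry the same data a JSON status query returns in structured form. A sketch of reading those counters back over the same connection; the exact field names are an assumption based on recent Ceph releases:

    import json

    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json"}), b'')
    assert ret == 0, outs
    status = json.loads(outbuf)
    # Counterparts of "osdmap e71: 8 total, 8 up, 8 in" and "pgmap v60: 868 pgs".
    print(status["osdmap"]["num_osds"], status["osdmap"]["num_up_osds"])
    print(status["pgmap"]["num_pgs"], status["pgmap"].get("pgs_by_state"))
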
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.386388+0000 mon.a (mon.0) 951 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.386406+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.386430+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: cluster 2026-03-10T10:16:26.391078+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.395683+0000 mon.a (mon.0) 955 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.396409+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.396640+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.398176+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.398553+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.406176+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.415459+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:26.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.443467+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:26.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.456487+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch
2026-03-10T10:16:26.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:26 vm04 bash[20742]: audit 2026-03-10T10:16:26.457270+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]: dispatch
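
Entries 955 and 956 come from the cache-tier test: the -cache pool is tagged with the rados application and then attached to its base pool with osd tier add. The equivalent calls, sketched with illustrative pool names and the cluster handle from the first sketch:

    import json

    # Tag the cache pool, then attach it to its base pool as a tier.
    for payload in (
        {"prefix": "osd pool application enable", "pool": "base-example-cache",
         "app": "rados", "yes_i_really_mean_it": True},
        {"prefix": "osd tier add", "pool": "base-example",
         "tierpool": "base-example-cache"},
    ):
        ret, _, outs = cluster.mon_command(json.dumps(payload), b'')
        assert ret == 0, outs
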
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.441548+0000 mon.a (mon.0) 935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.441671+0000 mon.a (mon.0) 936 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.451563+0000 mon.c (mon.2) 48 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.452026+0000 mon.b (mon.1) 63 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.469665+0000 mon.a (mon.0) 937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.497848+0000 mon.a (mon.0) 938 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.498911+0000 mon.b (mon.1) 64 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.499884+0000 mon.a (mon.0) 939 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.508158+0000 mon.c (mon.2) 49 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.536224+0000 mon.a (mon.0) 940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.540312+0000 mon.c (mon.2) 50 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.546380+0000 mon.b (mon.1) 65 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.547155+0000 mon.a (mon.0) 941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.559337+0000 mon.a (mon.0) 942 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.563240+0000 mon.a (mon.0) 943 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.734846+0000 mon.c (mon.2) 51 : audit [DBG] from='client.? 192.168.123.104:0/1289710247' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:25.978726+0000 mon.c (mon.2) 52 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: cluster 2026-03-10T10:16:25.980196+0000 mgr.y (mgr.24422) 98 : cluster [DBG] pgmap v60: 868 pgs: 416 unknown, 4 active, 37 creating+peering, 42 creating+activating, 369 active+clean; 459 KiB data, 243 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 14 KiB/s wr, 40 op/s
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386227+0000 mon.a (mon.0) 944 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386266+0000 mon.a (mon.0) 945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]': finished
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386290+0000 mon.a (mon.0) 946 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]': finished
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386308+0000 mon.a (mon.0) 947 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]': finished
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386325+0000 mon.a (mon.0) 948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]': finished
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386350+0000 mon.a (mon.0) 949 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386372+0000 mon.a (mon.0) 950 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]': finished
2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386388+0000 mon.a (mon.0) 951 : audit [INF] from='client.?
192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386388+0000 mon.a (mon.0) 951 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386406+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386406+0000 mon.a (mon.0) 952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386430+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.386430+0000 mon.a (mon.0) 953 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: cluster 2026-03-10T10:16:26.391078+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: cluster 2026-03-10T10:16:26.391078+0000 mon.a (mon.0) 954 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.395683+0000 mon.a (mon.0) 955 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.395683+0000 mon.a (mon.0) 955 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.396409+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 
192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.396409+0000 mon.a (mon.0) 956 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.396640+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.396640+0000 mon.b (mon.1) 66 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.398176+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.398176+0000 mon.b (mon.1) 67 : audit [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.398553+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.398553+0000 mon.b (mon.1) 68 : audit [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.406176+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.406176+0000 mon.c (mon.2) 53 : audit [INF] from='client.? 
192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.415459+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.415459+0000 mon.a (mon.0) 957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.443467+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.443467+0000 mon.a (mon.0) 958 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.456487+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.456487+0000 mon.a (mon.0) 959 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.457270+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:26 vm07 bash[23367]: audit 2026-03-10T10:16:26.457270+0000 mon.a (mon.0) 960 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout:ty='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.203285+0000 mon.b [INF] from='client.? 
192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.203495+0000 mon.b [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.212207+0000 mon.b [INF] from='client.? 192.168.123.104:0/2308711789' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.212384+0000 mon.b [INF] from='client.? 192.168.123.104:0/854341866' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.212503+0000 mon.b [INF] from='client.? 192.168.123.104:0/2931547798' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.228616+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.229406+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.235921+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.240894+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.269687+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.293815+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWrite_vm04-59252-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:24.293957+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-2","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.441548+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.441671+0000 mon.a [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.451563+0000 mon.c [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.452026+0000 mon.b [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.469665+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["twoxx"]}]': finished 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.497848+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.498911+0000 mon.b [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.499884+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.508158+0000 mon.c [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.536224+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.540312+0000 mon.c [INF] from='client.? 
192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.546380+0000 mon.b [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.547155+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.559337+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:25.563240+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.386227+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.386266+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoEC_vm04-59274-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.386290+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatECPP_vm04-59599-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatECPP_vm04-59599-7"}]': finished 2026-03-10T10:16:27.492 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.386308+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosStatEC_vm04-59578-7", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosStatEC_vm04-59578-7"}]': finished 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.386325+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritevm04-60121-1"}]': finished 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.386350+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.386372+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.386388+0000 mon.a [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-4-cache","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.386406+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockEC_vm04-59364-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.386430+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosLockECPP_vm04-59409-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.395683+0000 mon.a [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.396409+0000 mon.a [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]: dispatch 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.396640+0000 mon.b [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.398176+0000 mon.b [INF] from='client.? 192.168.123.104:0/2194402493' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.398553+0000 mon.b [INF] from='client.? 192.168.123.104:0/409720393' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.406176+0000 mon.c [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:27.493 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.415459+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: api open_pools_parallel: process_1_[59940]: starting. 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_1_[59940]: creating pool ceph_test_rados_open_pools_parallel.vm04-59925 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_1_[59940]: created object 0... 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_1_[59940]: created object 25... 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_1_[59940]: created object 49... 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_1_[59940]: finishing. 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_1_[59940]: shutting down. 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_2_[59944]: starting. 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_2_[59944]: rados_pool_create. 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_2_[59944]: rados_ioctx_create. 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_2_[59944]: shutting down. 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: ******************************* 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_3_[60762]: starting. 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_3_[60762]: creating pool ceph_test_rados_open_pools_parallel.vm04-59925 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_3_[60762]: created object 0... 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_3_[60762]: created object 25... 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_3_[60762]: created object 49... 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_3_[60762]: finishing. 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_3_[60762]: shutting down. 2026-03-10T10:16:27.530 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: ******************************* 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_4_[60763]: starting. 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_4_[60763]: rados_pool_create. 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_4_[60763]: rados_ioctx_create. 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: process_4_[60763]: shutting down. 
2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: ******************************* 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: ******************************* 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: open_pools_parallel: ******* SUCCESS ********** 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: Running main() from gmock_main.cc 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [==========] Running 3 tests from 1 test suite. 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [----------] Global test environment set-up. 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [----------] 3 tests from NeoRadosCmd 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [ RUN ] NeoRadosCmd.MonDescribe 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [ OK ] NeoRadosCmd.MonDescribe (1679 ms) 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [ RUN ] NeoRadosCmd.OSDCmd 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [ OK ] NeoRadosCmd.OSDCmd (1914 ms) 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [ RUN ] NeoRadosCmd.PGCmd 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [ OK ] NeoRadosCmd.PGCmd (3305 ms) 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [----------] 3 tests from NeoRadosCmd (6898 ms total) 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [----------] Global test environment tear-down 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [==========] 3 tests from 1 test suite ran. (6898 ms total) 2026-03-10T10:16:27.531 INFO:tasks.workunit.client.0.vm04.stdout: cmd: [ PASSED ] 3 tests. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_1_[59976]: starting. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_1_[59976]: creating pool ceph_test_rados_delete_pools_parallel.vm04-59956 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_1_[59976]: created object 0... 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_1_[59976]: created object 25... 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_1_[59976]: created object 49... 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_1_[59976]: finishing. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_1_[59976]: shutting down. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_2_[59979]: starting. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_2_[59979]: deleting pool ceph_test_rados_delete_pools_parallel.vm04-59956 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_2_[59979]: shutting down. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: ******************************* 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_3_[60745]: starting. 
2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_3_[60745]: creating pool ceph_test_rados_delete_pools_parallel.vm04-59956 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_3_[60745]: created object 0... 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_3_[60745]: created object 25... 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_3_[60745]: created object 49... 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_3_[60745]: finishing. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_3_[60745]: shutting down. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: ******************************* 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_5_[60747]: starting. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_5_[60747]: listing objects. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_5_[60747]: listed object 0... 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_5_[60747]: listed object 25... 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_5_[60747]: saw 50 objects 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_5_[60747]: shutting down. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: ******************************* 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_4_[60746]: starting. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_4_[60746]: deleting pool ceph_test_rados_delete_pools_parallel.vm04-59956 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: process_4_[60746]: shutting down. 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: ******************************* 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: ******************************* 2026-03-10T10:16:27.534 INFO:tasks.workunit.client.0.vm04.stdout: delete_pools_parallel: ******* SUCCESS ********** 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout:_cmd: got: 2026-03-10T10:16:26.443467+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.456487+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]: dispatch 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.457270+0000 mon.a [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]: dispatch 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.472452+0000 mon.b [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:26.496890+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.462684+0000 mon.a [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.462725+0000 mon.a [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]': finished 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.463110+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.463161+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.463197+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.486121+0000 mon.b [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.489115+0000 mon.b [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.491682+0000 client.admin [INF] threexx 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.505174+0000 mon.b [INF] from='client.? 192.168.123.104:0/361861385' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.505261+0000 mon.b [INF] from='client.? 
192.168.123.104:0/276235663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.615 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.507103+0000 mon.b [INF] from='client.? 192.168.123.104:0/4026762506' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.526479+0000 mon.a [INF] from='client.? ' entity='client.admin' c api_io_pp: Running main() from gmock_main.cc 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [==========] Running 39 tests from 2 test suites. 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [----------] Global test environment set-up. 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: seed 59290 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TooBigPP 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.TooBigPP (0 ms) 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SimpleWritePP 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.SimpleWritePP (626 ms) 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadOpPP 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadOpPP (27 ms) 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.SparseReadOpPP 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.SparseReadOpPP (31 ms) 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP (4 ms) 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RoundTripPP2 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.RoundTripPP2 (5 ms) 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.Checksum 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.Checksum (12 ms) 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.ReadIntoBufferlist 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.ReadIntoBufferlist (13 ms) 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.OverlappingWriteRoundTripPP 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.OverlappingWriteRoundTripPP (42 ms) 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP 2026-03-10T10:16:27.934 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP (11 ms) 2026-03-10T10:16:27.935 
INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.WriteFullRoundTripPP2 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.WriteFullRoundTripPP2 (3 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.AppendRoundTripPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.AppendRoundTripPP (11 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.TruncTestPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.TruncTestPP (5 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RemoveTestPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.RemoveTestPP (4 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.XattrsRoundTripPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrsRoundTripPP (6 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.RmXattrPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.RmXattrPP (29 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.XattrListPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.XattrListPP (7 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CrcZeroWrite 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.CrcZeroWrite (9 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtPP (4 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtDNEPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtDNEPP (3 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoPP.CmpExtMismatchPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoPP.CmpExtMismatchPP (5 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [----------] 21 tests from LibRadosIoPP (857 ms total) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SimpleWritePP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SimpleWritePP (1352 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.ReadOpPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.ReadOpPP (37 ms) 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.SparseReadOpPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.SparseReadOpPP (14 ms) 
2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RoundTripPP 2026-03-10T10:16:27.935 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP (110 ms) 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:26.472452+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:26.472452+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:26.496890+0000 mon.a (mon.0) 961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:26.496890+0000 mon.a (mon.0) 961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:26.979500+0000 mon.c (mon.2) 54 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:26.979500+0000 mon.c (mon.2) 54 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.462684+0000 mon.a (mon.0) 962 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.462684+0000 mon.a (mon.0) 962 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.462725+0000 mon.a (mon.0) 963 : audit [INF] from='client.? 
192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]': finished 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.462725+0000 mon.a (mon.0) 963 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]': finished 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.463110+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.463110+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.463161+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.463161+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:26.472452+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:26.472452+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:26.496890+0000 mon.a (mon.0) 961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:26.496890+0000 mon.a (mon.0) 961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:26.979500+0000 mon.c (mon.2) 54 : audit [DBG] from='client.? 
2026-03-10T10:16:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.462684+0000 mon.a (mon.0) 962 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.462725+0000 mon.a (mon.0) 963 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]': finished
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.463110+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.463161+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]': finished
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.463197+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.486121+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.489115+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: cluster 2026-03-10T10:16:27.491682+0000 client.admin (client.?) 0 : cluster [INF] threexx
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: cluster 2026-03-10T10:16:27.499183+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.505174+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.104:0/361861385' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.505261+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.104:0/276235663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.507103+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.104:0/4026762506' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.526479+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.527263+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-4", "overlaypool": "test-rados-api-vm04-59491-4-cache"}]: dispatch
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.527351+0000 mon.a (mon.0) 970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.527537+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.527696+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.527834+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.542558+0000 mon.a (mon.0) 974 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.542558+0000 mon.a (mon.0) 974 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.545570+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:27.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.545570+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.548676+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm04-59259-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.548676+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm04-59259-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.552231+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.552231+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.576721+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:27 vm04 bash[28289]: audit 2026-03-10T10:16:27.576721+0000 mon.a (mon.0) 977 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.463197+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.463197+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.486121+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.486121+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.489115+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.489115+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: cluster 2026-03-10T10:16:27.491682+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: cluster 2026-03-10T10:16:27.491682+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: cluster 2026-03-10T10:16:27.499183+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: cluster 2026-03-10T10:16:27.499183+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.505174+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 
192.168.123.104:0/361861385' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.505174+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.104:0/361861385' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.505261+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.104:0/276235663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.505261+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.104:0/276235663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.507103+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.104:0/4026762506' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.507103+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.104:0/4026762506' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.526479+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.526479+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.527263+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-4", "overlaypool": "test-rados-api-vm04-59491-4-cache"}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.527263+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 
192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-4", "overlaypool": "test-rados-api-vm04-59491-4-cache"}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.527351+0000 mon.a (mon.0) 970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.527351+0000 mon.a (mon.0) 970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.527537+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.527537+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.527696+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.527696+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.527834+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.527834+0000 mon.a (mon.0) 973 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.542558+0000 mon.a (mon.0) 974 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.542558+0000 mon.a (mon.0) 974 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.545570+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.545570+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.548676+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm04-59259-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.548676+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm04-59259-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.552231+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.552231+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.576721+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:27.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:27 vm04 bash[20742]: audit 2026-03-10T10:16:27.576721+0000 mon.a (mon.0) 977 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:26.472452+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:26.472452+0000 mon.b (mon.1) 69 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:26.496890+0000 mon.a (mon.0) 961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:26.496890+0000 mon.a (mon.0) 961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:26.979500+0000 mon.c (mon.2) 54 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:26.979500+0000 mon.c (mon.2) 54 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.462684+0000 mon.a (mon.0) 962 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.462684+0000 mon.a (mon.0) 962 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolQuotaPP_vm04-59259-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.462725+0000 mon.a (mon.0) 963 : audit [INF] from='client.? 
192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]': finished 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.462725+0000 mon.a (mon.0) 963 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]': finished 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.463110+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.463110+0000 mon.a (mon.0) 964 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyECPP_vm04-59675-1"}]': finished 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.463161+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.463161+0000 mon.a (mon.0) 965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsvm04-60174-1"}]': finished 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.463197+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.463197+0000 mon.a (mon.0) 966 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReadOpvm04-60121-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.486121+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.486121+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.489115+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 
192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.489115+0000 mon.b (mon.1) 71 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: cluster 2026-03-10T10:16:27.491682+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: cluster 2026-03-10T10:16:27.491682+0000 client.admin (client.?) 0 : cluster [INF] threexx 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: cluster 2026-03-10T10:16:27.499183+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: cluster 2026-03-10T10:16:27.499183+0000 mon.a (mon.0) 967 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.505174+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.104:0/361861385' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.505174+0000 mon.b (mon.1) 72 : audit [INF] from='client.? 192.168.123.104:0/361861385' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.505261+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.104:0/276235663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.505261+0000 mon.b (mon.1) 73 : audit [INF] from='client.? 192.168.123.104:0/276235663' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.507103+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 192.168.123.104:0/4026762506' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.507103+0000 mon.b (mon.1) 74 : audit [INF] from='client.? 
192.168.123.104:0/4026762506' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.526479+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.526479+0000 mon.a (mon.0) 968 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.527263+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-4", "overlaypool": "test-rados-api-vm04-59491-4-cache"}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.527263+0000 mon.a (mon.0) 969 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-4", "overlaypool": "test-rados-api-vm04-59491-4-cache"}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.527351+0000 mon.a (mon.0) 970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.527351+0000 mon.a (mon.0) 970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.527537+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.527537+0000 mon.a (mon.0) 971 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.527696+0000 mon.a (mon.0) 972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.527696+0000 mon.a (mon.0) 972 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.527834+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.527834+0000 mon.a (mon.0) 973 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.542558+0000 mon.a (mon.0) 974 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.542558+0000 mon.a (mon.0) 974 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.545570+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.545570+0000 mon.a (mon.0) 975 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.548676+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm04-59259-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.548676+0000 mon.a (mon.0) 976 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm04-59259-3", "field": "max_bytes", "val": "4096"}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.552231+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.552231+0000 mon.b (mon.1) 75 : audit [INF] from='client.? 
192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.576721+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:27 vm07 bash[23367]: audit 2026-03-10T10:16:27.576721+0000 mon.a (mon.0) 977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:16:28.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:16:28 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:27.610852+0000 mon.a (mon.0) 978 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:27.610852+0000 mon.a (mon.0) 978 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished 2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:27.617737+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:27.617737+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: cluster 2026-03-10T10:16:27.626937+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: cluster 2026-03-10T10:16:27.626937+0000 client.admin (client.?) 0 : cluster [INF] fourxx 2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:27.629289+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:27.629289+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch 2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:27.750784+0000 mon.b (mon.1) 77 : audit [DBG] from='client.? 192.168.123.104:0/1342698839' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch 2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:27.750784+0000 mon.b (mon.1) 77 : audit [DBG] from='client.? 
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: cluster 2026-03-10T10:16:27.980846+0000 mgr.y (mgr.24422) 99 : cluster [DBG] pgmap v63: 900 pgs: 448 unknown, 4 active, 37 creating+peering, 42 creating+activating, 369 active+clean; 459 KiB data, 243 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:27.981113+0000 mon.c (mon.2) 55 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.173896+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm04-59290-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.179886+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm04-59290-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.184663+0000 mgr.y (mgr.24422) 100 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.332291+0000 mon.c (mon.2) 56 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.333557+0000 mon.c (mon.2) 57 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.334547+0000 mon.c (mon.2) 58 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.335674+0000 mon.c (mon.2) 59 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.336844+0000 mon.c (mon.2) 60 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.337710+0000 mon.c (mon.2) 61 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.339187+0000 mon.c (mon.2) 62 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.341042+0000 mon.c (mon.2) 63 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:27.610852+0000 mon.a (mon.0) 978 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:27.617737+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch
2026-03-10T10:16:28.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: cluster 2026-03-10T10:16:27.626937+0000 client.admin (client.?) 0 : cluster [INF] fourxx
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:27.629289+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:27.750784+0000 mon.b (mon.1) 77 : audit [DBG] from='client.? 192.168.123.104:0/1342698839' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: cluster 2026-03-10T10:16:27.980846+0000 mgr.y (mgr.24422) 99 : cluster [DBG] pgmap v63: 900 pgs: 448 unknown, 4 active, 37 creating+peering, 42 creating+activating, 369 active+clean; 459 KiB data, 243 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:27.981113+0000 mon.c (mon.2) 55 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.173896+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm04-59290-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.179886+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm04-59290-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.184663+0000 mgr.y (mgr.24422) 100 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.332291+0000 mon.c (mon.2) 56 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.333557+0000 mon.c (mon.2) 57 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.334547+0000 mon.c (mon.2) 58 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.335674+0000 mon.c (mon.2) 59 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.336844+0000 mon.c (mon.2) 60 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.337710+0000 mon.c (mon.2) 61 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.339187+0000 mon.c (mon.2) 62 : audit [INF] from='client.? 
192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.339187+0000 mon.c (mon.2) 62 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.341042+0000 mon.c (mon.2) 63 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.341042+0000 mon.c (mon.2) 63 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.341584+0000 mon.c (mon.2) 64 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.341584+0000 mon.c (mon.2) 64 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.342719+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.342719+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468360+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468360+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468393+0000 mon.a (mon.0) 982 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468393+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468411+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-4", "overlaypool": "test-rados-api-vm04-59491-4-cache"}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468411+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-4", "overlaypool": "test-rados-api-vm04-59491-4-cache"}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468427+0000 mon.a (mon.0) 984 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468427+0000 mon.a (mon.0) 984 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468444+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468444+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468465+0000 mon.a (mon.0) 986 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468465+0000 mon.a (mon.0) 986 : audit [INF] from='client.? 
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468483+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm04-59259-3", "field": "max_bytes", "val": "4096"}]': finished
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468502+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.468520+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm04-59290-23", "var": "allow_ec_overwrites", "val": "true"}]': finished
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: cluster 2026-03-10T10:16:28.490695+0000 mon.a (mon.0) 990 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.494173+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.498549+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:28.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.498670+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.498803+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.506110+0000 mon.a (mon.0) 991 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-4-cache", "mode": "writeback"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.510309+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.516207+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.516482+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.516682+0000 mon.a (mon.0) 995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:28 vm04 bash[28289]: audit 2026-03-10T10:16:28.516795+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.341584+0000 mon.c (mon.2) 64 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.342719+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.468360+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]': finished
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.468393+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]': finished
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.468411+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-4", "overlaypool": "test-rados-api-vm04-59491-4-cache"}]': finished
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.468427+0000 mon.a (mon.0) 984 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.468444+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.468465+0000 mon.a (mon.0) 986 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.468483+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm04-59259-3", "field": "max_bytes", "val": "4096"}]': finished
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.468502+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.468520+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm04-59290-23", "var": "allow_ec_overwrites", "val": "true"}]': finished
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: cluster 2026-03-10T10:16:28.490695+0000 mon.a (mon.0) 990 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.494173+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.498549+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.498670+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.498803+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.506110+0000 mon.a (mon.0) 991 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-4-cache", "mode": "writeback"}]: dispatch
2026-03-10T10:16:28.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.510309+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:28.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.516207+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:28.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.516482+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:28.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.516682+0000 mon.a (mon.0) 995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:28.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:28 vm04 bash[20742]: audit 2026-03-10T10:16:28.516795+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:27.610852+0000 mon.a (mon.0) 978 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["threexx"]}]': finished
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:27.617737+0000 mon.b (mon.1) 76 : audit [INF] from='client.? 192.168.123.104:0/4117200580' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: cluster 2026-03-10T10:16:27.626937+0000 client.admin (client.?) 0 : cluster [INF] fourxx
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:27.629289+0000 mon.a (mon.0) 979 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["fourxx"]}]: dispatch
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:27.750784+0000 mon.b (mon.1) 77 : audit [DBG] from='client.? 192.168.123.104:0/1342698839' entity='client.admin' cmd=[{"prefix":"status"}]: dispatch
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: cluster 2026-03-10T10:16:27.980846+0000 mgr.y (mgr.24422) 99 : cluster [DBG] pgmap v63: 900 pgs: 448 unknown, 4 active, 37 creating+peering, 42 creating+activating, 369 active+clean; 459 KiB data, 243 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:27.981113+0000 mon.c (mon.2) 55 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.173896+0000 mon.b (mon.1) 78 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm04-59290-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.179886+0000 mon.a (mon.0) 980 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm04-59290-23", "var": "allow_ec_overwrites", "val": "true"}]: dispatch
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.184663+0000 mgr.y (mgr.24422) 100 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.332291+0000 mon.c (mon.2) 56 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:16:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.333557+0000 mon.c (mon.2) 57 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.334547+0000 mon.c (mon.2) 58 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.335674+0000 mon.c (mon.2) 59 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.336844+0000 mon.c (mon.2) 60 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.337710+0000 mon.c (mon.2) 61 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.339187+0000 mon.c (mon.2) 62 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.341042+0000 mon.c (mon.2) 63 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.341584+0000 mon.c (mon.2) 64 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.342719+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.468360+0000 mon.a (mon.0) 981 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockEC_vm04-59364-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockEC_vm04-59364-10"}]': finished
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.468393+0000 mon.a (mon.0) 982 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosLockECPP_vm04-59409-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosLockECPP_vm04-59409-10"}]': finished
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.468411+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-4", "overlaypool": "test-rados-api-vm04-59491-4-cache"}]': finished
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.468427+0000 mon.a (mon.0) 984 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.468444+0000 mon.a (mon.0) 985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.468465+0000 mon.a (mon.0) 986 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.468483+0000 mon.a (mon.0) 987 : audit [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm04-59259-3", "field": "max_bytes", "val": "4096"}]': finished
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.468502+0000 mon.a (mon.0) 988 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.468520+0000 mon.a (mon.0) 989 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "LibRadosIoECPP_vm04-59290-23", "var": "allow_ec_overwrites", "val": "true"}]': finished
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: cluster 2026-03-10T10:16:28.490695+0000 mon.a (mon.0) 990 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.494173+0000 mon.b (mon.1) 79 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.498549+0000 mon.b (mon.1) 80 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.498670+0000 mon.b (mon.1) 81 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.498803+0000 mon.b (mon.1) 82 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.506110+0000 mon.a (mon.0) 991 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-4-cache", "mode": "writeback"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.510309+0000 mon.a (mon.0) 992 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.516207+0000 mon.a (mon.0) 993 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.516482+0000 mon.a (mon.0) 994 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.516682+0000 mon.a (mon.0) 995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]: dispatch
2026-03-10T10:16:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.516795+0000 mon.a (mon.0) 996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]: dispatch
2026-03-10T10:16:29.551 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN list: Running main() from gmock_main.cc
2026-03-10T10:16:29.551 INFO:tasks.workunit.client.0.vm04.stdout: list: [==========] Running 3 tests from 1 test suite.
2026-03-10T10:16:29.551 INFO:tasks.workunit.client.0.vm04.stdout: list: [----------] Global test environment set-up.
2026-03-10T10:16:29.551 INFO:tasks.workunit.client.0.vm04.stdout: list: [----------] 3 tests from NeoradosList
2026-03-10T10:16:29.551 INFO:tasks.workunit.client.0.vm04.stdout: list: [ RUN ] NeoradosList.ListObjects
2026-03-10T10:16:29.551 INFO:tasks.workunit.client.0.vm04.stdout: list: [ OK ] NeoradosList.ListObjects (2619 ms)
2026-03-10T10:16:29.551 INFO:tasks.workunit.client.0.vm04.stdout: list: [ RUN ] NeoradosList.ListObjectsNS
2026-03-10T10:16:29.551 INFO:tasks.workunit.client.0.vm04.stdout: list: [ OK ] NeoradosList.ListObjectsNS (3119 ms)
2026-03-10T10:16:29.551 INFO:tasks.workunit.client.0.vm04.stdout: list: [ RUN ] NeoradosList.ListObjectsMany
2026-03-10T10:16:29.551 INFO:tasks.workunit.client.0.vm04.stdout: list: [ OK ] NeoradosList.ListObjectsMany (3138 ms)
2026-03-10T10:16:29.552 INFO:tasks.workunit.client.0.vm04.stdout: list: [----------] 3 tests from NeoradosList (8876 ms total)
2026-03-10T10:16:29.552 INFO:tasks.workunit.client.0.vm04.stdout: list:
2026-03-10T10:16:29.552 INFO:tasks.workunit.client.0.vm04.stdout: list: [----------] Global test environment tear-down
2026-03-10T10:16:29.552 INFO:tasks.workunit.client.0.vm04.stdout: list: [==========] 3 tests from 1 test suite ran. (8876 ms total)
2026-03-10T10:16:29.552 INFO:tasks.workunit.client.0.vm04.stdout: list: [ PASSED ] 3 tests.
2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:28 vm07 bash[23367]: audit 2026-03-10T10:16:28.332633+0000 mgr.y (mgr.24422) 101 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.333836+0000 mgr.y (mgr.24422) 102 : audit [DBG] from='mon.? -' entity='mon.'
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.333836+0000 mgr.y (mgr.24422) 102 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.334911+0000 mgr.y (mgr.24422) 103 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.334911+0000 mgr.y (mgr.24422) 103 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.335933+0000 mgr.y (mgr.24422) 104 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.335933+0000 mgr.y (mgr.24422) 104 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.337096+0000 mgr.y (mgr.24422) 105 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.337096+0000 mgr.y (mgr.24422) 105 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.337898+0000 mgr.y (mgr.24422) 106 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.337898+0000 mgr.y (mgr.24422) 106 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.339289+0000 mgr.y (mgr.24422) 107 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.339289+0000 mgr.y (mgr.24422) 107 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.341144+0000 mgr.y (mgr.24422) 108 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.341144+0000 mgr.y (mgr.24422) 108 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.341689+0000 mgr.y (mgr.24422) 109 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.341689+0000 mgr.y (mgr.24422) 109 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.342819+0000 mgr.y (mgr.24422) 110 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.342819+0000 mgr.y (mgr.24422) 110 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: cluster 2026-03-10T10:16:28.611993+0000 mon.a (mon.0) 997 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: cluster 2026-03-10T10:16:28.611993+0000 mon.a (mon.0) 997 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.614821+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.614821+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.982098+0000 mon.c (mon.2) 66 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:28.982098+0000 mon.c (mon.2) 66 : audit [DBG] from='client.? 
2026-03-10T10:16:29.637 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: cluster 2026-03-10T10:16:29.018785+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts
2026-03-10T10:16:29.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: cluster 2026-03-10T10:16:29.020520+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok
2026-03-10T10:16:29.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: cluster 2026-03-10T10:16:29.469411+0000 mon.a (mon.0) 999 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:16:29.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.473639+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]': finished
2026-03-10T10:16:29.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.473726+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-4-cache", "mode": "writeback"}]': finished
2026-03-10T10:16:29.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.474055+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:29.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.474077+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]': finished
2026-03-10T10:16:29.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.474113+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]': finished
2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.332633+0000 mgr.y (mgr.24422) 101 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.333836+0000 mgr.y (mgr.24422) 102 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch
2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.334911+0000 mgr.y (mgr.24422) 103 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch
2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.335933+0000 mgr.y (mgr.24422) 104 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.337096+0000 mgr.y (mgr.24422) 105 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.337096+0000 mgr.y (mgr.24422) 105 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.337898+0000 mgr.y (mgr.24422) 106 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.337898+0000 mgr.y (mgr.24422) 106 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.339289+0000 mgr.y (mgr.24422) 107 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.339289+0000 mgr.y (mgr.24422) 107 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.341144+0000 mgr.y (mgr.24422) 108 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.341144+0000 mgr.y (mgr.24422) 108 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.341689+0000 mgr.y (mgr.24422) 109 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.341689+0000 mgr.y (mgr.24422) 109 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.342819+0000 mgr.y (mgr.24422) 110 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.342819+0000 mgr.y (mgr.24422) 110 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: cluster 2026-03-10T10:16:28.611993+0000 mon.a (mon.0) 997 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: cluster 2026-03-10T10:16:28.611993+0000 mon.a (mon.0) 997 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.614821+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.614821+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.982098+0000 mon.c (mon.2) 66 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:28.982098+0000 mon.c (mon.2) 66 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: cluster 2026-03-10T10:16:29.018785+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: cluster 2026-03-10T10:16:29.018785+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: cluster 2026-03-10T10:16:29.020520+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: cluster 2026-03-10T10:16:29.020520+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: cluster 2026-03-10T10:16:29.469411+0000 mon.a (mon.0) 999 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: cluster 2026-03-10T10:16:29.469411+0000 mon.a (mon.0) 999 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.473639+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]': finished 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.473639+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]': finished 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.473726+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-4-cache", "mode": "writeback"}]': finished 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.473726+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-4-cache", "mode": "writeback"}]': finished 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.474055+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.474055+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.332633+0000 mgr.y (mgr.24422) 101 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-10T10:16:29.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.332633+0000 mgr.y (mgr.24422) 101 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.333836+0000 mgr.y (mgr.24422) 102 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.333836+0000 mgr.y (mgr.24422) 102 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.334911+0000 mgr.y (mgr.24422) 103 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.334911+0000 mgr.y (mgr.24422) 103 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.335933+0000 mgr.y (mgr.24422) 104 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.335933+0000 mgr.y (mgr.24422) 104 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.337096+0000 mgr.y (mgr.24422) 105 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.337096+0000 mgr.y (mgr.24422) 105 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.337898+0000 mgr.y (mgr.24422) 106 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.337898+0000 mgr.y (mgr.24422) 106 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.339289+0000 mgr.y (mgr.24422) 107 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.339289+0000 mgr.y (mgr.24422) 107 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.341144+0000 mgr.y (mgr.24422) 108 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.341144+0000 mgr.y (mgr.24422) 108 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.341689+0000 mgr.y (mgr.24422) 109 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.341689+0000 mgr.y (mgr.24422) 109 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.342819+0000 mgr.y (mgr.24422) 110 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.342819+0000 mgr.y (mgr.24422) 110 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: cluster 2026-03-10T10:16:28.611993+0000 mon.a (mon.0) 997 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: cluster 2026-03-10T10:16:28.611993+0000 mon.a (mon.0) 997 : cluster [WRN] Health check update: 17 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.614821+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.614821+0000 mon.a (mon.0) 998 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"log", "logtext":["fourxx"]}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.982098+0000 mon.c (mon.2) 66 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:28.982098+0000 mon.c (mon.2) 66 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: cluster 2026-03-10T10:16:29.018785+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: cluster 2026-03-10T10:16:29.018785+0000 osd.6 (osd.6) 3 : cluster [DBG] 15.1 deep-scrub starts 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: cluster 2026-03-10T10:16:29.020520+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: cluster 2026-03-10T10:16:29.020520+0000 osd.6 (osd.6) 4 : cluster [DBG] 15.1 deep-scrub ok 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: cluster 2026-03-10T10:16:29.469411+0000 mon.a (mon.0) 999 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: cluster 2026-03-10T10:16:29.469411+0000 mon.a (mon.0) 999 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.473639+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.473639+0000 mon.a (mon.0) 1000 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.473726+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-4-cache", "mode": "writeback"}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.473726+0000 mon.a (mon.0) 1001 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-4-cache", "mode": "writeback"}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.474055+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.474055+0000 mon.a (mon.0) 1002 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.474077+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.474077+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.474113+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.474113+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.474148+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.474148+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.480295+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.480295+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.480389+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.480389+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.480534+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.480534+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: cluster 2026-03-10T10:16:29.507921+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: cluster 2026-03-10T10:16:29.507921+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.516789+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.516789+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.518856+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.518856+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.521032+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.521032+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.521347+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.521347+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.535090+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:29.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:29 vm04 bash[28289]: audit 2026-03-10T10:16:29.535090+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.474077+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]': finished 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.474077+0000 mon.a (mon.0) 1003 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatECPP_vm04-59599-7"}]': finished 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.474113+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]': finished 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.474113+0000 mon.a (mon.0) 1004 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosStatEC_vm04-59578-7"}]': finished 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.474148+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.474148+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.480295+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.480295+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.480389+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.480389+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.480534+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.480534+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: cluster 2026-03-10T10:16:29.507921+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: cluster 2026-03-10T10:16:29.507921+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.516789+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 
192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.516789+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.518856+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.518856+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.521032+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.521032+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.521347+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.521347+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.535090+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:29.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:29 vm04 bash[20742]: audit 2026-03-10T10:16:29.535090+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.474148+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.474148+0000 mon.a (mon.0) 1005 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.480295+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.480295+0000 mon.b (mon.1) 83 : audit [INF] from='client.? 192.168.123.104:0/355751671' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.480389+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.480389+0000 mon.b (mon.1) 84 : audit [INF] from='client.? 192.168.123.104:0/828729033' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.480534+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.480534+0000 mon.b (mon.1) 85 : audit [INF] from='client.? 192.168.123.104:0/546846441' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: cluster 2026-03-10T10:16:29.507921+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: cluster 2026-03-10T10:16:29.507921+0000 mon.a (mon.0) 1006 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.516789+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.516789+0000 mon.b (mon.1) 86 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.518856+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.518856+0000 mon.a (mon.0) 1007 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.521032+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.521032+0000 mon.a (mon.0) 1008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.521347+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.521347+0000 mon.a (mon.0) 1009 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.535090+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:29 vm07 bash[23367]: audit 2026-03-10T10:16:29.535090+0000 mon.a (mon.0) 1010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.556 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: Running main() from gmock_main.cc 2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [==========] Running 9 tests from 2 test suites. 2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [----------] Global test environment set-up. 
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: seed 59599
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPP
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPP (544 ms)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.Stat2Mtime2PP
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ OK ] LibRadosStatPP.Stat2Mtime2PP (34 ms)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.ClusterStatPP
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ OK ] LibRadosStatPP.ClusterStatPP (5 ms)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.PoolStatPP
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ OK ] LibRadosStatPP.PoolStatPP (11 ms)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ RUN ] LibRadosStatPP.StatPPNS
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ OK ] LibRadosStatPP.StatPPNS (30 ms)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [----------] 5 tests from LibRadosStatPP (624 ms total)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp:
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPP
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPP (1325 ms)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.ClusterStatPP
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.ClusterStatPP (0 ms)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.PoolStatPP
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.PoolStatPP (10 ms)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ RUN ] LibRadosStatECPP.StatPPNS
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ OK ] LibRadosStatECPP.StatPPNS (58 ms)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [----------] 4 tests from LibRadosStatECPP (1393 ms total)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp:
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [----------] Global test environment tear-down
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [==========] 9 tests from 2 test suites ran. (10195 ms total)
2026-03-10T10:16:30.557 INFO:tasks.workunit.client.0.vm04.stdout: api_stat_pp: [ PASSED ] 9 tests.
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: Running main() from gmock_main.cc
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [==========] Running 9 tests from 2 test suites.
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [----------] Global test environment set-up.
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [----------] 5 tests from LibRadosStat
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ RUN ] LibRadosStat.Stat
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ OK ] LibRadosStat.Stat (529 ms)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ RUN ] LibRadosStat.Stat2
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ OK ] LibRadosStat.Stat2 (7 ms)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ RUN ] LibRadosStat.StatNS
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ OK ] LibRadosStat.StatNS (74 ms)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ RUN ] LibRadosStat.ClusterStat
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ OK ] LibRadosStat.ClusterStat (0 ms)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ RUN ] LibRadosStat.PoolStat
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ OK ] LibRadosStat.PoolStat (10 ms)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [----------] 5 tests from LibRadosStat (620 ms total)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat:
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [----------] 4 tests from LibRadosStatEC
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ RUN ] LibRadosStatEC.Stat
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ OK ] LibRadosStatEC.Stat (1271 ms)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ RUN ] LibRadosStatEC.StatNS
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ OK ] LibRadosStatEC.StatNS (71 ms)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ RUN ] LibRadosStatEC.ClusterStat
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ OK ] LibRadosStatEC.ClusterStat (0 ms)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ RUN ] LibRadosStatEC.PoolStat
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ OK ] LibRadosStatEC.PoolStat (28 ms)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [----------] 4 tests from LibRadosStatEC (1370 ms total)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat:
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [----------] Global test environment tear-down
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [==========] 9 tests from 2 test suites ran. (10216 ms total)
2026-03-10T10:16:30.571 INFO:tasks.workunit.client.0.vm04.stdout: api_stat: [ PASSED ] 9 tests.
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: Running main() from gmock_main.cc
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [==========] Running 24 tests from 2 test suites.
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [----------] Global test environment set-up.
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [----------] 14 tests from LibRadosIo
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.SimpleWrite
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.SimpleWrite (710 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.TooBig
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.TooBig (0 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.ReadTimeout
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: no timeout :/
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: no timeout :/
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: no timeout :/
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: no timeout :/
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: no timeout :/
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.ReadTimeout (50 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.RoundTrip
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.RoundTrip (36 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.Checksum
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.Checksum (9 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.OverlappingWriteRoundTrip
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.OverlappingWriteRoundTrip (5 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.WriteFullRoundTrip
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.WriteFullRoundTrip (10 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.AppendRoundTrip
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.AppendRoundTrip (23 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.ZeroLenZero
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.ZeroLenZero (6 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.TruncTest
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.TruncTest (22 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.RemoveTest
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.RemoveTest (6 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.XattrsRoundTrip
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.XattrsRoundTrip (12 ms)
2026-03-10T10:16:30.580 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.RmXattr
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.RmXattr (33 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIo.XattrIter
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIo.XattrIter (13 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [----------] 14 tests from LibRadosIo (936 ms total)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io:
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [----------] 10 tests from LibRadosIoEC
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIoEC.SimpleWrite
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIoEC.SimpleWrite (1363 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIoEC.RoundTrip
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIoEC.RoundTrip (155 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIoEC.OverlappingWriteRoundTrip
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIoEC.OverlappingWriteRoundTrip (10 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIoEC.WriteFullRoundTrip
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIoEC.WriteFullRoundTrip (4 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIoEC.AppendRoundTrip
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIoEC.AppendRoundTrip (15 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIoEC.TruncTest
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIoEC.TruncTest (148 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIoEC.RemoveTest
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIoEC.RemoveTest (5 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIoEC.XattrsRoundTrip
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIoEC.XattrsRoundTrip (5 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIoEC.RmXattr
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIoEC.RmXattr (190 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ RUN ] LibRadosIoEC.XattrIter
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ OK ] LibRadosIoEC.XattrIter (29 ms)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [----------] 10 tests from LibRadosIoEC (1924 ms total)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io:
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [----------] Global test environment tear-down
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [==========] 24 tests from 2 test suites ran. (10407 ms total)
2026-03-10T10:16:30.581 INFO:tasks.workunit.client.0.vm04.stdout: api_io: [ PASSED ] 24 tests.
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout:md=[{"prefix": "osd pool create", "pool": "ReadOpvm04-60121-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.527263+0000 mon.a [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-4", "overlaypool": "test-rados-api-vm04-59491-4-cache"}]: dispatch
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.527351+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"log", "logtext":["threexx"]}]: dispatch
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.527537+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafe_vm04-59252-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.527696+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.527834+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.542558+0000 mon.a [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.548676+0000 mon.a [INF] from='client.? 192.168.123.104:0/2792562220' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "PoolQuotaPP_vm04-59259-3", "field": "max_bytes", "val": "4096"}]: dispatch
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.552231+0000 mon.b [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: got: 2026-03-10T10:16:27.576721+0000 mon.a [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsNSvm04-60174-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [ OK ] LibRadosCmd.WatchLog (7411 ms)
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [----------] 4 tests from LibRadosCmd (10198 ms total)
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd:
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [----------] Global test environment tear-down
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [==========] 4 tests from 1 test suite ran. (10202 ms total)
2026-03-10T10:16:30.629 INFO:tasks.workunit.client.0.vm04.stdout: api_cmd: [ PASSED ] 4 tests.
2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:28.706636+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.2 deep-scrub starts 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:28.706636+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.2 deep-scrub starts 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:28.713471+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.2 deep-scrub ok 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:28.713471+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.2 deep-scrub ok 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:28.919898+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:28.919898+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:28.920447+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:28.920447+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:29.218711+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:29.218711+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:29.219937+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:29.219937+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:29.688622+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59675-6", "pg_num": 4}]: dispatch 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:29.688622+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59675-6", "pg_num": 4}]: dispatch 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:29.713251+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-4"}]: dispatch 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:29.713251+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? 
192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-4"}]: dispatch 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:29.981448+0000 mgr.y (mgr.24422) 111 : cluster [DBG] pgmap v66: 764 pgs: 3 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 89 unknown, 11 creating+activating, 1 active, 49 creating+peering, 602 active+clean; 144 MiB data, 654 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 37 MiB/s wr, 510 op/s 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:29.981448+0000 mgr.y (mgr.24422) 111 : cluster [DBG] pgmap v66: 764 pgs: 3 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 89 unknown, 11 creating+activating, 1 active, 49 creating+peering, 602 active+clean; 144 MiB data, 654 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 37 MiB/s wr, 510 op/s 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:29.982826+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:29.982826+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:30.061349+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts 2026-03-10T10:16:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:30.061349+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:28.706636+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.2 deep-scrub starts 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:28.706636+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.2 deep-scrub starts 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:28.713471+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.2 deep-scrub ok 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:28.713471+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.2 deep-scrub ok 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:28.919898+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:28.919898+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:28.920447+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:28.920447+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 
bash[28289]: cluster 2026-03-10T10:16:29.218711+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:29.218711+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:29.219937+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:29.219937+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:29.688622+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59675-6", "pg_num": 4}]: dispatch 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:29.688622+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59675-6", "pg_num": 4}]: dispatch 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:29.713251+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-4"}]: dispatch 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:29.713251+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-4"}]: dispatch 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:29.981448+0000 mgr.y (mgr.24422) 111 : cluster [DBG] pgmap v66: 764 pgs: 3 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 89 unknown, 11 creating+activating, 1 active, 49 creating+peering, 602 active+clean; 144 MiB data, 654 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 37 MiB/s wr, 510 op/s 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:29.981448+0000 mgr.y (mgr.24422) 111 : cluster [DBG] pgmap v66: 764 pgs: 3 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 89 unknown, 11 creating+activating, 1 active, 49 creating+peering, 602 active+clean; 144 MiB data, 654 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 37 MiB/s wr, 510 op/s 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:29.982826+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:29.982826+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:30.061349+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:30.061349+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:30.062216+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:30.062216+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544120+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544120+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544187+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544187+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544211+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544211+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544230+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544230+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544256+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544256+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544325+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59675-6", "pg_num": 4}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544325+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59675-6", "pg_num": 4}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544351+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-4"}]': finished 2026-03-10T10:16:30.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.544351+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-4"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:30.551529+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: cluster 2026-03-10T10:16:30.551529+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.552469+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.552469+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.553503+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 
192.168.123.104:0/2974857334' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.553503+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.104:0/2974857334' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.558017+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.104:0/2115781352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.558017+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.104:0/2115781352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.580838+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.104:0/507877230' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.580838+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.104:0/507877230' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.584558+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.584558+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.585228+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.585228+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? 
192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.585365+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.585365+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.586588+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.586588+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.586785+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.586785+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.586985+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:30 vm04 bash[28289]: audit 2026-03-10T10:16:30.586985+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:30.062216+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:30.062216+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544120+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544120+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544187+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544187+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544211+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544211+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544230+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544230+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544256+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544256+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544325+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59675-6", "pg_num": 4}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544325+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? 
192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59675-6", "pg_num": 4}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544351+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-4"}]': finished 2026-03-10T10:16:30.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.544351+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-4"}]': finished 2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:30.551529+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: cluster 2026-03-10T10:16:30.551529+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.552469+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.552469+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.553503+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.104:0/2974857334' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.553503+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.104:0/2974857334' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.558017+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.104:0/2115781352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.558017+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.104:0/2115781352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.580838+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 
192.168.123.104:0/507877230' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.584558+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]: dispatch
2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.585228+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.585365+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.586588+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.586785+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:30.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:30 vm04 bash[20742]: audit 2026-03-10T10:16:30.586985+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: cluster 2026-03-10T10:16:28.706636+0000 osd.2 (osd.2) 3 : cluster [DBG] 15.2 deep-scrub starts
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: cluster 2026-03-10T10:16:28.713471+0000 osd.2 (osd.2) 4 : cluster [DBG] 15.2 deep-scrub ok
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: cluster 2026-03-10T10:16:28.919898+0000 osd.0 (osd.0) 3 : cluster [DBG] 15.6 deep-scrub starts
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: cluster 2026-03-10T10:16:28.920447+0000 osd.0 (osd.0) 4 : cluster [DBG] 15.6 deep-scrub ok
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: cluster 2026-03-10T10:16:29.218711+0000 osd.4 (osd.4) 3 : cluster [DBG] 15.3 deep-scrub starts
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: cluster 2026-03-10T10:16:29.219937+0000 osd.4 (osd.4) 4 : cluster [DBG] 15.3 deep-scrub ok
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:29.688622+0000 mon.a (mon.0) 1011 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59675-6", "pg_num": 4}]: dispatch
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:29.713251+0000 mon.a (mon.0) 1012 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-4"}]: dispatch
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: cluster 2026-03-10T10:16:29.981448+0000 mgr.y (mgr.24422) 111 : cluster [DBG] pgmap v66: 764 pgs: 3 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 89 unknown, 11 creating+activating, 1 active, 49 creating+peering, 602 active+clean; 144 MiB data, 654 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 37 MiB/s wr, 510 op/s
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:29.982826+0000 mon.c (mon.2) 67 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: cluster 2026-03-10T10:16:30.061349+0000 osd.6 (osd.6) 5 : cluster [DBG] 15.9 deep-scrub starts
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: cluster 2026-03-10T10:16:30.062216+0000 osd.6 (osd.6) 6 : cluster [DBG] 15.9 deep-scrub ok
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.544120+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsNSvm04-60174-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsNSvm04-60174-2"}]': finished
2026-03-10T10:16:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.544187+0000 mon.a (mon.0) 1014 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatECPP_vm04-59599-7"}]': finished
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.544211+0000 mon.a (mon.0) 1015 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosStatEC_vm04-59578-7"}]': finished
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.544230+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoEC_vm04-59274-16"}]': finished
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.544256+0000 mon.a (mon.0) 1017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.544325+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59675-6", "pg_num": 4}]': finished
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.544351+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-4"}]': finished
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: cluster 2026-03-10T10:16:30.551529+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.552469+0000 mon.b (mon.1) 87 : audit [INF] from='client.? 192.168.123.104:0/764571464' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.553503+0000 mon.b (mon.1) 88 : audit [INF] from='client.? 192.168.123.104:0/2974857334' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.558017+0000 mon.b (mon.1) 89 : audit [INF] from='client.? 192.168.123.104:0/2115781352' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.580838+0000 mon.b (mon.1) 90 : audit [INF] from='client.? 192.168.123.104:0/507877230' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.584558+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]: dispatch
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.585228+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.585365+0000 mon.a (mon.0) 1023 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.586588+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.586785+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:30 vm07 bash[23367]: audit 2026-03-10T10:16:30.586985+0000 mon.a (mon.0) 1026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: cluster 2026-03-10T10:16:30.815280+0000 mon.a (mon.0) 1027 : cluster [WRN] pool 'PoolQuotaPP_vm04-59259-3' is full (reached quota's max_bytes: 4 KiB)
2026-03-10T10:16:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: cluster 2026-03-10T10:16:30.816520+0000 mon.a (mon.0) 1028 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T10:16:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: cluster 2026-03-10T10:16:30.816532+0000 mon.a (mon.0) 1029 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:16:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:30.983561+0000 mon.c (mon.2) 68 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: cluster 2026-03-10T10:16:31.029867+0000 osd.6 (osd.6) 7 : cluster [DBG] 15.5 deep-scrub starts
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: cluster 2026-03-10T10:16:31.030893+0000 osd.6 (osd.6) 8 : cluster [DBG] 15.5 deep-scrub ok
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.175269+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]': finished
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.175567+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.175600+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.175709+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.175891+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.175947+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.206491+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: cluster 2026-03-10T10:16:31.207465+0000 mon.a (mon.0) 1036 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.216592+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "overlaypool": "test-rados-api-vm04-59675-6"}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.216907+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.224278+0000 mon.c (mon.2) 69 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.224714+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.231366+0000 mon.c (mon.2) 70 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.231737+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.232350+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:31 vm04 bash[28289]: audit 2026-03-10T10:16:31.232599+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: cluster 2026-03-10T10:16:30.815280+0000 mon.a (mon.0) 1027 : cluster [WRN] pool 'PoolQuotaPP_vm04-59259-3' is full (reached quota's max_bytes: 4 KiB)
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: cluster 2026-03-10T10:16:30.816520+0000 mon.a (mon.0) 1028 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: cluster 2026-03-10T10:16:30.816532+0000 mon.a (mon.0) 1029 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:30.983561+0000 mon.c (mon.2) 68 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: cluster 2026-03-10T10:16:31.029867+0000 osd.6 (osd.6) 7 : cluster [DBG] 15.5 deep-scrub starts
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: cluster 2026-03-10T10:16:31.030893+0000 osd.6 (osd.6) 8 : cluster [DBG] 15.5 deep-scrub ok
2026-03-10T10:16:32.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.175269+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]': finished
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.175567+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.175600+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.175709+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.175891+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.175947+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.206491+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: cluster 2026-03-10T10:16:31.207465+0000 mon.a (mon.0) 1036 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.216592+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "overlaypool": "test-rados-api-vm04-59675-6"}]: dispatch
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.216907+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.224278+0000 mon.c (mon.2) 69 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.224714+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.231366+0000 mon.c (mon.2) 70 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.231737+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.232350+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:32.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:31 vm04 bash[20742]: audit 2026-03-10T10:16:31.232599+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: cluster 2026-03-10T10:16:30.815280+0000 mon.a (mon.0) 1027 : cluster [WRN] pool 'PoolQuotaPP_vm04-59259-3' is full (reached quota's max_bytes: 4 KiB)
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: cluster 2026-03-10T10:16:30.816520+0000 mon.a (mon.0) 1028 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: cluster 2026-03-10T10:16:30.816532+0000 mon.a (mon.0) 1029 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:30.983561+0000 mon.c (mon.2) 68 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: cluster 2026-03-10T10:16:31.029867+0000 osd.6 (osd.6) 7 : cluster [DBG] 15.5 deep-scrub starts
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: cluster 2026-03-10T10:16:31.030893+0000 osd.6 (osd.6) 8 : cluster [DBG] 15.5 deep-scrub ok
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.175269+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? 192.168.123.104:0/69592159' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-4", "tierpool": "test-rados-api-vm04-59491-4-cache"}]': finished
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.175567+0000 mon.a (mon.0) 1031 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.175600+0000 mon.a (mon.0) 1032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.175709+0000 mon.a (mon.0) 1033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.175891+0000 mon.a (mon.0) 1034 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip_vm04-59252-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.175947+0000 mon.a (mon.0) 1035 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.206491+0000 mon.b (mon.1) 91 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: cluster 2026-03-10T10:16:31.207465+0000 mon.a (mon.0) 1036 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in
2026-03-10T10:16:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.216592+0000 mon.a (mon.0) 1037 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "overlaypool": "test-rados-api-vm04-59675-6"}]: dispatch
2026-03-10T10:16:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.216907+0000 mon.a (mon.0) 1038 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.224278+0000 mon.c (mon.2) 69 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.224714+0000 mon.a (mon.0) 1039 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.231366+0000 mon.c (mon.2) 70 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.231737+0000 mon.a (mon.0) 1040 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.232350+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:31 vm07 bash[23367]: audit 2026-03-10T10:16:31.232599+0000 mon.a (mon.0) 1041 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: cluster 2026-03-10T10:16:31.982035+0000 mgr.y (mgr.24422) 112 : cluster [DBG] pgmap v69: 832 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 295 unknown, 21 creating+peering, 511 active+clean; 144 MiB data, 654 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 37 MiB/s wr, 509 op/s
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:31.984407+0000 mon.c (mon.2) 72 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.179773+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "overlaypool": "test-rados-api-vm04-59675-6"}]': finished
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.179830+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm04-60121-2"}]': finished
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.179853+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.186129+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.207364+0000 mon.c (mon.2) 74 : audit [INF] from='client.? 192.168.123.104:0/1137720615' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: cluster 2026-03-10T10:16:32.208160+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.209476+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.224558+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.230150+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59675-6", "mode": "writeback"}]: dispatch
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.230418+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm04-60121-2"}]: dispatch
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.230558+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.230847+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.231013+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.231013+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.231237+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.231237+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.231296+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.231296+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.234123+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.234123+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.234281+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.234281+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.255258+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.255258+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 
192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.300981+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:32 vm04 bash[28289]: audit 2026-03-10T10:16:32.300981+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: cluster 2026-03-10T10:16:31.982035+0000 mgr.y (mgr.24422) 112 : cluster [DBG] pgmap v69: 832 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 295 unknown, 21 creating+peering, 511 active+clean; 144 MiB data, 654 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 37 MiB/s wr, 509 op/s 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: cluster 2026-03-10T10:16:31.982035+0000 mgr.y (mgr.24422) 112 : cluster [DBG] pgmap v69: 832 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 295 unknown, 21 creating+peering, 511 active+clean; 144 MiB data, 654 MiB used, 159 GiB / 160 GiB avail; 12 MiB/s rd, 37 MiB/s wr, 509 op/s 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:31.984407+0000 mon.c (mon.2) 72 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:31.984407+0000 mon.c (mon.2) 72 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.179773+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "overlaypool": "test-rados-api-vm04-59675-6"}]': finished 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.179773+0000 mon.a (mon.0) 1042 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "overlaypool": "test-rados-api-vm04-59675-6"}]': finished 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.179830+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm04-60121-2"}]': finished 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.179830+0000 mon.a (mon.0) 1043 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReadOpvm04-60121-2"}]': finished 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.179853+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.179853+0000 mon.a (mon.0) 1044 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosIoECPP_vm04-59290-23", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.186129+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.186129+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.207364+0000 mon.c (mon.2) 74 : audit [INF] from='client.? 192.168.123.104:0/1137720615' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.207364+0000 mon.c (mon.2) 74 : audit [INF] from='client.? 192.168.123.104:0/1137720615' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: cluster 2026-03-10T10:16:32.208160+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: cluster 2026-03-10T10:16:32.208160+0000 mon.a (mon.0) 1045 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.209476+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.209476+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 
192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.224558+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.224558+0000 mon.b (mon.1) 92 : audit [INF] from='client.? 192.168.123.104:0/1401890826' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.230150+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59675-6", "mode": "writeback"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.230150+0000 mon.a (mon.0) 1046 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59675-6", "mode": "writeback"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.230418+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.230418+0000 mon.a (mon.0) 1047 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReadOpvm04-60121-2"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.230558+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.230558+0000 mon.a (mon.0) 1048 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.230847+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.230847+0000 mon.b (mon.1) 93 : audit [INF] from='client.? 
192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.231013+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.231013+0000 mon.a (mon.0) 1049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.231237+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.231237+0000 mon.a (mon.0) 1050 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.231296+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.231296+0000 mon.b (mon.1) 94 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.234123+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.234123+0000 mon.a (mon.0) 1051 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.234281+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.234281+0000 mon.a (mon.0) 1052 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.255258+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.255258+0000 mon.b (mon.1) 95 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.300981+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:32 vm04 bash[20742]: audit 2026-03-10T10:16:32.300981+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.205 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:16:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:16:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: Running main() from gmock_main.cc 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [==========] Running 2 tests from 1 test suite. 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [----------] Global test environment set-up. 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [----------] 2 tests from NeoRadosECIo 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [ RUN ] NeoRadosECIo.SimpleWrite 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [ OK ] NeoRadosECIo.SimpleWrite (5727 ms) 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [ RUN ] NeoRadosECIo.ReadOp 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [ OK ] NeoRadosECIo.ReadOp (6857 ms) 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [----------] 2 tests from NeoRadosECIo (12584 ms total) 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [----------] Global test environment tear-down 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [==========] 2 tests from 1 test suite ran. (12584 ms total) 2026-03-10T10:16:33.258 INFO:tasks.workunit.client.0.vm04.stdout: ec_io: [ PASSED ] 2 tests. 
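The NeoRadosECIo results above, and the audit trail preceding them, show the librados EC test fixtures at work: each test case creates an erasure-code profile (k=2, m=1, crush-failure-domain=osd), builds an erasure-coded pool from it, and on teardown removes the pool, the CRUSH rule the pool creation generated, and the profile. A minimal sketch of that setup/teardown sequence driven through the ceph CLI from Python follows; the testprofile-example and pool-example names are placeholders, not the generated testprofile-* names logged above.

    import subprocess

    def ceph(*args):
        # Run a ceph CLI command; each of these appears in the mon audit log
        # as a cmd=[{"prefix": ...}] dispatch/finished pair like the entries above.
        subprocess.run(["ceph", *args], check=True)

    # Profile matching the audited parameters: k=2 data chunks, m=1 coding chunk,
    # failure domain = osd (needed here since the cluster has only two hosts).
    ceph("osd", "erasure-code-profile", "set", "testprofile-example",
         "k=2", "m=1", "crush-failure-domain=osd")
    # Erasure-coded pool from that profile, 8 PGs / 8 PGPs as in the
    # "osd pool create" entries above.
    ceph("osd", "pool", "create", "pool-example", "8", "8",
         "erasure", "testprofile-example")
    # Teardown mirrors the "rm" entries: the pool first, then the CRUSH rule
    # (named after the pool), then the profile itself.
    ceph("osd", "pool", "delete", "pool-example", "pool-example",
         "--yes-i-really-really-mean-it")
    ceph("osd", "crush", "rule", "rm", "pool-example")
    ceph("osd", "erasure-code-profile", "rm", "testprofile-example")

Note that "osd pool delete" only succeeds when the cluster allows pool deletion (mon_allow_pool_delete), which the rados API test environment enables.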
192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:32 vm07 bash[23367]: audit 2026-03-10T10:16:32.300981+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:33.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:32 vm07 bash[23367]: audit 2026-03-10T10:16:32.300981+0000 mon.a (mon.0) 1053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:34.206 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:32.985239+0000 mon.c (mon.2) 76 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:34.206 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:32.985239+0000 mon.c (mon.2) 76 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:34.206 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: cluster 2026-03-10T10:16:33.180885+0000 mon.a (mon.0) 1054 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:16:34.206 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: cluster 2026-03-10T10:16:33.180885+0000 mon.a (mon.0) 1054 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:16:34.206 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.185844+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59675-6", "mode": "writeback"}]': finished 2026-03-10T10:16:34.206 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.185844+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59675-6", "mode": "writeback"}]': finished 2026-03-10T10:16:34.206 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.185880+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm04-60121-2"}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.185880+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm04-60121-2"}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.186001+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.186001+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.186038+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.186038+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.186076+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.186076+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.186148+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.186148+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.186170+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.186170+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: cluster 2026-03-10T10:16:33.223337+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: cluster 2026-03-10T10:16:33.223337+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.262944+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.262944+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.263572+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.263572+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.264025+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm04-60174-2"}]: dispatch 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.264025+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm04-60174-2"}]: dispatch 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.264573+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.104:0/22343092' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.264573+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.104:0/22343092' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.265455+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 
192.168.123.104:0/3756706135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm04-59252-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.265992+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.104:0/3195318628' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.269507+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.269720+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.269795+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.269898+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm04-59252-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.269962+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.207 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.270244+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.287599+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.295174+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.295310+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.300013+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.300763+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.302139+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.373030+0000 mon.b (mon.1) 102 : audit [DBG] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.375582+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:33 vm04 bash[28289]: audit 2026-03-10T10:16:33.377636+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
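Each of these audit records is a JSON-formatted monitor command sent by one of the test clients during pool setup and teardown; "dispatch" marks receipt by a monitor and "finished" marks completion of the same command. A minimal sketch of issuing the same command shape through the librados Python binding's mon_command(); the pool name here is a placeholder, not one from this run:

```python
# Sketch: dispatch one of the mon commands seen in the audit log above
# via the librados Python binding. The pool name is a placeholder.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    cmd = json.dumps({
        "prefix": "osd pool application enable",
        "pool": "example-pool",            # placeholder, not from this run
        "app": "rados",
        "yes_i_really_mean_it": True,
    })
    # mon_command() returns (retcode, output bytes, status string)
    ret, out, status = cluster.mon_command(cmd, b'')
    if ret != 0:
        raise RuntimeError(status)
finally:
    cluster.shutdown()
```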
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:32.985239+0000 mon.c (mon.2) 76 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: cluster 2026-03-10T10:16:33.180885+0000 mon.a (mon.0) 1054 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.185844+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59675-6", "mode": "writeback"}]': finished
2026-03-10T10:16:34.208 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.185880+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm04-60121-2"}]': finished
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.186001+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.186038+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]': finished
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.186076+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]': finished
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.186148+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]': finished
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.186170+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: cluster 2026-03-10T10:16:33.223337+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.262944+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.263572+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.264025+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.264573+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.104:0/22343092' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.265455+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.104:0/3756706135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm04-59252-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.265992+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.104:0/3195318628' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.269507+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.269720+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.269795+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.269898+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm04-59252-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.269962+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.270244+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.287599+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.295174+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.209 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.295310+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.300013+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:34.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.300763+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.302139+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:34.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.373030+0000 mon.b (mon.1) 102 : audit [DBG] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch
2026-03-10T10:16:34.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.375582+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:33 vm04 bash[20742]: audit 2026-03-10T10:16:33.377636+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: Running main() from gmock_main.cc
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [==========] Running 16 tests from 2 test suites.
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [----------] Global test environment set-up.
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: seed 59409
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusivePP
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusivePP (681 ms)
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedPP
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedPP (39 ms)
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockExclusiveDurPP
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockExclusiveDurPP (1013 ms)
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockSharedDurPP
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockSharedDurPP (1005 ms)
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.LockMayRenewPP
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockPP.LockMayRenewPP (5 ms)
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.UnlockPP
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockPP.UnlockPP (5 ms)
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.ListLockersPP
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockPP.ListLockersPP (8 ms)
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockPP.BreakLockPP
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockPP.BreakLockPP (2 ms)
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockPP (2758 ms total)
2026-03-10T10:16:34.234 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp:
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusivePP
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusivePP (1183 ms)
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedPP
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedPP (22 ms)
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockExclusiveDurPP
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockExclusiveDurPP (1075 ms)
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockSharedDurPP
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockSharedDurPP (1094 ms)
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.LockMayRenewPP
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.LockMayRenewPP (4 ms)
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.UnlockPP
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.UnlockPP (5 ms)
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.ListLockersPP
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.ListLockersPP (5 ms)
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ RUN ] LibRadosLockECPP.BreakLockPP
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ OK ] LibRadosLockECPP.BreakLockPP (4 ms)
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [----------] 8 tests from LibRadosLockECPP (3392 ms total)
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp:
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [----------] Global test environment tear-down
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [==========] 16 tests from 2 test suites ran. (13983 ms total)
2026-03-10T10:16:34.235 INFO:tasks.workunit.client.0.vm04.stdout: api_lock_pp: [ PASSED ] 16 tests.
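api_lock_pp (and api_lock further down) exercise the librados advisory-lock API on replicated and erasure-coded pools: exclusive and shared locks, timed locks, renewal, unlock, locker listing, and lock breaking. A minimal sketch of the same calls through the librados Python binding, assuming a reachable cluster; pool, object, lock, and cookie names are placeholders:

```python
# Sketch of the advisory-lock calls the LibRadosLock* tests cover.
# Pool, object, lock, and cookie names are placeholders.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('example-pool')   # placeholder pool
    # Exclusive lock (cf. LibRadosLockPP.LockExclusivePP); taking the
    # same lock with a different cookie while held would fail with EBUSY.
    ioctx.lock_exclusive('obj', 'lock1', 'cookie-a', desc='demo')
    ioctx.unlock('obj', 'lock1', 'cookie-a')
    # Shared lock with a tag (cf. LibRadosLockPP.LockSharedPP).
    ioctx.lock_shared('obj', 'lock2', 'cookie-b', 'tag', desc='demo')
    ioctx.unlock('obj', 'lock2', 'cookie-b')
    ioctx.close()
finally:
    cluster.shutdown()
```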
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: Running main() from gmock_main.cc
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [==========] Running 6 tests from 1 test suite.
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [----------] Global test environment set-up.
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [----------] 6 tests from NeoRadosPools
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ RUN ] NeoRadosPools.PoolList
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ OK ] NeoRadosPools.PoolList (1565 ms)
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ RUN ] NeoRadosPools.PoolLookup
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ OK ] NeoRadosPools.PoolLookup (1911 ms)
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ RUN ] NeoRadosPools.PoolLookupOtherInstance
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ OK ] NeoRadosPools.PoolLookupOtherInstance (2220 ms)
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ RUN ] NeoRadosPools.PoolDelete
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ OK ] NeoRadosPools.PoolDelete (4129 ms)
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateDelete
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ OK ] NeoRadosPools.PoolCreateDelete (1658 ms)
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ RUN ] NeoRadosPools.PoolCreateWithCrushRule
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ OK ] NeoRadosPools.PoolCreateWithCrushRule (2028 ms)
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [----------] 6 tests from NeoRadosPools (13511 ms total)
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool:
2026-03-10T10:16:34.247 INFO:tasks.workunit.client.0.vm04.stdout: pool: [----------] Global test environment tear-down
2026-03-10T10:16:34.248 INFO:tasks.workunit.client.0.vm04.stdout: pool: [==========] 6 tests from 1 test suite ran. (13511 ms total)
2026-03-10T10:16:34.248 INFO:tasks.workunit.client.0.vm04.stdout: pool: [ PASSED ] 6 tests.
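The NeoRadosPools suite covers the pool lifecycle: listing, name/id lookup, creation (optionally with a CRUSH rule), and deletion. The classic librados binding exposes the same operations; a rough Python equivalent with a placeholder pool name (delete_pool additionally requires mon_allow_pool_delete=true on the monitors):

```python
# Sketch of the pool lifecycle the NeoRadosPools tests walk through.
# The pool name is a placeholder; requires mon_allow_pool_delete=true.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    cluster.create_pool('demo-pool')                  # cf. PoolCreateDelete
    assert cluster.pool_exists('demo-pool')
    assert 'demo-pool' in cluster.list_pools()        # cf. PoolList
    pool_id = cluster.pool_lookup('demo-pool')        # cf. PoolLookup: name -> id
    assert cluster.pool_reverse_lookup(pool_id) == 'demo-pool'
    cluster.delete_pool('demo-pool')                  # cf. PoolDelete
finally:
    cluster.shutdown()
```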
2026-03-10T10:16:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:32.985239+0000 mon.c (mon.2) 76 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: cluster 2026-03-10T10:16:33.180885+0000 mon.a (mon.0) 1054 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:16:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.185844+0000 mon.a (mon.0) 1055 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59675-6", "mode": "writeback"}]': finished
2026-03-10T10:16:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.185880+0000 mon.a (mon.0) 1056 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReadOpvm04-60121-2"}]': finished
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.186001+0000 mon.a (mon.0) 1057 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManaged_vm04-59531-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.186038+0000 mon.a (mon.0) 1058 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockEC_vm04-59364-10"}]': finished
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.186076+0000 mon.a (mon.0) 1059 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosLockECPP_vm04-59409-10"}]': finished
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.186148+0000 mon.a (mon.0) 1060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsNSvm04-60174-2"}]': finished
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.186170+0000 mon.a (mon.0) 1061 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: cluster 2026-03-10T10:16:33.223337+0000 mon.a (mon.0) 1062 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.262944+0000 mon.c (mon.2) 77 : audit [INF] from='client.? 192.168.123.104:0/1636541385' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.263572+0000 mon.b (mon.1) 96 : audit [INF] from='client.? 192.168.123.104:0/2503752920' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.264025+0000 mon.b (mon.1) 97 : audit [INF] from='client.? 192.168.123.104:0/3349108748' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.264573+0000 mon.b (mon.1) 98 : audit [INF] from='client.? 192.168.123.104:0/22343092' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.265455+0000 mon.c (mon.2) 78 : audit [INF] from='client.? 192.168.123.104:0/3756706135' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm04-59252-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.265992+0000 mon.c (mon.2) 79 : audit [INF] from='client.? 192.168.123.104:0/3195318628' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.269507+0000 mon.a (mon.0) 1063 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.269720+0000 mon.a (mon.0) 1064 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.269795+0000 mon.a (mon.0) 1065 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm04-60174-2"}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.269898+0000 mon.a (mon.0) 1066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm04-59252-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.269962+0000 mon.a (mon.0) 1067 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.270244+0000 mon.a (mon.0) 1068 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-5","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.287599+0000 mon.b (mon.1) 99 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.295174+0000 mon.b (mon.1) 100 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.295310+0000 mon.a (mon.0) 1069 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.300013+0000 mon.b (mon.1) 101 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:34.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.300763+0000 mon.a (mon.0) 1070 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:34.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.302139+0000 mon.a (mon.0) 1071 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:34.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.373030+0000 mon.b (mon.1) 102 : audit [DBG] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd dump"}]: dispatch
2026-03-10T10:16:34.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.375582+0000 mon.b (mon.1) 103 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:33 vm07 bash[23367]: audit 2026-03-10T10:16:33.377636+0000 mon.a (mon.0) 1072 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app1","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: Running main() from gmock_main.cc
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [==========] Running 16 tests from 2 test suites.
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [----------] Global test environment set-up.
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [----------] 8 tests from LibRadosLock
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusive
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLock.LockExclusive (545 ms)
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLock.LockShared
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLock.LockShared (31 ms)
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLock.LockExclusiveDur
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLock.LockExclusiveDur (1020 ms)
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLock.LockSharedDur
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLock.LockSharedDur (1005 ms)
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLock.LockMayRenew
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLock.LockMayRenew (5 ms)
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLock.Unlock
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLock.Unlock (5 ms)
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLock.ListLockers
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLock.ListLockers (12 ms)
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLock.BreakLock
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLock.BreakLock (3 ms)
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [----------] 8 tests from LibRadosLock (2626 ms total)
2026-03-10T10:16:34.268 INFO:tasks.workunit.client.0.vm04.stdout: api_lock:
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [----------] 8 tests from LibRadosLockEC
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusive
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusive (1192 ms)
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLockEC.LockShared
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLockEC.LockShared (23 ms)
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLockEC.LockExclusiveDur
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLockEC.LockExclusiveDur (1126 ms)
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLockEC.LockSharedDur
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLockEC.LockSharedDur (1114 ms)
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLockEC.LockMayRenew
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLockEC.LockMayRenew (5 ms)
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLockEC.Unlock
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLockEC.Unlock (4 ms)
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLockEC.ListLockers
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLockEC.ListLockers (4 ms)
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ RUN ] LibRadosLockEC.BreakLock
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ OK ] LibRadosLockEC.BreakLock (3 ms)
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [----------] 8 tests from LibRadosLockEC (3471 ms total)
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock:
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [----------] Global test environment tear-down
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [==========] 16 tests from 2 test suites ran. (14047 ms total)
2026-03-10T10:16:34.269 INFO:tasks.workunit.client.0.vm04.stdout: api_lock: [ PASSED ] 16 tests.
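The audit entries on either side of this test run show how the erasure-coded fixtures are assembled: "osd erasure-code-profile set" defines a k=2, m=1 profile with an OSD failure domain, then "osd pool create" makes an erasure pool referencing it (see the LibRadosIoECPP_vm04-59290-23 record just below). A sketch of the same two mon commands through the librados Python binding; the profile and pool names here are placeholders:

```python
# Sketch: create an EC profile and an erasure-coded pool with the same
# mon commands seen in the audit log. Names are placeholders.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

def mon(cmd):
    """Dispatch one JSON mon command; raise if the retcode is nonzero."""
    ret, out, status = cluster.mon_command(json.dumps(cmd), b'')
    if ret != 0:
        raise RuntimeError(status)
    return out

try:
    mon({"prefix": "osd erasure-code-profile set",
         "name": "testprofile-demo",                  # placeholder profile
         "profile": ["k=2", "m=1", "crush-failure-domain=osd"]})
    mon({"prefix": "osd pool create",
         "pool": "ec-demo",                           # placeholder pool
         "pool_type": "erasure",
         "pg_num": 8, "pgp_num": 8,
         "erasure_code_profile": "testprofile-demo"})
finally:
    cluster.shutdown()
```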
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.191094+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]': finished
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.191125+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]': finished
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.191149+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm04-60174-2"}]': finished
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.191172+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm04-59252-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.191203+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.191233+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.191257+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.191287+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: cluster 2026-03-10T10:16:34.194268+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.227588+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm04-59623-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
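The audit trail above is the usual two-step erasure-coded pool setup that the EC test fixtures perform: 'osd erasure-code-profile set' first defines a profile (k=2 data chunks, m=1 coding chunk, failure domain 'osd'), then 'osd pool create' references it. A hedged sketch of how a client produces such a pair through python-rados' JSON command interface; the profile and pool names below are illustrative, not the ones in this log:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed conf path
    cluster.connect()
    for cmd in (
        # k=2 data + m=1 coding chunks, failure domain 'osd', as in the log
        {'prefix': 'osd erasure-code-profile set', 'name': 'testprofile-demo',
         'profile': ['k=2', 'm=1', 'crush-failure-domain=osd']},
        {'prefix': 'osd pool create', 'pool': 'demo-ec', 'pool_type': 'erasure',
         'pg_num': 8, 'pgp_num': 8, 'erasure_code_profile': 'testprofile-demo'},
    ):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        assert ret == 0, errs
    cluster.shutdown()

Each mutating command appears twice per monitor in the audit stream: once as 'dispatch' when it is received and once as 'finished' when the update commits.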
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.229118+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2"}]: dispatch
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.230078+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.230549+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm04-59623-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.232137+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.278381+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm04-60174-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:34 vm04 bash[28289]: audit 2026-03-10T10:16:34.278645+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm04-60174-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: cluster 2026-03-10T10:16:33.733007+0000 osd.1 (osd.1) 3 : cluster [DBG] 15.4 deep-scrub starts
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: cluster 2026-03-10T10:16:33.733699+0000 osd.1 (osd.1) 4 : cluster [DBG] 15.4 deep-scrub ok
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: cluster 2026-03-10T10:16:33.983375+0000 mgr.y (mgr.24422) 113 : cluster [DBG] pgmap v72: 744 pgs: 192 creating+peering, 1 creating+activating, 2 active+clean+snaptrim, 549 active+clean; 144 MiB data, 828 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 2.7 KiB/s wr, 5 op/s
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:33.985951+0000 mon.c (mon.2) 80 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.191034+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.191094+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]': finished
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.191125+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]': finished
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.191149+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm04-60174-2"}]': finished
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.191172+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm04-59252-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.191203+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.191233+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.191257+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.191287+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: cluster 2026-03-10T10:16:34.194268+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.227588+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm04-59623-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.229118+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2"}]: dispatch
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.230078+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.230549+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm04-59623-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.232137+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.278381+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm04-60174-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:35.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:34 vm04 bash[20742]: audit 2026-03-10T10:16:34.278645+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm04-60174-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: cluster 2026-03-10T10:16:33.733007+0000 osd.1 (osd.1) 3 : cluster [DBG] 15.4 deep-scrub starts
2026-03-10T10:16:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: cluster 2026-03-10T10:16:33.733699+0000 osd.1 (osd.1) 4 : cluster [DBG] 15.4 deep-scrub ok
2026-03-10T10:16:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: cluster 2026-03-10T10:16:33.983375+0000 mgr.y (mgr.24422) 113 : cluster [DBG] pgmap v72: 744 pgs: 192 creating+peering, 1 creating+activating, 2 active+clean+snaptrim, 549 active+clean; 144 MiB data, 828 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 2.7 KiB/s wr, 5 op/s
2026-03-10T10:16:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:33.985951+0000 mon.c (mon.2) 80 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.191034+0000 mon.a (mon.0) 1073 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosIoECPP_vm04-59290-23", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.191094+0000 mon.a (mon.0) 1074 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockECPP_vm04-59409-10"}]': finished
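From here the mon.b journal on vm07 replays the same records a third time (mon.a and mon.c run on vm04, mon.b on vm07). When working with a capture like this it is often easier to key on the payload after the bash[<pid>] marker, which is identical no matter which monitor's journal relayed it. A small hedged helper, not part of teuthology, that collapses the replays:

    import re

    def payload(line: str) -> str:
        # strip the teuthology timestamp, journal prefix and bash[<pid>] marker
        return re.sub(r'^.*bash\[\d+\]: ', '', line)

    def collapse(lines):
        seen = set()
        for line in lines:
            p = payload(line)
            if p not in seen:
                seen.add(p)
                yield line

Usage: collapse(open('teuthology.log')) yields each cluster/audit record the first time any monitor reports it.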
2026-03-10T10:16:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.191125+0000 mon.a (mon.0) 1075 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosLockEC_vm04-59364-10"}]': finished
2026-03-10T10:16:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.191149+0000 mon.a (mon.0) 1076 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsNSvm04-60174-2"}]': finished
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.191172+0000 mon.a (mon.0) 1077 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip2_vm04-59252-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.191203+0000 mon.a (mon.0) 1078 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.191233+0000 mon.a (mon.0) 1079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-5","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.191257+0000 mon.a (mon.0) 1080 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.191287+0000 mon.a (mon.0) 1081 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: cluster 2026-03-10T10:16:34.194268+0000 mon.a (mon.0) 1082 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.227588+0000 mon.b (mon.1) 104 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm04-59623-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.229118+0000 mon.b (mon.1) 105 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2"}]: dispatch
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.230078+0000 mon.b (mon.1) 106 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.230549+0000 mon.a (mon.0) 1083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm04-59623-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.232137+0000 mon.a (mon.0) 1084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.278381+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm04-60174-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:34 vm07 bash[23367]: audit 2026-03-10T10:16:34.278645+0000 mon.a (mon.0) 1085 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm04-60174-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: cluster 2026-03-10T10:16:34.698025+0000 osd.1 (osd.1) 5 : cluster [DBG] 15.7 deep-scrub starts
2026-03-10T10:16:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: cluster 2026-03-10T10:16:34.722688+0000 osd.1 (osd.1) 6 : cluster [DBG] 15.7 deep-scrub ok
2026-03-10T10:16:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: cluster 2026-03-10T10:16:34.851968+0000 mon.a (mon.0) 1086 : cluster [WRN] Health check update: 10 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: audit 2026-03-10T10:16:34.986636+0000 mon.c (mon.2) 82 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: audit 2026-03-10T10:16:35.325654+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: audit 2026-03-10T10:16:35.325787+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm04-60174-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
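The [WRN] above is routine for this workload: the rados API tests create pools faster than they tag them, so the POOL_APP_NOT_ENABLED count oscillates while freshly created pools carry no application. The warning clears for a pool as soon as it gets a tag, which is exactly what the surrounding 'osd pool application enable' audit entries do. A hedged equivalent through python-rados, with an illustrative pool name:

    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('demo-ec') as ioctx:
            # tag the pool so the mon stops counting it in POOL_APP_NOT_ENABLED
            ioctx.application_enable('rados')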
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: audit 2026-03-10T10:16:35.337134+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: audit 2026-03-10T10:16:35.376106+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"dne","key":"key","value":"value"}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: cluster 2026-03-10T10:16:35.384844+0000 mon.a (mon.0) 1089 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: audit 2026-03-10T10:16:35.386116+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: audit 2026-03-10T10:16:35.386363+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: audit 2026-03-10T10:16:35.386679+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: audit 2026-03-10T10:16:35.406063+0000 mon.a (mon.0) 1092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]: dispatch
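The two 'osd pool application set' dispatches above come from the LibRadosMisc application-metadata checks: writing key/value metadata under app1 (enabled on the pool earlier in this log) should succeed, while the same write under the never-enabled app "dne" is expected to be rejected. A hedged sketch of the pair via python-rados; the pool name is illustrative and the exact exception type may vary by release:

    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('somepool') as ioctx:
            try:
                ioctx.application_metadata_set('dne', 'key', 'value')
            except rados.Error as e:
                print('rejected, application not enabled:', e)
            ioctx.application_metadata_set('app1', 'key1', 'value1')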
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: cluster 2026-03-10T10:16:35.620223+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.8 deep-scrub starts
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:35 vm04 bash[20742]: cluster 2026-03-10T10:16:35.621038+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.8 deep-scrub ok
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: cluster 2026-03-10T10:16:34.698025+0000 osd.1 (osd.1) 5 : cluster [DBG] 15.7 deep-scrub starts
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: cluster 2026-03-10T10:16:34.722688+0000 osd.1 (osd.1) 6 : cluster [DBG] 15.7 deep-scrub ok
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: cluster 2026-03-10T10:16:34.851968+0000 mon.a (mon.0) 1086 : cluster [WRN] Health check update: 10 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: audit 2026-03-10T10:16:34.986636+0000 mon.c (mon.2) 82 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: audit 2026-03-10T10:16:35.325654+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: audit 2026-03-10T10:16:35.325787+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm04-60174-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: audit 2026-03-10T10:16:35.337134+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: audit 2026-03-10T10:16:35.376106+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"dne","key":"key","value":"value"}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: cluster 2026-03-10T10:16:35.384844+0000 mon.a (mon.0) 1089 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: audit 2026-03-10T10:16:35.386116+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: audit 2026-03-10T10:16:35.386363+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: audit 2026-03-10T10:16:35.386679+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]: dispatch
2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: audit 2026-03-10T10:16:35.406063+0000 mon.a (mon.0) 1092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]: dispatch
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: cluster 2026-03-10T10:16:35.620223+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.8 deep-scrub starts 2026-03-10T10:16:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: cluster 2026-03-10T10:16:35.620223+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.8 deep-scrub starts 2026-03-10T10:16:36.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: cluster 2026-03-10T10:16:35.621038+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.8 deep-scrub ok 2026-03-10T10:16:36.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:35 vm04 bash[28289]: cluster 2026-03-10T10:16:35.621038+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.8 deep-scrub ok 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:34.698025+0000 osd.1 (osd.1) 5 : cluster [DBG] 15.7 deep-scrub starts 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:34.698025+0000 osd.1 (osd.1) 5 : cluster [DBG] 15.7 deep-scrub starts 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:34.722688+0000 osd.1 (osd.1) 6 : cluster [DBG] 15.7 deep-scrub ok 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:34.722688+0000 osd.1 (osd.1) 6 : cluster [DBG] 15.7 deep-scrub ok 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:34.851968+0000 mon.a (mon.0) 1086 : cluster [WRN] Health check update: 10 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:34.851968+0000 mon.a (mon.0) 1086 : cluster [WRN] Health check update: 10 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:34.986636+0000 mon.c (mon.2) 82 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:34.986636+0000 mon.c (mon.2) 82 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.325654+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.325654+0000 mon.a (mon.0) 1087 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMisc_vm04-59423-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.325787+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm04-60174-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.325787+0000 mon.a (mon.0) 1088 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ListObjectsManyvm04-60174-3", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.337134+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.337134+0000 mon.c (mon.2) 83 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.376106+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.376106+0000 mon.b (mon.1) 107 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"dne","key":"key","value":"value"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:35.384844+0000 mon.a (mon.0) 1089 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:35.384844+0000 mon.a (mon.0) 1089 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.386116+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.386116+0000 mon.a (mon.0) 1090 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.386363+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.386363+0000 mon.a (mon.0) 1091 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.386679+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.386679+0000 mon.b (mon.1) 108 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.406063+0000 mon.a (mon.0) 1092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: audit 2026-03-10T10:16:35.406063+0000 mon.a (mon.0) 1092 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:35.620223+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.8 deep-scrub starts 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:35.620223+0000 osd.2 (osd.2) 5 : cluster [DBG] 15.8 deep-scrub starts 2026-03-10T10:16:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:35.621038+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.8 deep-scrub ok 2026-03-10T10:16:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:35 vm07 bash[23367]: cluster 2026-03-10T10:16:35.621038+0000 osd.2 (osd.2) 6 : cluster [DBG] 15.8 deep-scrub ok 2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: Running main() from gmock_main.cc 2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [==========] Running 16 tests from 2 test suites. 2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [----------] Global test environment set-up. 
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotify
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: notify
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotify (1117 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyECPP.WatchNotifyTimeout
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyECPP.WatchNotifyTimeout (66 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [----------] 2 tests from LibRadosWatchNotifyECPP (1183 ms total)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp:
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: notify
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/0 (165 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: notify
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 (3669 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/0 (7 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotifyTimeout/1 (10 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: handle_notify cookie 94713271486304 notify_id 335007449091 notifier_gid 14982
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/0 (7 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: handle_notify cookie 94713271505328 notify_id 335007449088 notifier_gid 14982
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2/1 (5 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: handle_notify cookie 94713271505328 notify_id 335007449092 notifier_gid 14982
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/0 (5 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: handle_notify cookie 94713271505328 notify_id 335007449089 notifier_gid 14982
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioWatchNotify2/1 (5 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: handle_notify cookie 94713271505328 notify_id 335007449093 notifier_gid 14982
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/0 (4 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: handle_notify cookie 94713271505328 notify_id 335007449090 notifier_gid 14982
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.AioNotify/1 (6 ms)
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: trying...
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: handle_notify cookie 94713271505328 notify_id 335007449091 notifier_gid 14982
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: timed out
2026-03-10T10:16:36.408 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: flushing
2026-03-10T10:16:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: cluster 2026-03-10T10:16:35.983848+0000 mgr.y (mgr.24422) 114 : cluster [DBG] pgmap v75: 560 pgs: 104 unknown, 1 creating+activating, 455 active+clean; 144 MiB data, 828 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:16:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:35.987335+0000 mon.c (mon.2) 84 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.335157+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm04-59623-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]': finished
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.335233+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.335260+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]': finished
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.380844+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: cluster 2026-03-10T10:16:36.391601+0000 mon.a (mon.0) 1096 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.392133+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.104:0/4264529946' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.392279+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.104:0/2810964492' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.392589+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.104:0/117643029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.394452+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.104:0/3213550344' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.395333+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.396733+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.407677+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: cluster 2026-03-10T10:16:35.983848+0000 mgr.y (mgr.24422) 114 : cluster [DBG] pgmap v75: 560 pgs: 104 unknown, 1 creating+activating, 455 active+clean; 144 MiB data, 828 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:35.987335+0000 mon.c (mon.2) 84 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.335157+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm04-59623-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]': finished
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.335233+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.335260+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]': finished
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.380844+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: cluster 2026-03-10T10:16:36.391601+0000 mon.a (mon.0) 1096 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.392133+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.104:0/4264529946' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.392279+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.104:0/2810964492' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.392589+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.104:0/117643029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.394452+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.104:0/3213550344' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.395333+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.396733+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.407677+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.407800+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.407855+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.407910+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: audit 2026-03-10T10:16:36.428268+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: cluster 2026-03-10T10:16:36.611417+0000 osd.2 (osd.2) 7 : cluster [DBG] 15.0 deep-scrub starts
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:36 vm04 bash[28289]: cluster 2026-03-10T10:16:36.612878+0000 osd.2 (osd.2) 8 : cluster [DBG] 15.0 deep-scrub ok
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.407800+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.407855+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.407910+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: audit 2026-03-10T10:16:36.428268+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: cluster 2026-03-10T10:16:36.611417+0000 osd.2 (osd.2) 7 : cluster [DBG] 15.0 deep-scrub starts
2026-03-10T10:16:37.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:36 vm04 bash[20742]: cluster 2026-03-10T10:16:36.612878+0000 osd.2 (osd.2) 8 : cluster [DBG] 15.0 deep-scrub ok
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: cluster 2026-03-10T10:16:35.983848+0000 mgr.y (mgr.24422) 114 : cluster [DBG] pgmap v75: 560 pgs: 104 unknown, 1 creating+activating, 455 active+clean; 144 MiB data, 828 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2 op/s
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:35.987335+0000 mon.c (mon.2) 84 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.335157+0000 mon.a (mon.0) 1093 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosWatchNotifyEC_vm04-59623-12", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]': finished
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.335233+0000 mon.a (mon.0) 1094 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.335260+0000 mon.a (mon.0) 1095 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1","value":"value1"}]': finished
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.380844+0000 mon.b (mon.1) 109 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: cluster 2026-03-10T10:16:36.391601+0000 mon.a (mon.0) 1096 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.392133+0000 mon.b (mon.1) 110 : audit [INF] from='client.? 192.168.123.104:0/4264529946' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.392279+0000 mon.c (mon.2) 85 : audit [INF] from='client.? 192.168.123.104:0/2810964492' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.392589+0000 mon.b (mon.1) 111 : audit [INF] from='client.? 192.168.123.104:0/117643029' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.394452+0000 mon.b (mon.1) 112 : audit [INF] from='client.? 192.168.123.104:0/3213550344' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.395333+0000 mon.a (mon.0) 1097 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.396733+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.407677+0000 mon.a (mon.0) 1098 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.407800+0000 mon.a (mon.0) 1099 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.407855+0000 mon.a (mon.0) 1100 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.407910+0000 mon.a (mon.0) 1101 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: audit 2026-03-10T10:16:36.428268+0000 mon.a (mon.0) 1102 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: cluster 2026-03-10T10:16:36.611417+0000 osd.2 (osd.2) 7 : cluster [DBG] 15.0 deep-scrub starts
2026-03-10T10:16:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:36 vm07 bash[23367]: cluster 2026-03-10T10:16:36.612878+0000 osd.2 (osd.2) 8 : cluster [DBG] 15.0 deep-scrub ok
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:36.988146+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: cluster 2026-03-10T10:16:37.335847+0000 mon.a (mon.0) 1103 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.340481+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]': finished
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.340548+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]': finished
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.340607+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.340649+0000 mon.a (mon.0) 1107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.340708+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.340866+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.340916+0000 mon.a (mon.0) 1110 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: cluster 2026-03-10T10:16:37.368792+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.371062+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.377575+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch
2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.377823+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:38.192 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.377823+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:38.193 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.392775+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.193 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.392775+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.193 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.403103+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.193 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:37 vm07 bash[23367]: audit 2026-03-10T10:16:37.403103+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:36.988146+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:36.988146+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: cluster 2026-03-10T10:16:37.335847+0000 mon.a (mon.0) 1103 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T10:16:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: cluster 2026-03-10T10:16:37.335847+0000 mon.a (mon.0) 1103 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T10:16:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340481+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340481+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340548+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T10:16:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340548+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T10:16:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340607+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340607+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340649+0000 mon.a (mon.0) 1107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340649+0000 mon.a (mon.0) 1107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340708+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340708+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340866+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340866+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340916+0000 mon.a (mon.0) 1110 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.340916+0000 mon.a (mon.0) 1110 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: cluster 2026-03-10T10:16:37.368792+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: cluster 2026-03-10T10:16:37.368792+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.371062+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.371062+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.377575+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.377575+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.377823+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.377823+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.392775+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 
192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.392775+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.403103+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:37 vm04 bash[28289]: audit 2026-03-10T10:16:37.403103+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:36.988146+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:36.988146+0000 mon.c (mon.2) 87 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: cluster 2026-03-10T10:16:37.335847+0000 mon.a (mon.0) 1103 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: cluster 2026-03-10T10:16:37.335847+0000 mon.a (mon.0) 1103 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340481+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340481+0000 mon.a (mon.0) 1104 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ListObjectsManyvm04-60174-3", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340548+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340548+0000 mon.a (mon.0) 1105 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340607+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340607+0000 mon.a (mon.0) 1106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340649+0000 mon.a (mon.0) 1107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340649+0000 mon.a (mon.0) 1107 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340708+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340708+0000 mon.a (mon.0) 1108 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTrip3_vm04-59252-6","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340866+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340866+0000 mon.a (mon.0) 1109 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosSnapshotsSelfManagedPP_vm04-59541-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340916+0000 mon.a (mon.0) 1110 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.340916+0000 mon.a (mon.0) 1110 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosIoECPP_vm04-59290-23"}]': finished 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: cluster 2026-03-10T10:16:37.368792+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: cluster 2026-03-10T10:16:37.368792+0000 mon.a (mon.0) 1111 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.371062+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.371062+0000 mon.a (mon.0) 1112 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.377575+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.377575+0000 mon.c (mon.2) 88 : audit [INF] from='client.? 192.168.123.104:0/245677537' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.377823+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.377823+0000 mon.a (mon.0) 1113 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]: dispatch 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.392775+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.392775+0000 mon.b (mon.1) 113 : audit [INF] from='client.? 192.168.123.104:0/3658234945' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.403103+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:37 vm04 bash[20742]: audit 2026-03-10T10:16:37.403103+0000 mon.a (mon.0) 1114 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_wa ] LibRadosIoECPP.RoundTripPP2 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RoundTripPP2 (24 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.OverlappingWriteRoundTripPP 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.OverlappingWriteRoundTripPP (7 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP (7 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.WriteFullRoundTripPP2 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.WriteFullRoundTripPP2 (4 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.AppendRoundTripPP 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.AppendRoundTripPP (134 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.TruncTestPP 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.TruncTestPP (13 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RemoveTestPP 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RemoveTestPP (22 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrsRoundTripPP 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrsRoundTripPP (9 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.RmXattrPP 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.RmXattrPP (20 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CrcZeroWrite 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CrcZeroWrite (6080 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.XattrListPP 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.XattrListPP (1279 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtPP 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtPP (7 ms) 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtDNEPP 2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtDNEPP (2 ms) 
2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ RUN ] LibRadosIoECPP.CmpExtMismatchPP
2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ OK ] LibRadosIoECPP.CmpExtMismatchPP (3 ms)
2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [----------] 18 tests from LibRadosIoECPP (9124 ms total)
2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp:
2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [----------] Global test environment tear-down
2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [==========] 39 tests from 2 test suites ran. (18197 ms total)
2026-03-10T10:16:38.383 INFO:tasks.workunit.client.0.vm04.stdout: api_io_pp: [ PASSED ] 39 tests.
2026-03-10T10:16:38.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:16:38 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:38 vm04 bash[28289]: cluster 2026-03-10T10:16:37.984344+0000 mgr.y (mgr.24422) 115 : cluster [DBG] pgmap v78: 760 pgs: 336 unknown, 1 creating+activating, 423 active+clean; 144 MiB data, 828 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:38 vm04 bash[28289]: audit 2026-03-10T10:16:37.988885+0000 mon.c (mon.2) 89 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:38 vm04 bash[28289]: audit 2026-03-10T10:16:38.192599+0000 mgr.y (mgr.24422) 116 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:38 vm04 bash[28289]: audit 2026-03-10T10:16:38.345160+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:38 vm04 bash[28289]: audit 2026-03-10T10:16:38.345285+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:38 vm04 bash[28289]: audit 2026-03-10T10:16:38.345328+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]': finished
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:38 vm04 bash[28289]: cluster 2026-03-10T10:16:38.392563+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:38 vm04 bash[28289]: audit 2026-03-10T10:16:38.393834+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:38 vm04 bash[28289]: audit 2026-03-10T10:16:38.397738+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:38 vm04 bash[28289]: audit 2026-03-10T10:16:38.676117+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:38 vm04 bash[20742]: cluster 2026-03-10T10:16:37.984344+0000 mgr.y (mgr.24422) 115 : cluster [DBG] pgmap v78: 760 pgs: 336 unknown, 1 creating+activating, 423 active+clean; 144 MiB data, 828 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:38 vm04 bash[20742]: audit 2026-03-10T10:16:37.988885+0000 mon.c (mon.2) 89 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:38 vm04 bash[20742]: audit 2026-03-10T10:16:38.192599+0000 mgr.y (mgr.24422) 116 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:38 vm04 bash[20742]: audit 2026-03-10T10:16:38.345160+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:38 vm04 bash[20742]: audit 2026-03-10T10:16:38.345285+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:39.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:38 vm04 bash[20742]: audit 2026-03-10T10:16:38.345328+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]': finished
2026-03-10T10:16:39.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:38 vm04 bash[20742]: cluster 2026-03-10T10:16:38.392563+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T10:16:39.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:38 vm04 bash[20742]: audit 2026-03-10T10:16:38.393834+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:39.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:38 vm04 bash[20742]: audit 2026-03-10T10:16:38.397738+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:39.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:38 vm04 bash[20742]: audit 2026-03-10T10:16:38.676117+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:16:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:38 vm07 bash[23367]: cluster 2026-03-10T10:16:37.984344+0000 mgr.y (mgr.24422) 115 : cluster [DBG] pgmap v78: 760 pgs: 336 unknown, 1 creating+activating, 423 active+clean; 144 MiB data, 828 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:38 vm07 bash[23367]: audit 2026-03-10T10:16:37.988885+0000 mon.c (mon.2) 89 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:38 vm07 bash[23367]: audit 2026-03-10T10:16:38.192599+0000 mgr.y (mgr.24422) 116 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:38 vm07 bash[23367]: audit 2026-03-10T10:16:38.345160+0000 mon.a (mon.0) 1115 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:38 vm07 bash[23367]: audit 2026-03-10T10:16:38.345285+0000 mon.a (mon.0) 1116 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosIoECPP_vm04-59290-23"}]': finished
2026-03-10T10:16:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:38 vm07 bash[23367]: audit 2026-03-10T10:16:38.345328+0000 mon.a (mon.0) 1117 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMisc_vm04-59423-1","app":"app1","key":"key1"}]': finished
2026-03-10T10:16:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:38 vm07 bash[23367]: cluster 2026-03-10T10:16:38.392563+0000 mon.a (mon.0) 1118 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T10:16:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:38 vm07 bash[23367]: audit 2026-03-10T10:16:38.393834+0000 mon.b (mon.1) 114 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:38 vm07 bash[23367]: audit 2026-03-10T10:16:38.397738+0000 mon.a (mon.0) 1119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:38 vm07 bash[23367]: audit 2026-03-10T10:16:38.676117+0000 mon.a (mon.0) 1120 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:16:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:38.990803+0000 mon.c (mon.2) 90 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.349358+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]': finished
2026-03-10T10:16:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.349390+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:16:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.358573+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: cluster 2026-03-10T10:16:39.369274+0000 mon.a (mon.0) 1123 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-10T10:16:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.370977+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-7"}]: dispatch
2026-03-10T10:16:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.371152+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch
2026-03-10T10:16:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.374369+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.104:0/1274009318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.374611+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.104:0/584488720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.374801+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.104:0/3813677498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.382924+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.383710+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.383865+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:39 vm04 bash[20742]: audit 2026-03-10T10:16:39.383865+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:38.990803+0000 mon.c (mon.2) 90 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:38.990803+0000 mon.c (mon.2) 90 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.349358+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]': finished 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.349358+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]': finished 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.349390+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.349390+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.358573+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 
192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.358573+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: cluster 2026-03-10T10:16:39.369274+0000 mon.a (mon.0) 1123 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: cluster 2026-03-10T10:16:39.369274+0000 mon.a (mon.0) 1123 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.370977+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-7"}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.370977+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-7"}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.371152+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.371152+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.374369+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.104:0/1274009318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.374369+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.104:0/1274009318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.374611+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.104:0/584488720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.374611+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 
192.168.123.104:0/584488720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.374801+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.104:0/3813677498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.374801+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.104:0/3813677498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.382924+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.382924+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.383710+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.383710+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.383865+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:39 vm04 bash[28289]: audit 2026-03-10T10:16:39.383865+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:38.990803+0000 mon.c (mon.2) 90 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:38.990803+0000 mon.c (mon.2) 90 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.349358+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]': finished 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.349358+0000 mon.a (mon.0) 1121 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosWatchNotifyEC_vm04-59623-12"}]': finished 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.349390+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.349390+0000 mon.a (mon.0) 1122 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.358573+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.358573+0000 mon.b (mon.1) 115 : audit [INF] from='client.? 192.168.123.104:0/171784483' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: cluster 2026-03-10T10:16:39.369274+0000 mon.a (mon.0) 1123 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: cluster 2026-03-10T10:16:39.369274+0000 mon.a (mon.0) 1123 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.370977+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-7"}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.370977+0000 mon.a (mon.0) 1124 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-7"}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.371152+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.371152+0000 mon.a (mon.0) 1125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.374369+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.104:0/1274009318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.374369+0000 mon.b (mon.1) 116 : audit [INF] from='client.? 192.168.123.104:0/1274009318' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.374611+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.104:0/584488720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.374611+0000 mon.b (mon.1) 117 : audit [INF] from='client.? 192.168.123.104:0/584488720' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.374801+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.104:0/3813677498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.374801+0000 mon.b (mon.1) 118 : audit [INF] from='client.? 192.168.123.104:0/3813677498' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.382924+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.382924+0000 mon.a (mon.0) 1126 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.383710+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.383710+0000 mon.a (mon.0) 1127 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.383865+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:39 vm07 bash[23367]: audit 2026-03-10T10:16:39.383865+0000 mon.a (mon.0) 1128 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: Running main() from gmock_main.cc 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [==========] Running 11 tests from 2 test suites. 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [----------] Global test environment set-up. 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify_test_cb 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify (619 ms) 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch2Delete 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94519549003152 err -107 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: waiting up to 300 for disconnect notification ... 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch2Delete (28 ms) 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94519549003152 err -107 2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: waiting up to 300 for disconnect notification ... 
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete (25 ms)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_cb from 24641 notify_id 287762808832 cookie 94519549003152
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2 (15 ms)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchNotify2
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_cb from 24641 notify_id 287762808832 cookie 94519549054512
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchNotify2 (13 ms)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioNotify
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_cb from 24641 notify_id 287762808832 cookie 94519549040480
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioNotify (16 ms)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Multi
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_cb from 24641 notify_id 287762808833 cookie 94519549040480
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_cb from 24641 notify_id 287762808833 cookie 94519549071072
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Multi (23 ms)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.WatchNotify2Timeout
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_cb from 24641 notify_id 287762808833 cookie 94519549040480
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_cb from 24641 notify_id 292057776130 cookie 94519549040480
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.WatchNotify2Timeout (3127 ms)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.Watch3Timeout
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: waiting up to 1024 for osd to time us out ...
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94519549040480 err -107
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_cb from 24641 notify_id 326417514499 cookie 94519549040480
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.Watch3Timeout (5202 ms)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotify.AioWatchDelete2
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: waiting up to 30 for disconnect notification ...
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify2_test_errcb cookie 94519549040480 err -107
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotify.AioWatchDelete2 (1007 ms)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [----------] 10 tests from LibRadosWatchNotify (10075 ms total)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify:
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ RUN ] LibRadosWatchNotifyEC.WatchNotify
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: watch_notify_test_cb
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ OK ] LibRadosWatchNotifyEC.WatchNotify (1119 ms)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [----------] 1 test from LibRadosWatchNotifyEC (1119 ms total)
2026-03-10T10:16:40.699 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify:
2026-03-10T10:16:40.700 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [----------] Global test environment tear-down
2026-03-10T10:16:40.700 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [==========] 11 tests from 2 test suites ran. (20311 ms total)
2026-03-10T10:16:40.735 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify: [ PASSED ] 11 tests.
2026-03-10T10:16:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:40 vm04 bash[28289]: cluster 2026-03-10T10:16:39.984807+0000 mgr.y (mgr.24422) 117 : cluster [DBG] pgmap v81: 688 pgs: 1 creating+activating, 23 creating+peering, 192 unknown, 472 active+clean; 144 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T10:16:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:40 vm04 bash[28289]: audit 2026-03-10T10:16:39.992016+0000 mon.c (mon.2) 91 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:40 vm04 bash[28289]: audit 2026-03-10T10:16:40.551614+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-7"}]': finished
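The "err -107" lines in the watch/notify output above are expected: errno 107 is ENOTCONN on Linux, which librados delivers to the watch error callback when a watched object is deleted or the watch times out, and the tests then wait for that disconnect notification. The same round trip can be driven from the python-rados binding. A minimal sketch, assuming a reachable cluster, a python-rados recent enough to expose Ioctx.watch()/notify(), and placeholder conffile path and pool name; exact callback signatures vary across releases, so the handlers below just accept *args:

    import rados

    # 'test-pool' and the conffile path are placeholders, not from the run above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('test-pool')
    ioctx.write_full('watched-obj', b'payload')

    def on_notify(*args):
        # Called once per notify delivered to this watch.
        print('notify:', args)

    def on_error(*args):
        # Called on watch errors, e.g. -107 (-ENOTCONN) when the watched
        # object goes away, as in the Watch2Delete/AioWatchDelete tests.
        print('watch error:', args)

    watch = ioctx.watch('watched-obj', on_notify, on_error)
    ioctx.notify('watched-obj', 'hello')  # blocks until watchers ack or time out
    watch.close()
    ioctx.close()
    cluster.shutdown()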
2026-03-10T10:16:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:40 vm04 bash[28289]: audit 2026-03-10T10:16:40.551642+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]': finished
2026-03-10T10:16:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:40 vm04 bash[28289]: audit 2026-03-10T10:16:40.551664+0000 mon.a (mon.0) 1131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:40 vm04 bash[28289]: audit 2026-03-10T10:16:40.551683+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:40 vm04 bash[28289]: audit 2026-03-10T10:16:40.551702+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:40 vm04 bash[28289]: cluster 2026-03-10T10:16:40.555868+0000 mon.a (mon.0) 1134 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T10:16:41.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:40 vm04 bash[20742]: cluster 2026-03-10T10:16:39.984807+0000 mgr.y (mgr.24422) 117 : cluster [DBG] pgmap v81: 688 pgs: 1 creating+activating, 23 creating+peering, 192 unknown, 472 active+clean; 144 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T10:16:41.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:40 vm04 bash[20742]: audit 2026-03-10T10:16:39.992016+0000 mon.c (mon.2) 91 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:41.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:40 vm04 bash[20742]: audit 2026-03-10T10:16:40.551614+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-7"}]': finished
2026-03-10T10:16:41.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:40 vm04 bash[20742]: audit 2026-03-10T10:16:40.551642+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]': finished
2026-03-10T10:16:41.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:40 vm04 bash[20742]: audit 2026-03-10T10:16:40.551664+0000 mon.a (mon.0) 1131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:41.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:40 vm04 bash[20742]: audit 2026-03-10T10:16:40.551683+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:41.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:40 vm04 bash[20742]: audit 2026-03-10T10:16:40.551702+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:41.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:40 vm04 bash[20742]: cluster 2026-03-10T10:16:40.555868+0000 mon.a (mon.0) 1134 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T10:16:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:40 vm07 bash[23367]: cluster 2026-03-10T10:16:39.984807+0000 mgr.y (mgr.24422) 117 : cluster [DBG] pgmap v81: 688 pgs: 1 creating+activating, 23 creating+peering, 192 unknown, 472 active+clean; 144 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T10:16:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:40 vm07 bash[23367]: audit 2026-03-10T10:16:39.992016+0000 mon.c (mon.2) 91 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:40 vm07 bash[23367]: audit 2026-03-10T10:16:40.551614+0000 mon.a (mon.0) 1129 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-7"}]': finished
2026-03-10T10:16:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:40 vm07 bash[23367]: audit 2026-03-10T10:16:40.551642+0000 mon.a (mon.0) 1130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosWatchNotifyEC_vm04-59623-12"}]': finished
2026-03-10T10:16:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:40 vm07 bash[23367]: audit 2026-03-10T10:16:40.551664+0000 mon.a (mon.0) 1131 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59769-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:40 vm07 bash[23367]: audit 2026-03-10T10:16:40.551683+0000 mon.a (mon.0) 1132 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-4","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:40 vm07 bash[23367]: audit 2026-03-10T10:16:40.551702+0000 mon.a (mon.0) 1133 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppend_vm04-59252-7","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:40 vm07 bash[23367]: cluster 2026-03-10T10:16:40.555868+0000 mon.a (mon.0) 1134 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:40.949223+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:40.996258+0000 mon.c (mon.2) 92 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:41.185798+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: cluster 2026-03-10T10:16:41.201297+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:41.202497+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]: dispatch
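The audit trail above records a full cache-tier attach/detach cycle driven by the rados_api_tests workunit: "osd tier add" (op 1122), "osd tier set-overlay" (1124/1129), "osd tier remove-overlay" (1135/1136), and "osd tier remove" (1138), each logged once at dispatch and once at finished. The cmd=[{...}] payloads are ordinary monitor commands, and python-rados accepts exactly this JSON through Rados.mon_command(), so the cycle can be replayed by hand. A rough sketch under assumed pool names 'base' and 'cache' (the run's pool names are test-generated) and an assumed conffile path:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    def mon_cmd(**cmd):
        # mon_command() takes the same JSON that appears in the audit log's
        # cmd=[{...}] field; it returns (retcode, out_buffer, error_string).
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        if ret != 0:
            raise RuntimeError(f'{cmd["prefix"]} failed: {errs}')
        return out

    # The attach/detach cycle recorded above, in order:
    mon_cmd(prefix='osd tier add', pool='base', tierpool='cache',
            force_nonempty='--force-nonempty')
    mon_cmd(prefix='osd tier set-overlay', pool='base', overlaypool='cache')
    mon_cmd(prefix='osd tier remove-overlay', pool='base')
    mon_cmd(prefix='osd tier remove', pool='base', tierpool='cache')
    cluster.shutdown()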
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]: dispatch 2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:41.241929+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:41.241929+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:41.244598+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:41.244598+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:41.377183+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.104:0/1373066734' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:41.377183+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.104:0/1373066734' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:41.378145+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:42 vm04 bash[28289]: audit 2026-03-10T10:16:41.378145+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:40.949223+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:40.949223+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:40.996258+0000 mon.c (mon.2) 92 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:40.996258+0000 mon.c (mon.2) 92 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.185798+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.185798+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: cluster 2026-03-10T10:16:41.201297+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: cluster 2026-03-10T10:16:41.201297+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.202497+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.202497+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.241929+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.241929+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.244598+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.244598+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.377183+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.104:0/1373066734' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.377183+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.104:0/1373066734' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.378145+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:42 vm04 bash[20742]: audit 2026-03-10T10:16:41.378145+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:40.949223+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:40.949223+0000 mon.a (mon.0) 1135 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:40.996258+0000 mon.c (mon.2) 92 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:40.996258+0000 mon.c (mon.2) 92 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.185798+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.185798+0000 mon.a (mon.0) 1136 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: cluster 2026-03-10T10:16:41.201297+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: cluster 2026-03-10T10:16:41.201297+0000 mon.a (mon.0) 1137 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.202497+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.202497+0000 mon.a (mon.0) 1138 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.241929+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.241929+0000 mon.c (mon.2) 93 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.244598+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.244598+0000 mon.a (mon.0) 1139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.377183+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.104:0/1373066734' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.377183+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 
192.168.123.104:0/1373066734' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.378145+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:42 vm07 bash[23367]: audit 2026-03-10T10:16:41.378145+0000 mon.a (mon.0) 1140 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: Running main() from gmock_main.cc 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [==========] Running 3 tests from 1 test suite. 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [----------] Global test environment set-up. 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [----------] 3 tests from NeoradosECList 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [ RUN ] NeoradosECList.ListObjects 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [ OK ] NeoradosECList.ListObjects (6785 ms) 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsNS 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsNS (6735 ms) 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [ RUN ] NeoradosECList.ListObjectsMany 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [ OK ] NeoradosECList.ListObjectsMany (9032 ms) 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [----------] 3 tests from NeoradosECList (22552 ms total) 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [----------] Global test environment tear-down 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [==========] 3 tests from 1 test suite ran. (22552 ms total) 2026-03-10T10:16:43.260 INFO:tasks.workunit.client.0.vm04.stdout: ec_list: [ PASSED ] 3 tests. 
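
The `ec_list` block above is ordinary googletest output from one of the test binaries that the rados workunit (`rados/test.sh`) runs on client.0, and the interleaved `audit ... cmd=[{...}]` journal entries are the monitors' JSON record of the pool setup and teardown commands those tests issue. For reference, the same operations expressed as `ceph` CLI calls are sketched below; the pool, profile, and rule names are copied verbatim from the audit entries above, while the test binary name in the last command is only an assumption inferred from the `NeoradosECList` suite name and is not confirmed by this log.

    # CLI equivalents of the mon audit entries above (arguments verbatim from the log)
    ceph osd pool application enable test-rados-api-vm04-59507-7 rados --yes-i-really-mean-it
    ceph osd tier remove-overlay test-rados-api-vm04-59491-6
    ceph osd tier remove test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-7
    ceph osd erasure-code-profile rm testprofile-ListObjectsManyvm04-60174-3
    ceph osd crush rule rm ListObjectsManyvm04-60174-3
    ceph status --format json   # the roughly once-per-second poll seen as {"prefix":"status","format":"json"}
    # Hypothetical local re-run of just this suite; --gtest_filter is standard googletest,
    # but the binary name is assumed from the suite name and may differ in the build.
    ./bin/ceph_test_neorados_list_ec --gtest_filter='NeoradosECList.*'
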
2026-03-10T10:16:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:16:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:16:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: cluster 2026-03-10T10:16:41.985192+0000 mgr.y (mgr.24422) 118 : cluster [DBG] pgmap v84: 616 pgs: 1 creating+activating, 17 creating+peering, 128 unknown, 470 active+clean; 144 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: cluster 2026-03-10T10:16:41.985192+0000 mgr.y (mgr.24422) 118 : cluster [DBG] pgmap v84: 616 pgs: 1 creating+activating, 17 creating+peering, 128 unknown, 470 active+clean; 144 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:41.997552+0000 mon.c (mon.2) 95 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:41.997552+0000 mon.c (mon.2) 95 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: cluster 2026-03-10T10:16:42.185862+0000 mon.a (mon.0) 1141 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: cluster 2026-03-10T10:16:42.185862+0000 mon.a (mon.0) 1141 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.219421+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]': finished 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.219421+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]': finished 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.219732+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.219732+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.219789+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.219789+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: cluster 2026-03-10T10:16:42.247510+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: cluster 2026-03-10T10:16:42.247510+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.262806+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 192.168.123.104:0/4063617743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.262806+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 192.168.123.104:0/4063617743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.262900+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.262900+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.265207+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.265207+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.283774+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.283774+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.564576+0000 mon.a (mon.0) 1149 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:43.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.564576+0000 mon.a (mon.0) 1149 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:43.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.566846+0000 mon.a (mon.0) 1150 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:43.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.566846+0000 mon.a (mon.0) 1150 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:43.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.999427+0000 mon.c (mon.2) 97 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:43 vm04 bash[28289]: audit 2026-03-10T10:16:42.999427+0000 mon.c (mon.2) 97 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: cluster 2026-03-10T10:16:41.985192+0000 mgr.y (mgr.24422) 118 : cluster [DBG] pgmap v84: 616 pgs: 1 creating+activating, 17 creating+peering, 128 unknown, 470 active+clean; 144 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: cluster 2026-03-10T10:16:41.985192+0000 mgr.y (mgr.24422) 118 : cluster [DBG] pgmap v84: 616 pgs: 1 creating+activating, 17 creating+peering, 128 unknown, 470 active+clean; 144 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:41.997552+0000 mon.c (mon.2) 95 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:41.997552+0000 mon.c (mon.2) 95 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: cluster 2026-03-10T10:16:42.185862+0000 mon.a (mon.0) 1141 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: cluster 2026-03-10T10:16:42.185862+0000 mon.a (mon.0) 1141 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.219421+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]': finished 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.219421+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]': finished 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.219732+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.219732+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.219789+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.219789+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: cluster 2026-03-10T10:16:42.247510+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: cluster 2026-03-10T10:16:42.247510+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.262806+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 
192.168.123.104:0/4063617743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.262806+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 192.168.123.104:0/4063617743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.262900+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.262900+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.265207+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.265207+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.283774+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.283774+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.564576+0000 mon.a (mon.0) 1149 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.564576+0000 mon.a (mon.0) 1149 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:43.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.566846+0000 mon.a (mon.0) 1150 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:43.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.566846+0000 mon.a (mon.0) 1150 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:43.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.999427+0000 mon.c (mon.2) 97 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.457 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:43 vm04 bash[20742]: audit 2026-03-10T10:16:42.999427+0000 mon.c (mon.2) 97 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: cluster 2026-03-10T10:16:41.985192+0000 mgr.y (mgr.24422) 118 : cluster [DBG] pgmap v84: 616 pgs: 1 creating+activating, 17 creating+peering, 128 unknown, 470 active+clean; 144 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: cluster 2026-03-10T10:16:41.985192+0000 mgr.y (mgr.24422) 118 : cluster [DBG] pgmap v84: 616 pgs: 1 creating+activating, 17 creating+peering, 128 unknown, 470 active+clean; 144 MiB data, 838 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:41.997552+0000 mon.c (mon.2) 95 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:41.997552+0000 mon.c (mon.2) 95 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: cluster 2026-03-10T10:16:42.185862+0000 mon.a (mon.0) 1141 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: cluster 2026-03-10T10:16:42.185862+0000 mon.a (mon.0) 1141 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.219421+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]': finished 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.219421+0000 mon.a (mon.0) 1142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-7"}]': finished 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.219732+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.219732+0000 mon.a (mon.0) 1143 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.219789+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.219789+0000 mon.a (mon.0) 1144 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: cluster 2026-03-10T10:16:42.247510+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: cluster 2026-03-10T10:16:42.247510+0000 mon.a (mon.0) 1145 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.262806+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 
192.168.123.104:0/4063617743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.262806+0000 mon.a (mon.0) 1146 : audit [INF] from='client.? 192.168.123.104:0/4063617743' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.262900+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.262900+0000 mon.a (mon.0) 1147 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.265207+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.265207+0000 mon.c (mon.2) 96 : audit [INF] from='client.? 192.168.123.104:0/2099848889' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.283774+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.283774+0000 mon.a (mon.0) 1148 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.564576+0000 mon.a (mon.0) 1149 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.564576+0000 mon.a (mon.0) 1149 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.566846+0000 mon.a (mon.0) 1150 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.566846+0000 mon.a (mon.0) 1150 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.999427+0000 mon.c (mon.2) 97 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:43 vm07 bash[23367]: audit 2026-03-10T10:16:42.999427+0000 mon.c (mon.2) 97 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:44 vm07 bash[23367]: audit 2026-03-10T10:16:43.225572+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? 192.168.123.104:0/4063617743' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:44 vm07 bash[23367]: audit 2026-03-10T10:16:43.225572+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? 192.168.123.104:0/4063617743' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:44 vm07 bash[23367]: audit 2026-03-10T10:16:43.225602+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:44 vm07 bash[23367]: audit 2026-03-10T10:16:43.225602+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:44 vm07 bash[23367]: audit 2026-03-10T10:16:43.225618+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:44 vm07 bash[23367]: audit 2026-03-10T10:16:43.225618+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:44 vm07 bash[23367]: cluster 2026-03-10T10:16:43.233260+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T10:16:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:44 vm07 bash[23367]: cluster 2026-03-10T10:16:43.233260+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T10:16:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:44 vm07 bash[23367]: audit 2026-03-10T10:16:44.000782+0000 mon.c (mon.2) 98 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:44 vm07 bash[23367]: audit 2026-03-10T10:16:44.000782+0000 mon.c (mon.2) 98 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:44 vm04 bash[28289]: audit 2026-03-10T10:16:43.225572+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? 192.168.123.104:0/4063617743' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:44 vm04 bash[28289]: audit 2026-03-10T10:16:43.225572+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? 192.168.123.104:0/4063617743' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:44 vm04 bash[28289]: audit 2026-03-10T10:16:43.225602+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:44 vm04 bash[28289]: audit 2026-03-10T10:16:43.225602+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:44 vm04 bash[28289]: audit 2026-03-10T10:16:43.225618+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:44 vm04 bash[28289]: audit 2026-03-10T10:16:43.225618+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:44 vm04 bash[28289]: cluster 2026-03-10T10:16:43.233260+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:44 vm04 bash[28289]: cluster 2026-03-10T10:16:43.233260+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:44 vm04 bash[28289]: audit 2026-03-10T10:16:44.000782+0000 mon.c (mon.2) 98 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:44 vm04 bash[28289]: audit 2026-03-10T10:16:44.000782+0000 mon.c (mon.2) 98 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:44 vm04 bash[20742]: audit 2026-03-10T10:16:43.225572+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? 192.168.123.104:0/4063617743' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:44 vm04 bash[20742]: audit 2026-03-10T10:16:43.225572+0000 mon.a (mon.0) 1151 : audit [INF] from='client.? 192.168.123.104:0/4063617743' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleWritePP_vm04-59259-5","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:44 vm04 bash[20742]: audit 2026-03-10T10:16:43.225602+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:44 vm04 bash[20742]: audit 2026-03-10T10:16:43.225602+0000 mon.a (mon.0) 1152 : audit [INF] from='client.? 192.168.123.104:0/541847004' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTest_vm04-59252-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:44 vm04 bash[20742]: audit 2026-03-10T10:16:43.225618+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:44 vm04 bash[20742]: audit 2026-03-10T10:16:43.225618+0000 mon.a (mon.0) 1153 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ListObjectsManyvm04-60174-3"}]': finished 2026-03-10T10:16:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:44 vm04 bash[20742]: cluster 2026-03-10T10:16:43.233260+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T10:16:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:44 vm04 bash[20742]: cluster 2026-03-10T10:16:43.233260+0000 mon.a (mon.0) 1154 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in 2026-03-10T10:16:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:44 vm04 bash[20742]: audit 2026-03-10T10:16:44.000782+0000 mon.c (mon.2) 98 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:44.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:44 vm04 bash[20742]: audit 2026-03-10T10:16:44.000782+0000 mon.c (mon.2) 98 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: Running main() from gmock_main.cc 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [==========] Running 8 tests from 2 test suites. 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [----------] Global test environment set-up. 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ RUN ] LibradosCWriteOps.NewDelete 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ OK ] LibradosCWriteOps.NewDelete (0 ms) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [----------] 1 test from LibradosCWriteOps (0 ms total) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.assertExists 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.assertExists (2737 ms) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteOpAssertVersion 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteOpAssertVersion (3202 ms) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Xattrs 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Xattrs (3095 ms) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Write 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Write (2706 ms) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.Exec 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.Exec (3191 
ms) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.WriteSame 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.WriteSame (2992 ms) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ RUN ] LibRadosCWriteOps.CmpExt 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ OK ] LibRadosCWriteOps.CmpExt (6904 ms) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [----------] 7 tests from LibRadosCWriteOps (24827 ms total) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [----------] Global test environment tear-down 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [==========] 8 tests from 2 test suites ran. (24827 ms total) 2026-03-10T10:16:45.310 INFO:tasks.workunit.client.0.vm04.stdout: api_c_write_operations: [ PASSED ] 8 tests. 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:45 vm04 bash[28289]: cluster 2026-03-10T10:16:43.986399+0000 mgr.y (mgr.24422) 119 : cluster [DBG] pgmap v87: 744 pgs: 64 creating+peering, 56 creating+activating, 1 active+clean+snaptrim, 623 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 4.5 KiB/s wr, 11 op/s 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:45 vm04 bash[28289]: cluster 2026-03-10T10:16:43.986399+0000 mgr.y (mgr.24422) 119 : cluster [DBG] pgmap v87: 744 pgs: 64 creating+peering, 56 creating+activating, 1 active+clean+snaptrim, 623 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 4.5 KiB/s wr, 11 op/s 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:45 vm04 bash[28289]: cluster 2026-03-10T10:16:44.311478+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:45 vm04 bash[28289]: cluster 2026-03-10T10:16:44.311478+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:45 vm04 bash[28289]: audit 2026-03-10T10:16:44.346013+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:45 vm04 bash[28289]: audit 2026-03-10T10:16:44.346013+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:45 vm04 bash[28289]: audit 2026-03-10T10:16:45.001926+0000 mon.c (mon.2) 99 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:45 vm04 bash[28289]: audit 2026-03-10T10:16:45.001926+0000 mon.c (mon.2) 99 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:45 vm04 bash[20742]: cluster 2026-03-10T10:16:43.986399+0000 mgr.y (mgr.24422) 119 : cluster [DBG] pgmap v87: 744 pgs: 64 creating+peering, 56 creating+activating, 1 active+clean+snaptrim, 623 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 4.5 KiB/s wr, 11 op/s 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:45 vm04 bash[20742]: cluster 2026-03-10T10:16:43.986399+0000 mgr.y (mgr.24422) 119 : cluster [DBG] pgmap v87: 744 pgs: 64 creating+peering, 56 creating+activating, 1 active+clean+snaptrim, 623 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 4.5 KiB/s wr, 11 op/s 2026-03-10T10:16:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:45 vm04 bash[20742]: cluster 2026-03-10T10:16:44.311478+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T10:16:45.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:45 vm04 bash[20742]: cluster 2026-03-10T10:16:44.311478+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T10:16:45.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:45 vm04 bash[20742]: audit 2026-03-10T10:16:44.346013+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:45.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:45 vm04 bash[20742]: audit 2026-03-10T10:16:44.346013+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:45.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:45 vm04 bash[20742]: audit 2026-03-10T10:16:45.001926+0000 mon.c (mon.2) 99 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:45.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:45 vm04 bash[20742]: audit 2026-03-10T10:16:45.001926+0000 mon.c (mon.2) 99 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:45 vm07 bash[23367]: cluster 2026-03-10T10:16:43.986399+0000 mgr.y (mgr.24422) 119 : cluster [DBG] pgmap v87: 744 pgs: 64 creating+peering, 56 creating+activating, 1 active+clean+snaptrim, 623 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 4.5 KiB/s wr, 11 op/s 2026-03-10T10:16:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:45 vm07 bash[23367]: cluster 2026-03-10T10:16:43.986399+0000 mgr.y (mgr.24422) 119 : cluster [DBG] pgmap v87: 744 pgs: 64 creating+peering, 56 creating+activating, 1 active+clean+snaptrim, 623 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail; 4.0 KiB/s rd, 4.5 KiB/s wr, 11 op/s 2026-03-10T10:16:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:45 vm07 bash[23367]: cluster 2026-03-10T10:16:44.311478+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T10:16:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:45 vm07 bash[23367]: cluster 2026-03-10T10:16:44.311478+0000 mon.a (mon.0) 1155 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T10:16:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:45 vm07 bash[23367]: audit 2026-03-10T10:16:44.346013+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:45 vm07 bash[23367]: audit 2026-03-10T10:16:44.346013+0000 mon.a (mon.0) 1156 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:45 vm07 bash[23367]: audit 2026-03-10T10:16:45.001926+0000 mon.c (mon.2) 99 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:45 vm07 bash[23367]: audit 2026-03-10T10:16:45.001926+0000 mon.c (mon.2) 99 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:46.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:45.244557+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:45.244557+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-9","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: cluster 2026-03-10T10:16:45.295332+0000 mon.a (mon.0) 1158 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: cluster 2026-03-10T10:16:45.295332+0000 mon.a (mon.0) 1158 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:45.327033+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.104:0/1129958952' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm04-59259-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:45.327033+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.104:0/1129958952' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm04-59259-6","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:45.337823+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? 192.168.123.104:0/2180463816' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm04-59252-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:45.337823+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? 192.168.123.104:0/2180463816' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm04-59252-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:45.508161+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:45.508161+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:45.555667+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4"}]: dispatch 2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:45.555667+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? 
192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4"}]: dispatch
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: cluster 2026-03-10T10:16:45.986831+0000 mgr.y (mgr.24422) 120 : cluster [DBG] pgmap v90: 648 pgs: 160 unknown, 1 active+clean+snaptrim, 487 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.5 KiB/s wr, 8 op/s
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:46.003805+0000 mon.c (mon.2) 100 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:46.188014+0000 mon.a (mon.0) 1163 : audit [INF] from='client.? 192.168.123.104:0/1129958952' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm04-59259-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:46.188088+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? 192.168.123.104:0/2180463816' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm04-59252-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:46.188110+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:46.188126+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4"}]': finished
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: cluster 2026-03-10T10:16:46.192865+0000 mon.a (mon.0) 1167 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:46.199972+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-9"}]: dispatch
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:46.203870+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6"}]: dispatch
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:46.233884+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.104:0/2706877323' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-8","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:46 vm04 bash[20742]: audit 2026-03-10T10:16:46.245043+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-8","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:45.244557+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: cluster 2026-03-10T10:16:45.295332+0000 mon.a (mon.0) 1158 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:45.327033+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.104:0/1129958952' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm04-59259-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:45.337823+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? 192.168.123.104:0/2180463816' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm04-59252-9","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:45.508161+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:16:46.706 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:45.555667+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4"}]: dispatch
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: cluster 2026-03-10T10:16:45.986831+0000 mgr.y (mgr.24422) 120 : cluster [DBG] pgmap v90: 648 pgs: 160 unknown, 1 active+clean+snaptrim, 487 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.5 KiB/s wr, 8 op/s
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:46.003805+0000 mon.c (mon.2) 100 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:46.188014+0000 mon.a (mon.0) 1163 : audit [INF] from='client.? 192.168.123.104:0/1129958952' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm04-59259-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:46.188088+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? 192.168.123.104:0/2180463816' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm04-59252-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:46.188110+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:46.188126+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4"}]': finished
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: cluster 2026-03-10T10:16:46.192865+0000 mon.a (mon.0) 1167 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:46.199972+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-9"}]: dispatch
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:46.203870+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6"}]: dispatch
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:46.233884+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.104:0/2706877323' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-8","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:46.707 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:46 vm04 bash[28289]: audit 2026-03-10T10:16:46.245043+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-8","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:45.244557+0000 mon.a (mon.0) 1157 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: cluster 2026-03-10T10:16:45.295332+0000 mon.a (mon.0) 1158 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in
2026-03-10T10:16:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:45.327033+0000 mon.a (mon.0) 1159 : audit [INF] from='client.? 192.168.123.104:0/1129958952' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm04-59259-6","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:45.337823+0000 mon.a (mon.0) 1160 : audit [INF] from='client.? 192.168.123.104:0/2180463816' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm04-59252-9","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:45.508161+0000 mon.a (mon.0) 1161 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:16:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:45.555667+0000 mon.a (mon.0) 1162 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4"}]: dispatch
2026-03-10T10:16:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: cluster 2026-03-10T10:16:45.986831+0000 mgr.y (mgr.24422) 120 : cluster [DBG] pgmap v90: 648 pgs: 160 unknown, 1 active+clean+snaptrim, 487 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 3.5 KiB/s wr, 8 op/s
2026-03-10T10:16:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:46.003805+0000 mon.c (mon.2) 100 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:46.188014+0000 mon.a (mon.0) 1163 : audit [INF] from='client.? 192.168.123.104:0/1129958952' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "WaitForSafePP_vm04-59259-6","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:46.188088+0000 mon.a (mon.0) 1164 : audit [INF] from='client.? 192.168.123.104:0/2180463816' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTrip_vm04-59252-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:46.188110+0000 mon.a (mon.0) 1165 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:16:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:46.188126+0000 mon.a (mon.0) 1166 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4"}]': finished
2026-03-10T10:16:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: cluster 2026-03-10T10:16:46.192865+0000 mon.a (mon.0) 1167 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in
2026-03-10T10:16:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:46.199972+0000 mon.a (mon.0) 1168 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-9"}]: dispatch
2026-03-10T10:16:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:46.203870+0000 mon.a (mon.0) 1169 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6"}]: dispatch
2026-03-10T10:16:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:46.233884+0000 mon.b (mon.1) 119 : audit [INF] from='client.? 192.168.123.104:0/2706877323' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-8","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:46 vm07 bash[23367]: audit 2026-03-10T10:16:46.245043+0000 mon.a (mon.0) 1170 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-8","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:47.817 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [==========] Running 4 tests from 1 test suite.
2026-03-10T10:16:47.817 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [----------] Global test environment set-up.
2026-03-10T10:16:47.817 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [----------] 4 tests from LibRadosService
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [ RUN      ] LibRadosService.RegisterEarly
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [       OK ] LibRadosService.RegisterEarly (5032 ms)
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [ RUN      ] LibRadosService.RegisterLate
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [       OK ] LibRadosService.RegisterLate (23 ms)
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [ RUN      ] LibRadosService.StatusFormat
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: cluster:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: id: e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: health: HEALTH_WARN
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 2 stray daemon(s) not managed by cephadm
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 12 pool(s) do not have an application enabled
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: services:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 7m)
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: mgr: y(active, since 103s), standbys: x
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: osd: 8 osds: 8 up (since 2m), 8 in (since 2m)
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: laundry: 2 daemons active (1 hosts)
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones)
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: data:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: pools: 26 pools, 700 pgs
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: objects: 301 objects, 465 KiB
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: usage: 243 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: pgs: 28.429% pgs not active
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 497 active+clean
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 157 creating+peering
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 42 creating+activating
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 4 active
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: io:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: client: 1.9 KiB/s rd, 19 KiB/s wr, 9 op/s rd, 30 op/s wr
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: cluster:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: id: e4c1c9d6-1c68-11f1-a9bd-116050875839
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: health: HEALTH_WARN
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 2 stray daemon(s) not managed by cephadm
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 12 pool(s) do not have an application enabled
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: services:
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: mon: 3 daemons, quorum a,b,c (age 7m)
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: mgr: y(active, since 105s), standbys: x
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: osd: 8 osds: 8 up (since 2m), 8 in (since 2m)
2026-03-10T10:16:47.818 INFO:tasks.workunit.client.0.vm04.stdout: api_service: foo: 16 portals active (1 hosts, 3 zones)
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: laundry: 1 daemon active (1 hosts)
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: rgw: 1 daemon active (1 hosts, 1 zones)
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: data:
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: pools: 29 pools, 868 pgs
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: objects: 244 objects, 459 KiB
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: usage: 243 MiB used, 160 GiB / 160 GiB avail
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: pgs: 47.926% pgs unknown
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 9.101% pgs not active
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 416 unknown
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 369 active+clean
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 42 creating+activating
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 37 creating+peering
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: 4 active
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: io:
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: client: 2.7 KiB/s rd, 14 KiB/s wr, 11 op/s rd, 28 op/s wr
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [       OK ] LibRadosService.StatusFormat (2307 ms)
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [ RUN      ] LibRadosService.Status
2026-03-10T10:16:47.819 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [       OK ] LibRadosService.Status (20028 ms)
2026-03-10T10:16:47.821 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [----------] 4 tests from LibRadosService (27390 ms total)
2026-03-10T10:16:47.821 INFO:tasks.workunit.client.0.vm04.stdout: api_service:
2026-03-10T10:16:47.821 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [----------] Global test environment tear-down
2026-03-10T10:16:47.821 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [==========] 4 tests from 1 test suite ran. (27390 ms total)
2026-03-10T10:16:47.821 INFO:tasks.workunit.client.0.vm04.stdout: api_service: [  PASSED  ] 4 tests.
2026-03-10T10:16:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:47 vm04 bash[20742]: audit 2026-03-10T10:16:47.004542+0000 mon.c (mon.2) 101 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:47 vm04 bash[28289]: audit 2026-03-10T10:16:47.004542+0000 mon.c (mon.2) 101 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:47 vm07 bash[23367]: audit 2026-03-10T10:16:47.004542+0000 mon.c (mon.2) 101 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:48.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:16:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: cluster 2026-03-10T10:16:47.188455+0000 mon.a (mon.0) 1171 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:47.506696+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-9"}]': finished
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:47.506727+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6"}]': finished
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:47.506751+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-8","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: cluster 2026-03-10T10:16:47.511217+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:47.516478+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-9", "mode": "writeback"}]: dispatch
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:47.556570+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm04-59675-6", "pool2": "test-rados-api-vm04-59675-6", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: cluster 2026-03-10T10:16:47.987265+0000 mgr.y (mgr.24422) 121 : cluster [DBG] pgmap v93: 552 pgs: 128 unknown, 424 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.005269+0000 mon.c (mon.2) 102 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.196158+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.196465+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.200340+0000 mgr.y (mgr.24422) 122 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.208845+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.209275+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.209931+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.210295+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: cluster 2026-03-10T10:16:48.507426+0000 mon.a (mon.0) 1181 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:16:48.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.511586+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-9", "mode": "writeback"}]': finished
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.511618+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm04-59675-6", "pool2": "test-rados-api-vm04-59675-6", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.511642+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: cluster 2026-03-10T10:16:48.516030+0000 mon.a (mon.0) 1185 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.529475+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? 192.168.123.104:0/2262249675' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm04-59259-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:48 vm04 bash[20742]: audit 2026-03-10T10:16:48.548553+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.104:0/3084835803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: cluster 2026-03-10T10:16:47.188455+0000 mon.a (mon.0) 1171 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:47.506696+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-9"}]': finished
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:47.506727+0000 mon.a (mon.0) 1173 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP_vm04-59675-4", "tierpool": "test-rados-api-vm04-59675-6"}]': finished
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:47.506751+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-8","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: cluster 2026-03-10T10:16:47.511217+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:47.516478+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-9", "mode": "writeback"}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:47.556570+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm04-59675-6", "pool2": "test-rados-api-vm04-59675-6", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: cluster 2026-03-10T10:16:47.987265+0000 mgr.y (mgr.24422) 121 : cluster [DBG] pgmap v93: 552 pgs: 128 unknown, 424 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.005269+0000 mon.c (mon.2) 102 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.196158+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.196465+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.200340+0000 mgr.y (mgr.24422) 122 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.208845+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.209275+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.209931+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.210295+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: cluster 2026-03-10T10:16:48.507426+0000 mon.a (mon.0) 1181 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.511586+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-9", "mode": "writeback"}]': finished
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.511618+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm04-59675-6", "pool2": "test-rados-api-vm04-59675-6", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.511642+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: cluster 2026-03-10T10:16:48.516030+0000 mon.a (mon.0) 1185 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.529475+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? 192.168.123.104:0/2262249675' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm04-59259-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:48.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:48 vm04 bash[28289]: audit 2026-03-10T10:16:48.548553+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.104:0/3084835803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: cluster 2026-03-10T10:16:47.188455+0000 mon.a (mon.0) 1171 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:47.506696+0000 mon.a (mon.0) 1172 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-9"}]': finished
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:47.506727+0000 mon.a (mon.0) 1173 : audit [INF] from='client.?
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:47.506751+0000 mon.a (mon.0) 1174 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-8","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: cluster 2026-03-10T10:16:47.511217+0000 mon.a (mon.0) 1175 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:47.516478+0000 mon.a (mon.0) 1176 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-9", "mode": "writeback"}]: dispatch
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:47.556570+0000 mon.a (mon.0) 1177 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "test-rados-api-vm04-59675-6", "pool2": "test-rados-api-vm04-59675-6", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: cluster 2026-03-10T10:16:47.987265+0000 mgr.y (mgr.24422) 121 : cluster [DBG] pgmap v93: 552 pgs: 128 unknown, 424 active+clean; 144 MiB data, 849 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.005269+0000 mon.c (mon.2) 102 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.196158+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.196465+0000 mon.a (mon.0) 1178 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.200340+0000 mgr.y (mgr.24422) 122 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.208845+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.209275+0000 mon.a (mon.0) 1179 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.209931+0000 mon.c (mon.2) 105 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.210295+0000 mon.a (mon.0) 1180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:16:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: cluster 2026-03-10T10:16:48.507426+0000 mon.a (mon.0) 1181 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:16:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.511586+0000 mon.a (mon.0) 1182 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-9", "mode": "writeback"}]': finished
2026-03-10T10:16:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.511618+0000 mon.a (mon.0) 1183 : audit [INF] from='client.? 192.168.123.104:0/3888875293' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "test-rados-api-vm04-59675-6", "pool2": "test-rados-api-vm04-59675-6", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T10:16:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.511642+0000 mon.a (mon.0) 1184 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:16:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: cluster 2026-03-10T10:16:48.516030+0000 mon.a (mon.0) 1185 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in
2026-03-10T10:16:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.529475+0000 mon.a (mon.0) 1186 : audit [INF] from='client.? 192.168.123.104:0/2262249675' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm04-59259-7","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:48 vm07 bash[23367]: audit 2026-03-10T10:16:48.548553+0000 mon.b (mon.1) 120 : audit [INF] from='client.? 192.168.123.104:0/3084835803' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout:tch_notify_pp: flushed
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/0 (3003 ms)
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: trying...
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: handle_notify cookie 94713271522480 notify_id 347892350982 notifier_gid 14982
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: timed out
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: flushing
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: flushed
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify2Timeout/1 (3067 ms)
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: List watches
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: notify2
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: handle_notify cookie 94713271505328 notify_id 360777252868 notifier_gid 14982
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: notify2 done
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: watch_check
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: unwatch2
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: flushing
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: done
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0 (3034 ms)
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ RUN ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: List watches
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: notify2
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: handle_notify cookie 94713271505328 notify_id 373662154759 notifier_gid 14982
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: notify2 done
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: watch_check
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: unwatch2
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: flushing
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: done
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ OK ] LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1 (3045 ms)
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [----------] 14 tests from LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP (16032 ms total)
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp:
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [----------] Global test environment tear-down
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [==========] 16 tests from 2 test suites ran. (29402 ms total)
2026-03-10T10:16:49.801 INFO:tasks.workunit.client.0.vm04.stdout: api_watch_notify_pp: [ PASSED ] 16 tests.
2026-03-10T10:16:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:48.569228+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: cluster 2026-03-10T10:16:48.576480+0000 mon.a (mon.0) 1188 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:48.721628+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:48.721889+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:48.721889+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:48.770815+0000 mon.a (mon.0) 1190 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:48.770815+0000 mon.a (mon.0) 1190 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:49.006110+0000 mon.c (mon.2) 107 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:49.006110+0000 mon.c (mon.2) 107 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:49.515962+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 192.168.123.104:0/2262249675' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm04-59259-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:49.515962+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 192.168.123.104:0/2262249675' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm04-59259-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:48.569228+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:48.569228+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: cluster 2026-03-10T10:16:48.576480+0000 mon.a (mon.0) 1188 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: cluster 2026-03-10T10:16:48.576480+0000 mon.a (mon.0) 1188 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:48.721628+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:48.721628+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:48.721889+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:48.721889+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:48.770815+0000 mon.a (mon.0) 1190 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:48.770815+0000 mon.a (mon.0) 1190 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:49.006110+0000 mon.c (mon.2) 107 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:49.006110+0000 mon.c (mon.2) 107 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:49.515962+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 192.168.123.104:0/2262249675' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm04-59259-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:49.515962+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 192.168.123.104:0/2262249675' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm04-59259-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:49.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:49.515998+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:49.515998+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:49.516019+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: audit 2026-03-10T10:16:49.516019+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: cluster 2026-03-10T10:16:49.583547+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:49 vm04 bash[20742]: cluster 2026-03-10T10:16:49.583547+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:49.515998+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:49.515998+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:49.516019+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: audit 2026-03-10T10:16:49.516019+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: cluster 2026-03-10T10:16:49.583547+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T10:16:49.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:49 vm04 bash[28289]: cluster 2026-03-10T10:16:49.583547+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:48.569228+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:48.569228+0000 mon.a (mon.0) 1187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: cluster 2026-03-10T10:16:48.576480+0000 mon.a (mon.0) 1188 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: cluster 2026-03-10T10:16:48.576480+0000 mon.a (mon.0) 1188 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:48.721628+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:48.721628+0000 mon.c (mon.2) 106 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:48.721889+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:48.721889+0000 mon.a (mon.0) 1189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:48.770815+0000 mon.a (mon.0) 1190 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:48.770815+0000 mon.a (mon.0) 1190 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:49.006110+0000 mon.c (mon.2) 107 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:49.006110+0000 mon.c (mon.2) 107 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:49.515962+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 192.168.123.104:0/2262249675' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm04-59259-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:49.515962+0000 mon.a (mon.0) 1191 : audit [INF] from='client.? 192.168.123.104:0/2262249675' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP_vm04-59259-7","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:49.515998+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:49.515998+0000 mon.a (mon.0) 1192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattr_vm04-59252-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: audit 2026-03-10T10:16:49.516019+0000 mon.a (mon.0) 1193 : audit [INF] from='client.? 
2026-03-10T10:16:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:49 vm07 bash[23367]: cluster 2026-03-10T10:16:49.583547+0000 mon.a (mon.0) 1194 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in
2026-03-10T10:16:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:50 vm04 bash[28289]: audit 2026-03-10T10:16:49.719351+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9"}]: dispatch
2026-03-10T10:16:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:50 vm04 bash[28289]: cluster 2026-03-10T10:16:49.987712+0000 mgr.y (mgr.24422) 123 : cluster [DBG] pgmap v96: 644 pgs: 2 creating+activating, 68 creating+peering, 110 unknown, 464 active+clean; 144 MiB data, 850 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:50 vm04 bash[28289]: audit 2026-03-10T10:16:50.007456+0000 mon.c (mon.2) 108 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:50 vm04 bash[28289]: cluster 2026-03-10T10:16:50.516280+0000 mon.a (mon.0) 1196 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:50 vm04 bash[28289]: audit 2026-03-10T10:16:50.539128+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]': finished
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:50 vm04 bash[28289]: audit 2026-03-10T10:16:50.539168+0000 mon.a (mon.0) 1198 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9"}]': finished
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:50 vm04 bash[28289]: cluster 2026-03-10T10:16:50.541693+0000 mon.a (mon.0) 1199 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:50 vm04 bash[20742]: audit 2026-03-10T10:16:49.719351+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9"}]: dispatch
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:50 vm04 bash[20742]: cluster 2026-03-10T10:16:49.987712+0000 mgr.y (mgr.24422) 123 : cluster [DBG] pgmap v96: 644 pgs: 2 creating+activating, 68 creating+peering, 110 unknown, 464 active+clean; 144 MiB data, 850 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:50 vm04 bash[20742]: audit 2026-03-10T10:16:50.007456+0000 mon.c (mon.2) 108 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:50 vm04 bash[20742]: cluster 2026-03-10T10:16:50.516280+0000 mon.a (mon.0) 1196 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:50 vm04 bash[20742]: audit 2026-03-10T10:16:50.539128+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]': finished
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:50 vm04 bash[20742]: audit 2026-03-10T10:16:50.539168+0000 mon.a (mon.0) 1198 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9"}]': finished
2026-03-10T10:16:50.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:50 vm04 bash[20742]: cluster 2026-03-10T10:16:50.541693+0000 mon.a (mon.0) 1199 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in
2026-03-10T10:16:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:50 vm07 bash[23367]: audit 2026-03-10T10:16:49.719351+0000 mon.a (mon.0) 1195 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9"}]: dispatch
2026-03-10T10:16:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:50 vm07 bash[23367]: cluster 2026-03-10T10:16:49.987712+0000 mgr.y (mgr.24422) 123 : cluster [DBG] pgmap v96: 644 pgs: 2 creating+activating, 68 creating+peering, 110 unknown, 464 active+clean; 144 MiB data, 850 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:50 vm07 bash[23367]: audit 2026-03-10T10:16:50.007456+0000 mon.c (mon.2) 108 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:50 vm07 bash[23367]: cluster 2026-03-10T10:16:50.516280+0000 mon.a (mon.0) 1196 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:16:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:50 vm07 bash[23367]: audit 2026-03-10T10:16:50.539128+0000 mon.a (mon.0) 1197 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsEC_vm04-59531-10", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]': finished
2026-03-10T10:16:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:50 vm07 bash[23367]: audit 2026-03-10T10:16:50.539168+0000 mon.a (mon.0) 1198 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-9"}]': finished
2026-03-10T10:16:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:50 vm07 bash[23367]: cluster 2026-03-10T10:16:50.541693+0000 mon.a (mon.0) 1199 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [==========] Running 4 tests from 1 test suite.
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [----------] Global test environment set-up.
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterEarly
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterEarly (6024 ms)
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [ RUN ] LibRadosServicePP.RegisterLate
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [ OK ] LibRadosServicePP.RegisterLate (25 ms)
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Status
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [ OK ] LibRadosServicePP.Status (20031 ms)
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [ RUN ] LibRadosServicePP.Close
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: attempt 0 of 20
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [ OK ] LibRadosServicePP.Close (5443 ms)
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [----------] 4 tests from LibRadosServicePP (31523 ms total)
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp:
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [----------] Global test environment tear-down
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [==========] 4 tests from 1 test suite ran. (31523 ms total)
2026-03-10T10:16:51.997 INFO:tasks.workunit.client.0.vm04.stdout: api_service_pp: [ PASSED ] 4 tests.
2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.008105+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: cluster 2026-03-10T10:16:51.193451+0000 mon.a (mon.0) 1200 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: cluster 2026-03-10T10:16:51.193451+0000 mon.a (mon.0) 1200 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.229228+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.229228+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.231535+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.231535+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.246568+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.104:0/402921594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.246568+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.104:0/402921594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.319197+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.319197+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? 
192.168.123.104:0/677905614' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.319335+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:51 vm04 bash[20742]: audit 2026-03-10T10:16:51.319335+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.008105+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.008105+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: cluster 2026-03-10T10:16:51.193451+0000 mon.a (mon.0) 1200 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: cluster 2026-03-10T10:16:51.193451+0000 mon.a (mon.0) 1200 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.229228+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.229228+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.210 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.231535+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.231535+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.246568+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 
192.168.123.104:0/402921594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.246568+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.104:0/402921594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.319197+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.319197+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.319335+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:51 vm04 bash[28289]: audit 2026-03-10T10:16:51.319335+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.008105+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.008105+0000 mon.c (mon.2) 109 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: cluster 2026-03-10T10:16:51.193451+0000 mon.a (mon.0) 1200 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: cluster 2026-03-10T10:16:51.193451+0000 mon.a (mon.0) 1200 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.229228+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.229228+0000 mon.b (mon.1) 121 : audit [INF] from='client.? 
192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.231535+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.231535+0000 mon.a (mon.0) 1201 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.246568+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.104:0/402921594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.246568+0000 mon.c (mon.2) 110 : audit [INF] from='client.? 192.168.123.104:0/402921594' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.319197+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.319197+0000 mon.a (mon.0) 1202 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.319335+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:51 vm07 bash[23367]: audit 2026-03-10T10:16:51.319335+0000 mon.a (mon.0) 1203 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:51.980484+0000 mon.a (mon.0) 1204 : audit [DBG] from='client.? 192.168.123.104:0/2968822430' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:51.980484+0000 mon.a (mon.0) 1204 : audit [DBG] from='client.? 
192.168.123.104:0/2968822430' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:51.980638+0000 mgr.y (mgr.24422) 124 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:51.980638+0000 mgr.y (mgr.24422) 124 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: cluster 2026-03-10T10:16:51.988167+0000 mgr.y (mgr.24422) 125 : cluster [DBG] pgmap v99: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 850 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: cluster 2026-03-10T10:16:51.988167+0000 mgr.y (mgr.24422) 125 : cluster [DBG] pgmap v99: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 850 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:52.008995+0000 mon.c (mon.2) 111 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:52.008995+0000 mon.c (mon.2) 111 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:52.199487+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:52.199487+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:52.199518+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:52.199518+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:52.199536+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:52.199536+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: cluster 2026-03-10T10:16:52.202573+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: cluster 2026-03-10T10:16:52.202573+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T10:16:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:52.240415+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:52 vm04 bash[20742]: audit 2026-03-10T10:16:52.240415+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:16:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:16:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:51.980484+0000 mon.a (mon.0) 1204 : audit [DBG] from='client.? 192.168.123.104:0/2968822430' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:51.980484+0000 mon.a (mon.0) 1204 : audit [DBG] from='client.? 192.168.123.104:0/2968822430' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:51.980638+0000 mgr.y (mgr.24422) 124 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:51.980638+0000 mgr.y (mgr.24422) 124 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: cluster 2026-03-10T10:16:51.988167+0000 mgr.y (mgr.24422) 125 : cluster [DBG] pgmap v99: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 850 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: cluster 2026-03-10T10:16:51.988167+0000 mgr.y (mgr.24422) 125 : cluster [DBG] pgmap v99: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 850 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:52.008995+0000 mon.c (mon.2) 111 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:52.008995+0000 mon.c (mon.2) 111 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:52.199487+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:52.199487+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:52.199518+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:52.199518+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:52.199536+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:52.199536+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: cluster 2026-03-10T10:16:52.202573+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: cluster 2026-03-10T10:16:52.202573+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:52.240415+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:52 vm04 bash[28289]: audit 2026-03-10T10:16:52.240415+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:51.980484+0000 mon.a (mon.0) 1204 : audit [DBG] from='client.? 192.168.123.104:0/2968822430' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:51.980484+0000 mon.a (mon.0) 1204 : audit [DBG] from='client.? 192.168.123.104:0/2968822430' entity='client.admin' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:51.980638+0000 mgr.y (mgr.24422) 124 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:51.980638+0000 mgr.y (mgr.24422) 124 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "service dump"}]: dispatch 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: cluster 2026-03-10T10:16:51.988167+0000 mgr.y (mgr.24422) 125 : cluster [DBG] pgmap v99: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 850 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: cluster 2026-03-10T10:16:51.988167+0000 mgr.y (mgr.24422) 125 : cluster [DBG] pgmap v99: 588 pgs: 200 unknown, 388 active+clean; 144 MiB data, 850 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:52.008995+0000 mon.c (mon.2) 111 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:52.008995+0000 mon.c (mon.2) 111 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:52.199487+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:52.199487+0000 mon.a (mon.0) 1205 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59507-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:52.199518+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:52.199518+0000 mon.a (mon.0) 1206 : audit [INF] from='client.? 192.168.123.104:0/677905614' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP2_vm04-59259-8","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:52.199536+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:52.199536+0000 mon.a (mon.0) 1207 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrIter_vm04-59252-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: cluster 2026-03-10T10:16:52.202573+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: cluster 2026-03-10T10:16:52.202573+0000 mon.a (mon.0) 1208 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:52.240415+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:52 vm07 bash[23367]: audit 2026-03-10T10:16:52.240415+0000 mon.a (mon.0) 1209 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:53.270 INFO:tasks.workunit.client.0.vm04.stdout: misc: Running main() from gmock_main.cc 2026-03-10T10:16:53.270 INFO:tasks.workunit.client.0.vm04.stdout: misc: [==========] Running 12 tests from 1 test suite. 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [----------] Global test environment set-up. 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [----------] 12 tests from NeoRadosMisc 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.Version 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.Version (1590 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.WaitOSDMap 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.WaitOSDMap (1918 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.LongName 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.LongName (3305 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.LongLocator 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.LongLocator (3069 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.LongNamespace 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.LongNamespace (2666 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.LongAttrName 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.LongAttrName (3137 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.Exec 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.Exec (2987 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.Operate1 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.Operate1 (2891 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.Operate2 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.Operate2 (3042 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.BigObject 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.BigObject (3206 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.BigAttr 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.BigAttr (2084 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ RUN ] NeoRadosMisc.WriteSame 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ OK ] NeoRadosMisc.WriteSame (2659 ms) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [----------] 12 tests from NeoRadosMisc (32554 ms total) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [----------] Global test environment tear-down 
2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [==========] 12 tests from 1 test suite ran. (32554 ms total) 2026-03-10T10:16:53.271 INFO:tasks.workunit.client.0.vm04.stdout: misc: [ PASSED ] 12 tests. 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:53 vm04 bash[20742]: audit 2026-03-10T10:16:53.021544+0000 mon.c (mon.2) 112 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:53 vm04 bash[20742]: audit 2026-03-10T10:16:53.021544+0000 mon.c (mon.2) 112 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:53 vm04 bash[20742]: audit 2026-03-10T10:16:53.204107+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:53 vm04 bash[20742]: audit 2026-03-10T10:16:53.204107+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:53 vm04 bash[20742]: cluster 2026-03-10T10:16:53.207357+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:53 vm04 bash[20742]: cluster 2026-03-10T10:16:53.207357+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:53 vm04 bash[20742]: audit 2026-03-10T10:16:53.254462+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:53 vm04 bash[20742]: audit 2026-03-10T10:16:53.254462+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:53 vm04 bash[20742]: audit 2026-03-10T10:16:53.260296+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:53 vm04 bash[20742]: audit 2026-03-10T10:16:53.260296+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:53 vm04 bash[28289]: audit 2026-03-10T10:16:53.021544+0000 mon.c (mon.2) 112 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:53 vm04 bash[28289]: audit 2026-03-10T10:16:53.021544+0000 mon.c (mon.2) 112 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:53 vm04 bash[28289]: audit 2026-03-10T10:16:53.204107+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:53 vm04 bash[28289]: audit 2026-03-10T10:16:53.204107+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:53 vm04 bash[28289]: cluster 2026-03-10T10:16:53.207357+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:53 vm04 bash[28289]: cluster 2026-03-10T10:16:53.207357+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:53 vm04 bash[28289]: audit 2026-03-10T10:16:53.254462+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:53 vm04 bash[28289]: audit 2026-03-10T10:16:53.254462+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:53 vm04 bash[28289]: audit 2026-03-10T10:16:53.260296+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:53 vm04 bash[28289]: audit 2026-03-10T10:16:53.260296+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:53 vm07 bash[23367]: audit 2026-03-10T10:16:53.021544+0000 mon.c (mon.2) 112 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:53 vm07 bash[23367]: audit 2026-03-10T10:16:53.021544+0000 mon.c (mon.2) 112 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:53 vm07 bash[23367]: audit 2026-03-10T10:16:53.204107+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:53 vm07 bash[23367]: audit 2026-03-10T10:16:53.204107+0000 mon.a (mon.0) 1210 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-11","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:53 vm07 bash[23367]: cluster 2026-03-10T10:16:53.207357+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T10:16:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:53 vm07 bash[23367]: cluster 2026-03-10T10:16:53.207357+0000 mon.a (mon.0) 1211 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T10:16:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:53 vm07 bash[23367]: audit 2026-03-10T10:16:53.254462+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:53 vm07 bash[23367]: audit 2026-03-10T10:16:53.254462+0000 mon.b (mon.1) 122 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:53 vm07 bash[23367]: audit 2026-03-10T10:16:53.260296+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:53 vm07 bash[23367]: audit 2026-03-10T10:16:53.260296+0000 mon.a (mon.0) 1212 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]: dispatch 2026-03-10T10:16:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: cluster 2026-03-10T10:16:53.989239+0000 mgr.y (mgr.24422) 126 : cluster [DBG] pgmap v102: 524 pgs: 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T10:16:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: cluster 2026-03-10T10:16:53.989239+0000 mgr.y (mgr.24422) 126 : cluster [DBG] pgmap v102: 524 pgs: 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T10:16:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.023447+0000 mon.c (mon.2) 113 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.023447+0000 mon.c (mon.2) 113 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.208795+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]': finished 2026-03-10T10:16:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.208795+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]': finished 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: cluster 2026-03-10T10:16:54.212391+0000 mon.a (mon.0) 1214 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: cluster 2026-03-10T10:16:54.212391+0000 mon.a (mon.0) 1214 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.249633+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.249633+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.250553+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 
192.168.123.104:0/2814982910' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.250553+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.104:0/2814982910' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.252293+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.252293+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.252624+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.252624+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.253714+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.104:0/1981927154' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.253714+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.104:0/1981927154' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.258333+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.258333+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.349019+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:54 vm04 bash[20742]: audit 2026-03-10T10:16:54.349019+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: cluster 2026-03-10T10:16:53.989239+0000 mgr.y (mgr.24422) 126 : cluster [DBG] pgmap v102: 524 pgs: 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: cluster 2026-03-10T10:16:53.989239+0000 mgr.y (mgr.24422) 126 : cluster [DBG] pgmap v102: 524 pgs: 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.023447+0000 mon.c (mon.2) 113 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.023447+0000 mon.c (mon.2) 113 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.208795+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]': finished 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.208795+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]': finished 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: cluster 2026-03-10T10:16:54.212391+0000 mon.a (mon.0) 1214 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: cluster 2026-03-10T10:16:54.212391+0000 mon.a (mon.0) 1214 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.249633+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 
192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.249633+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.250553+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.104:0/2814982910' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.250553+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.104:0/2814982910' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.252293+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.252293+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.252624+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.252624+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.253714+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.104:0/1981927154' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.253714+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 
2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.258333+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:55.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:54 vm04 bash[28289]: audit 2026-03-10T10:16:54.349019+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: cluster 2026-03-10T10:16:53.989239+0000 mgr.y (mgr.24422) 126 : cluster [DBG] pgmap v102: 524 pgs: 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: audit 2026-03-10T10:16:54.023447+0000 mon.c (mon.2) 113 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: audit 2026-03-10T10:16:54.208795+0000 mon.a (mon.0) 1213 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache", "force_nonempty":""}]': finished
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: cluster 2026-03-10T10:16:54.212391+0000 mon.a (mon.0) 1214 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: audit 2026-03-10T10:16:54.249633+0000 mon.b (mon.1) 123 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: audit 2026-03-10T10:16:54.250553+0000 mon.b (mon.1) 124 : audit [INF] from='client.? 192.168.123.104:0/2814982910' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: audit 2026-03-10T10:16:54.252293+0000 mon.a (mon.0) 1215 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: audit 2026-03-10T10:16:54.252624+0000 mon.a (mon.0) 1216 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: audit 2026-03-10T10:16:54.253714+0000 mon.b (mon.1) 125 : audit [INF] from='client.? 192.168.123.104:0/1981927154' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: audit 2026-03-10T10:16:54.258333+0000 mon.a (mon.0) 1217 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:54 vm07 bash[23367]: audit 2026-03-10T10:16:54.349019+0000 mon.a (mon.0) 1218 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:16:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:55 vm04 bash[28289]: cluster 2026-03-10T10:16:54.883226+0000 mon.a (mon.0) 1219 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:16:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:55 vm04 bash[28289]: audit 2026-03-10T10:16:55.028179+0000 mon.c (mon.2) 114 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:55 vm04 bash[28289]: audit 2026-03-10T10:16:55.212932+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:55 vm04 bash[28289]: audit 2026-03-10T10:16:55.212966+0000 mon.a (mon.0) 1221 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:55 vm04 bash[28289]: audit 2026-03-10T10:16:55.212984+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:55 vm04 bash[28289]: audit 2026-03-10T10:16:55.213001+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:16:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:55 vm04 bash[28289]: cluster 2026-03-10T10:16:55.215696+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-10T10:16:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:55 vm04 bash[28289]: audit 2026-03-10T10:16:55.217010+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-11"}]: dispatch
2026-03-10T10:16:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:55 vm04 bash[28289]: audit 2026-03-10T10:16:55.241699+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]: dispatch
2026-03-10T10:16:56.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:55 vm04 bash[28289]: audit 2026-03-10T10:16:55.243889+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]: dispatch
2026-03-10T10:16:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:55 vm04 bash[20742]: cluster 2026-03-10T10:16:54.883226+0000 mon.a (mon.0) 1219 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:16:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:55 vm04 bash[20742]: audit 2026-03-10T10:16:55.028179+0000 mon.c (mon.2) 114 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:55 vm04 bash[20742]: audit 2026-03-10T10:16:55.212932+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:55 vm04 bash[20742]: audit 2026-03-10T10:16:55.212966+0000 mon.a (mon.0) 1221 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:55 vm04 bash[20742]: audit 2026-03-10T10:16:55.212984+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:55 vm04 bash[20742]: audit 2026-03-10T10:16:55.213001+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:16:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:55 vm04 bash[20742]: cluster 2026-03-10T10:16:55.215696+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-10T10:16:56.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:55 vm04 bash[20742]: audit 2026-03-10T10:16:55.217010+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-11"}]: dispatch
2026-03-10T10:16:56.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:55 vm04 bash[20742]: audit 2026-03-10T10:16:55.241699+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]: dispatch
2026-03-10T10:16:56.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:55 vm04 bash[20742]: audit 2026-03-10T10:16:55.243889+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]: dispatch
2026-03-10T10:16:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:55 vm07 bash[23367]: cluster 2026-03-10T10:16:54.883226+0000 mon.a (mon.0) 1219 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:16:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:55 vm07 bash[23367]: audit 2026-03-10T10:16:55.028179+0000 mon.c (mon.2) 114 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:55 vm07 bash[23367]: audit 2026-03-10T10:16:55.212932+0000 mon.a (mon.0) 1220 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59507-10-cache", "mode":"readonly", "yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:55 vm07 bash[23367]: audit 2026-03-10T10:16:55.212966+0000 mon.a (mon.0) 1221 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsComplete_vm04-59252-12","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:55 vm07 bash[23367]: audit 2026-03-10T10:16:55.212984+0000 mon.a (mon.0) 1222 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-9","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:55 vm07 bash[23367]: audit 2026-03-10T10:16:55.213001+0000 mon.a (mon.0) 1223 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:16:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:55 vm07 bash[23367]: cluster 2026-03-10T10:16:55.215696+0000 mon.a (mon.0) 1224 : cluster [DBG] osdmap e100: 8 total, 8 up, 8 in
2026-03-10T10:16:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:55 vm07 bash[23367]: audit 2026-03-10T10:16:55.217010+0000 mon.a (mon.0) 1225 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-11"}]: dispatch
2026-03-10T10:16:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:55 vm07 bash[23367]: audit 2026-03-10T10:16:55.241699+0000 mon.b (mon.1) 126 : audit [INF] from='client.? 192.168.123.104:0/3909030748' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]: dispatch
2026-03-10T10:16:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:55 vm07 bash[23367]: audit 2026-03-10T10:16:55.243889+0000 mon.a (mon.0) 1226 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]: dispatch 2026-03-10T10:16:57.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: cluster 2026-03-10T10:16:55.989683+0000 mgr.y (mgr.24422) 127 : cluster [DBG] pgmap v105: 652 pgs: 128 unknown, 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: cluster 2026-03-10T10:16:55.989683+0000 mgr.y (mgr.24422) 127 : cluster [DBG] pgmap v105: 652 pgs: 128 unknown, 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: audit 2026-03-10T10:16:56.029608+0000 mon.c (mon.2) 115 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: audit 2026-03-10T10:16:56.029608+0000 mon.c (mon.2) 115 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: audit 2026-03-10T10:16:56.191900+0000 mon.a (mon.0) 1227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-11"}]': finished 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: audit 2026-03-10T10:16:56.191900+0000 mon.a (mon.0) 1227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-11"}]': finished 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: audit 2026-03-10T10:16:56.191930+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]': finished 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: audit 2026-03-10T10:16:56.191930+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]': finished 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: cluster 2026-03-10T10:16:56.195664+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: cluster 2026-03-10T10:16:56.195664+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: audit 2026-03-10T10:16:56.207410+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-11", "mode": "writeback"}]: dispatch 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:56 vm04 bash[20742]: audit 2026-03-10T10:16:56.207410+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-11", "mode": "writeback"}]: dispatch 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: cluster 2026-03-10T10:16:55.989683+0000 mgr.y (mgr.24422) 127 : cluster [DBG] pgmap v105: 652 pgs: 128 unknown, 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: cluster 2026-03-10T10:16:55.989683+0000 mgr.y (mgr.24422) 127 : cluster [DBG] pgmap v105: 652 pgs: 128 unknown, 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: audit 2026-03-10T10:16:56.029608+0000 mon.c (mon.2) 115 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: audit 2026-03-10T10:16:56.029608+0000 mon.c (mon.2) 115 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: audit 2026-03-10T10:16:56.191900+0000 mon.a (mon.0) 1227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-11"}]': finished 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: audit 2026-03-10T10:16:56.191900+0000 mon.a (mon.0) 1227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-11"}]': finished 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: audit 2026-03-10T10:16:56.191930+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]': finished 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: audit 2026-03-10T10:16:56.191930+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]': finished 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: cluster 2026-03-10T10:16:56.195664+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: cluster 2026-03-10T10:16:56.195664+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: audit 2026-03-10T10:16:56.207410+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-11", "mode": "writeback"}]: dispatch 2026-03-10T10:16:57.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:56 vm04 bash[28289]: audit 2026-03-10T10:16:56.207410+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-11", "mode": "writeback"}]: dispatch 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: cluster 2026-03-10T10:16:55.989683+0000 mgr.y (mgr.24422) 127 : cluster [DBG] pgmap v105: 652 pgs: 128 unknown, 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: cluster 2026-03-10T10:16:55.989683+0000 mgr.y (mgr.24422) 127 : cluster [DBG] pgmap v105: 652 pgs: 128 unknown, 32 creating+peering, 20 creating+activating, 472 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 259 KiB/s wr, 4 op/s 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: audit 2026-03-10T10:16:56.029608+0000 mon.c (mon.2) 115 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: audit 2026-03-10T10:16:56.029608+0000 mon.c (mon.2) 115 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: audit 2026-03-10T10:16:56.191900+0000 mon.a (mon.0) 1227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-11"}]': finished 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: audit 2026-03-10T10:16:56.191900+0000 mon.a (mon.0) 1227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-11"}]': finished 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: audit 2026-03-10T10:16:56.191930+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]': finished 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: audit 2026-03-10T10:16:56.191930+0000 mon.a (mon.0) 1228 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59507-10", "tierpool":"test-rados-api-vm04-59507-10-cache"}]': finished 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: cluster 2026-03-10T10:16:56.195664+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: cluster 2026-03-10T10:16:56.195664+0000 mon.a (mon.0) 1229 : cluster [DBG] osdmap e101: 8 total, 8 up, 8 in 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: audit 2026-03-10T10:16:56.207410+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-11", "mode": "writeback"}]: dispatch 2026-03-10T10:16:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:56 vm07 bash[23367]: audit 2026-03-10T10:16:56.207410+0000 mon.a (mon.0) 1230 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-11", "mode": "writeback"}]: dispatch 2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: audit 2026-03-10T10:16:57.030299+0000 mon.c (mon.2) 116 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: audit 2026-03-10T10:16:57.030299+0000 mon.c (mon.2) 116 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: cluster 2026-03-10T10:16:57.192466+0000 mon.a (mon.0) 1231 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: cluster 2026-03-10T10:16:57.192466+0000 mon.a (mon.0) 1231 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: audit 2026-03-10T10:16:57.196231+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-11", "mode": "writeback"}]': finished 2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: audit 2026-03-10T10:16:57.196231+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 
2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: cluster 2026-03-10T10:16:57.236761+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in
2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: audit 2026-03-10T10:16:57.252271+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.104:0/3153952360' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm04-59259-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: audit 2026-03-10T10:16:57.252540+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? 192.168.123.104:0/668915646' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm04-59252-13","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: audit 2026-03-10T10:16:57.261520+0000 mon.a (mon.0) 1235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm04-59259-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:58.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:57 vm04 bash[28289]: audit 2026-03-10T10:16:57.582442+0000 mon.a (mon.0) 1236 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:16:58.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:57 vm04 bash[20742]: audit 2026-03-10T10:16:57.030299+0000 mon.c (mon.2) 116 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:58.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:57 vm04 bash[20742]: cluster 2026-03-10T10:16:57.192466+0000 mon.a (mon.0) 1231 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:16:58.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:57 vm04 bash[20742]: audit 2026-03-10T10:16:57.196231+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-11", "mode": "writeback"}]': finished
2026-03-10T10:16:58.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:57 vm04 bash[20742]: cluster 2026-03-10T10:16:57.236761+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in
2026-03-10T10:16:58.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:57 vm04 bash[20742]: audit 2026-03-10T10:16:57.252271+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.104:0/3153952360' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm04-59259-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:58.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:57 vm04 bash[20742]: audit 2026-03-10T10:16:57.252540+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? 192.168.123.104:0/668915646' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm04-59252-13","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:58.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:57 vm04 bash[20742]: audit 2026-03-10T10:16:57.261520+0000 mon.a (mon.0) 1235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm04-59259-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:58.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:57 vm04 bash[20742]: audit 2026-03-10T10:16:57.582442+0000 mon.a (mon.0) 1236 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:16:58.217 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:57 vm07 bash[23367]: audit 2026-03-10T10:16:57.030299+0000 mon.c (mon.2) 116 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:58.217 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:57 vm07 bash[23367]: cluster 2026-03-10T10:16:57.192466+0000 mon.a (mon.0) 1231 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:16:58.217 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:57 vm07 bash[23367]: audit 2026-03-10T10:16:57.196231+0000 mon.a (mon.0) 1232 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-11", "mode": "writeback"}]': finished
2026-03-10T10:16:58.217 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:57 vm07 bash[23367]: cluster 2026-03-10T10:16:57.236761+0000 mon.a (mon.0) 1233 : cluster [DBG] osdmap e102: 8 total, 8 up, 8 in
2026-03-10T10:16:58.217 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:57 vm07 bash[23367]: audit 2026-03-10T10:16:57.252271+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.104:0/3153952360' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm04-59259-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:58.217 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:57 vm07 bash[23367]: audit 2026-03-10T10:16:57.252540+0000 mon.a (mon.0) 1234 : audit [INF] from='client.? 192.168.123.104:0/668915646' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafe_vm04-59252-13","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:16:58.217 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:57 vm07 bash[23367]: audit 2026-03-10T10:16:57.261520+0000 mon.a (mon.0) 1235 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm04-59259-10","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm04-59259-10","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:16:58.217 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:57 vm07 bash[23367]: audit 2026-03-10T10:16:57.582442+0000 mon.a (mon.0) 1236 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:58.217 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:57 vm07 bash[23367]: audit 2026-03-10T10:16:57.582442+0000 mon.a (mon.0) 1236 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: Running main() from gmock_main.cc 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [==========] Running 9 tests from 1 test suite. 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [----------] Global test environment set-up. 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [----------] 9 tests from LibRadosPools 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ RUN ] LibRadosPools.PoolList 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ OK ] LibRadosPools.PoolList (2928 ms) 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup (3198 ms) 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookup2 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ OK ] LibRadosPools.PoolLookup2 (3095 ms) 2026-03-10T10:16:58.299 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ RUN ] LibRadosPools.PoolLookupOtherInstance 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ OK ] LibRadosPools.PoolLookupOtherInstance (2724 ms) 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ RUN ] LibRadosPools.PoolReverseLookupOtherInstance 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ OK ] LibRadosPools.PoolReverseLookupOtherInstance (3151 ms) 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ RUN ] LibRadosPools.PoolDelete 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ OK ] LibRadosPools.PoolDelete (5303 ms) 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateDelete 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateDelete (4623 ms) 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ RUN ] LibRadosPools.PoolCreateWithCrushRule 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ OK ] LibRadosPools.PoolCreateWithCrushRule (5299 ms) 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ RUN ] LibRadosPools.PoolGetBaseTier 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ OK ] LibRadosPools.PoolGetBaseTier (7640 ms) 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [----------] 9 tests from LibRadosPools (37961 ms total) 2026-03-10T10:16:58.300 
INFO:tasks.workunit.client.0.vm04.stdout: api_pool: 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [----------] Global test environment tear-down 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [==========] 9 tests from 1 test suite ran. (37961 ms total) 2026-03-10T10:16:58.300 INFO:tasks.workunit.client.0.vm04.stdout: api_pool: [ PASSED ] 9 tests. 2026-03-10T10:16:58.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:16:58 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:16:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: cluster 2026-03-10T10:16:57.990118+0000 mgr.y (mgr.24422) 128 : cluster [DBG] pgmap v108: 588 pgs: 128 unknown, 9 creating+activating, 451 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:16:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: cluster 2026-03-10T10:16:57.990118+0000 mgr.y (mgr.24422) 128 : cluster [DBG] pgmap v108: 588 pgs: 128 unknown, 9 creating+activating, 451 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:16:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: audit 2026-03-10T10:16:58.031440+0000 mon.c (mon.2) 118 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: audit 2026-03-10T10:16:58.031440+0000 mon.c (mon.2) 118 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:16:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: audit 2026-03-10T10:16:58.200686+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? 192.168.123.104:0/668915646' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm04-59252-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: audit 2026-03-10T10:16:58.200686+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? 192.168.123.104:0/668915646' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm04-59252-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: audit 2026-03-10T10:16:58.200724+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm04-59259-10","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: audit 2026-03-10T10:16:58.200724+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? 
2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: cluster 2026-03-10T10:16:58.204019+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in
2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: audit 2026-03-10T10:16:58.216538+0000 mgr.y (mgr.24422) 129 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:16:59 vm04 bash[20742]: audit 2026-03-10T10:16:58.707227+0000 mon.a (mon.0) 1240 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:59 vm04 bash[28289]: cluster 2026-03-10T10:16:57.990118+0000 mgr.y (mgr.24422) 128 : cluster [DBG] pgmap v108: 588 pgs: 128 unknown, 9 creating+activating, 451 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:59 vm04 bash[28289]: audit 2026-03-10T10:16:58.031440+0000 mon.c (mon.2) 118 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:59 vm04 bash[28289]: audit 2026-03-10T10:16:58.200686+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? 192.168.123.104:0/668915646' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm04-59252-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:59 vm04 bash[28289]: audit 2026-03-10T10:16:58.200724+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm04-59259-10","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:59 vm04 bash[28289]: cluster 2026-03-10T10:16:58.204019+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in
2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:59 vm04 bash[28289]: audit 2026-03-10T10:16:58.216538+0000 mgr.y (mgr.24422) 129 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:59.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:16:59 vm04 bash[28289]: audit 2026-03-10T10:16:58.707227+0000 mon.a (mon.0) 1240 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:16:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:59 vm07 bash[23367]: cluster 2026-03-10T10:16:57.990118+0000 mgr.y (mgr.24422) 128 : cluster [DBG] pgmap v108: 588 pgs: 128 unknown, 9 creating+activating, 451 active+clean; 145 MiB data, 883 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:16:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:59 vm07 bash[23367]: audit 2026-03-10T10:16:58.031440+0000 mon.c (mon.2) 118 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:16:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:59 vm07 bash[23367]: audit 2026-03-10T10:16:58.200686+0000 mon.a (mon.0) 1237 : audit [INF] from='client.? 192.168.123.104:0/668915646' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafe_vm04-59252-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:59 vm07 bash[23367]: audit 2026-03-10T10:16:58.200724+0000 mon.a (mon.0) 1238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripSparseReadPP_vm04-59259-10","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:16:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:59 vm07 bash[23367]: cluster 2026-03-10T10:16:58.204019+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in
2026-03-10T10:16:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:59 vm07 bash[23367]: audit 2026-03-10T10:16:58.216538+0000 mgr.y (mgr.24422) 129 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:16:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:16:59 vm07 bash[23367]: audit 2026-03-10T10:16:58.707227+0000 mon.a (mon.0) 1240 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.035213+0000 mon.c (mon.2) 119 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.108482+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.108635+0000 mgr.y (mgr.24422) 130 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.120323+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.120440+0000 mgr.y (mgr.24422) 131 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.121459+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.121545+0000 mgr.y (mgr.24422) 132 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.122473+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.122559+0000 mgr.y (mgr.24422) 133 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.123038+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.123109+0000 mgr.y (mgr.24422) 134 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.123822+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.123956+0000 mgr.y (mgr.24422) 135 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.124440+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.124523+0000 mgr.y (mgr.24422) 136 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.126404+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.126502+0000 mgr.y (mgr.24422) 137 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.128259+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch
2026-03-10T10:17:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.128372+0000 mgr.y (mgr.24422) 138 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.128868+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.128933+0000 mgr.y (mgr.24422) 139 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.204344+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: cluster 2026-03-10T10:16:59.207139+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:16:59.210508+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: cluster 2026-03-10T10:16:59.835707+0000 osd.1 (osd.1) 7 : cluster [DBG] 15.4 deep-scrub starts
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: cluster 2026-03-10T10:16:59.836591+0000 osd.1 (osd.1) 8 : cluster [DBG] 15.4 deep-scrub ok
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:00 vm04 bash[28289]: audit 2026-03-10T10:17:00.035964+0000 mon.c (mon.2) 130 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.035213+0000 mon.c (mon.2) 119 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.108482+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.108635+0000 mgr.y (mgr.24422) 130 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.120323+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.120440+0000 mgr.y (mgr.24422) 131 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.121459+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.121545+0000 mgr.y (mgr.24422) 132 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.122473+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.122559+0000 mgr.y (mgr.24422) 133 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.123038+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.123109+0000 mgr.y (mgr.24422) 134 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.123822+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.123956+0000 mgr.y (mgr.24422) 135 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.124440+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.124523+0000 mgr.y (mgr.24422) 136 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.126404+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.126502+0000 mgr.y (mgr.24422) 137 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch
2026-03-10T10:17:00.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.128259+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch
2026-03-10T10:17:00.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.128372+0000 mgr.y (mgr.24422) 138 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch
2026-03-10T10:17:00.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.128868+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch
2026-03-10T10:17:00.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.128933+0000 mgr.y (mgr.24422) 139 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch
2026-03-10T10:17:00.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.204344+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:17:00.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: cluster 2026-03-10T10:16:59.207139+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in
2026-03-10T10:17:00.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:16:59.210508+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11"}]: dispatch
2026-03-10T10:17:00.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: cluster 2026-03-10T10:16:59.835707+0000 osd.1 (osd.1) 7 : cluster [DBG] 15.4 deep-scrub starts
2026-03-10T10:17:00.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: cluster 2026-03-10T10:16:59.836591+0000 osd.1 (osd.1) 8 : cluster [DBG] 15.4 deep-scrub ok
2026-03-10T10:17:00.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:00 vm04 bash[20742]: audit 2026-03-10T10:17:00.035964+0000 mon.c (mon.2) 130 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.035213+0000 mon.c (mon.2) 119 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.108482+0000 mon.c (mon.2) 120 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.108635+0000 mgr.y (mgr.24422) 130 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.0"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.120323+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.120440+0000 mgr.y (mgr.24422) 131 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.1"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.121459+0000 mon.c (mon.2) 122 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.121545+0000 mgr.y (mgr.24422) 132 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.2"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.122473+0000 mon.c (mon.2) 123 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.122559+0000 mgr.y (mgr.24422) 133 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.3"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.123038+0000 mon.c (mon.2) 124 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.123109+0000 mgr.y (mgr.24422) 134 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.4"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.123822+0000 mon.c (mon.2) 125 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.123956+0000 mgr.y (mgr.24422) 135 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.5"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.124440+0000 mon.c (mon.2) 126 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.124523+0000 mgr.y (mgr.24422) 136 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.6"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.126404+0000 mon.c (mon.2) 127 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.126502+0000 mgr.y (mgr.24422) 137 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.7"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.128259+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.128372+0000 mgr.y (mgr.24422) 138 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.8"}]: dispatch
2026-03-10T10:17:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.128868+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch
2026-03-10T10:17:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.128933+0000 mgr.y (mgr.24422) 139 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "15.9"}]: dispatch
2026-03-10T10:17:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.204344+0000 mon.a (mon.0) 1241 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:17:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: cluster 2026-03-10T10:16:59.207139+0000 mon.a (mon.0) 1242 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in
2026-03-10T10:17:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:16:59.210508+0000 mon.a (mon.0) 1243 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11"}]: dispatch
2026-03-10T10:17:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: cluster 2026-03-10T10:16:59.835707+0000 osd.1 (osd.1) 7 : cluster [DBG] 15.4 deep-scrub starts
2026-03-10T10:17:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: cluster 2026-03-10T10:16:59.836591+0000 osd.1 (osd.1) 8 : cluster [DBG] 15.4 deep-scrub ok
2026-03-10T10:17:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:00 vm07 bash[23367]: audit 2026-03-10T10:17:00.035964+0000 mon.c (mon.2) 130 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:16:59.423969+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts
2026-03-10T10:17:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:16:59.426140+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok
2026-03-10T10:17:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:16:59.854309+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.2 deep-scrub starts
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:16:59.875678+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.2 deep-scrub ok
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:16:59.887253+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:16:59.888139+0000 osd.0 (osd.0) 6 : cluster [DBG] 15.6 deep-scrub ok
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:16:59.990702+0000 mgr.y (mgr.24422) 140 : cluster [DBG] pgmap v111: 460 pgs: 32 creating+peering, 6 active+clean+snaptrim, 422 active+clean; 168 MiB data, 899 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 6.0 MiB/s wr, 7 op/s
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:17:00.204413+0000 mon.a (mon.0) 1244 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:17:00.206478+0000 mon.a (mon.0) 1245 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: audit 2026-03-10T10:17:00.215657+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11"}]': finished
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:17:00.219811+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: audit 2026-03-10T10:17:00.266221+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: audit 2026-03-10T10:17:00.266437+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: audit 2026-03-10T10:17:00.273084+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.104:0/854122284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm04-59259-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: audit 2026-03-10T10:17:00.273318+0000 mon.a (mon.0) 1249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm04-59259-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: audit 2026-03-10T10:17:00.310168+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 192.168.123.104:0/754474874' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm04-59252-14","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: audit 2026-03-10T10:17:00.310168+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 
192.168.123.104:0/754474874' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm04-59252-14","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:17:00.805236+0000 osd.1 (osd.1) 9 : cluster [DBG] 15.7 deep-scrub starts 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:17:00.805236+0000 osd.1 (osd.1) 9 : cluster [DBG] 15.7 deep-scrub starts 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:17:00.864527+0000 osd.1 (osd.1) 10 : cluster [DBG] 15.7 deep-scrub ok 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: cluster 2026-03-10T10:17:00.864527+0000 osd.1 (osd.1) 10 : cluster [DBG] 15.7 deep-scrub ok 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: audit 2026-03-10T10:17:01.037778+0000 mon.c (mon.2) 133 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:01 vm04 bash[20742]: audit 2026-03-10T10:17:01.037778+0000 mon.c (mon.2) 133 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.423969+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.423969+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.426140+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.426140+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.854309+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.2 deep-scrub starts 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.854309+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.2 deep-scrub starts 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.875678+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.2 deep-scrub ok 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.875678+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.2 deep-scrub ok 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.887253+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.887253+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts 2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 
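The audit sequence above also captures a cache-tier teardown: `osd tier remove-overlay` on the base pool is dispatched and finished, then `osd tier remove` detaches the cache pool, and each step bumps the osdmap epoch (e104, e105). A minimal sketch of the same two-step teardown driven from Python via the ceph CLI; the pool names are the throwaway ones this test run created:

    import subprocess

    # Cache-tier teardown as recorded in the audit stream: first drop the
    # overlay from the base pool, then detach the cache pool from it.
    base = "test-rados-api-vm04-59491-6"    # base pool name from the log
    cache = "test-rados-api-vm04-59491-11"  # cache pool name from the log
    subprocess.run(["ceph", "osd", "tier", "remove-overlay", base], check=True)
    subprocess.run(["ceph", "osd", "tier", "remove", base, cache], check=True)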
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.423969+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.426140+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.854309+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.2 deep-scrub starts
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.875678+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.2 deep-scrub ok
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.887253+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.888139+0000 osd.0 (osd.0) 6 : cluster [DBG] 15.6 deep-scrub ok
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:16:59.990702+0000 mgr.y (mgr.24422) 140 : cluster [DBG] pgmap v111: 460 pgs: 32 creating+peering, 6 active+clean+snaptrim, 422 active+clean; 168 MiB data, 899 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 6.0 MiB/s wr, 7 op/s
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:17:00.204413+0000 mon.a (mon.0) 1244 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:17:00.206478+0000 mon.a (mon.0) 1245 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:17:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: audit 2026-03-10T10:17:00.215657+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11"}]': finished
2026-03-10T10:17:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:17:00.219811+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in
2026-03-10T10:17:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: audit 2026-03-10T10:17:00.266221+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: audit 2026-03-10T10:17:00.266437+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: audit 2026-03-10T10:17:00.273084+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.104:0/854122284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm04-59259-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: audit 2026-03-10T10:17:00.273318+0000 mon.a (mon.0) 1249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm04-59259-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: audit 2026-03-10T10:17:00.310168+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 192.168.123.104:0/754474874' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm04-59252-14","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:17:00.805236+0000 osd.1 (osd.1) 9 : cluster [DBG] 15.7 deep-scrub starts
2026-03-10T10:17:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: cluster 2026-03-10T10:17:00.864527+0000 osd.1 (osd.1) 10 : cluster [DBG] 15.7 deep-scrub ok
2026-03-10T10:17:01.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:01 vm04 bash[28289]: audit 2026-03-10T10:17:01.037778+0000 mon.c (mon.2) 133 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:16:59.423969+0000 osd.6 (osd.6) 9 : cluster [DBG] 15.1 deep-scrub starts
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:16:59.426140+0000 osd.6 (osd.6) 10 : cluster [DBG] 15.1 deep-scrub ok
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:16:59.854309+0000 osd.2 (osd.2) 9 : cluster [DBG] 15.2 deep-scrub starts
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:16:59.875678+0000 osd.2 (osd.2) 10 : cluster [DBG] 15.2 deep-scrub ok
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:16:59.887253+0000 osd.0 (osd.0) 5 : cluster [DBG] 15.6 deep-scrub starts
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:16:59.888139+0000 osd.0 (osd.0) 6 : cluster [DBG] 15.6 deep-scrub ok
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:16:59.990702+0000 mgr.y (mgr.24422) 140 : cluster [DBG] pgmap v111: 460 pgs: 32 creating+peering, 6 active+clean+snaptrim, 422 active+clean; 168 MiB data, 899 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 6.0 MiB/s wr, 7 op/s
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:17:00.204413+0000 mon.a (mon.0) 1244 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:17:00.206478+0000 mon.a (mon.0) 1245 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: audit 2026-03-10T10:17:00.215657+0000 mon.a (mon.0) 1246 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-11"}]': finished
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:17:00.219811+0000 mon.a (mon.0) 1247 : cluster [DBG] osdmap e105: 8 total, 8 up, 8 in
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: audit 2026-03-10T10:17:00.266221+0000 mon.c (mon.2) 131 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: audit 2026-03-10T10:17:00.266437+0000 mon.a (mon.0) 1248 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: audit 2026-03-10T10:17:00.273084+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.104:0/854122284' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm04-59259-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: audit 2026-03-10T10:17:00.273318+0000 mon.a (mon.0) 1249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm04-59259-11","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: audit 2026-03-10T10:17:00.310168+0000 mon.a (mon.0) 1250 : audit [INF] from='client.? 192.168.123.104:0/754474874' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValue_vm04-59252-14","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:17:00.805236+0000 osd.1 (osd.1) 9 : cluster [DBG] 15.7 deep-scrub starts
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: cluster 2026-03-10T10:17:00.864527+0000 osd.1 (osd.1) 10 : cluster [DBG] 15.7 deep-scrub ok
2026-03-10T10:17:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:01 vm07 bash[23367]: audit 2026-03-10T10:17:01.037778+0000 mon.c (mon.2) 133 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
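Every management operation the librados API tests perform above (pg deep-scrub, osd tier remove, osd pool application enable) reaches the monitors as a JSON-encoded mon command; the audit channel records each one first as dispatch and, for map-changing commands, later as finished. A minimal sketch of issuing one of these commands through the rados Python binding; the conffile path is illustrative:

    import json
    import rados

    # Connect with an illustrative conffile; the clients in this run use the
    # cluster's own ceph.conf and the client.admin keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Same JSON payload shape as the cmd=[...] entries in the audit log.
        cmd = json.dumps({"prefix": "pg deep-scrub", "pgid": "15.8"})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        print(ret, outs)
    finally:
        cluster.shutdown()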
2026-03-10T10:17:02.519 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: Running main() from gmock_main.cc
2026-03-10T10:17:02.519 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [==========] Running 14 tests from 1 test suite.
2026-03-10T10:17:02.519 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [----------] Global test environment set-up.
2026-03-10T10:17:02.519 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps
2026-03-10T10:17:02.519 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.SetOpFlags
2026-03-10T10:17:02.519 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.SetOpFlags (2450 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertExists
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertExists (3205 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.AssertVersion
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.AssertVersion (3122 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpXattr
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpXattr (2709 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.Read
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.Read (3155 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.Checksum
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.Checksum (3021 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.RWOrderedRead
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.RWOrderedRead (2856 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.ShortRead
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.ShortRead (3080 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.Exec
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.Exec (3184 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.Stat
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.Stat (3079 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.Omap
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.Omap (2647 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.OmapNuls
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.OmapNuls (2971 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.GetXattrs
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.GetXattrs (3041 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ RUN ] NeoRadosReadOps.CmpExt
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ OK ] NeoRadosReadOps.CmpExt (3244 ms)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [----------] 14 tests from NeoRadosReadOps (41764 ms total)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations:
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [----------] Global test environment tear-down
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [==========] 14 tests from 1 test suite ran. (41764 ms total)
2026-03-10T10:17:02.520 INFO:tasks.workunit.client.0.vm04.stdout: read_operations: [ PASSED ] 14 tests.
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: Running main() from gmock_main.cc
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [==========] Running 14 tests from 1 test suite.
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [----------] Global test environment set-up.
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [----------] 14 tests from NeoRadosIo
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.Limits
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.Limits (2613 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.SimpleWrite
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.SimpleWrite (3134 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.ReadOp
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.ReadOp (3112 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.SparseRead
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.SparseRead (2704 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.RoundTrip
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.RoundTrip (3178 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.ReadIntoBuufferlist
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.ReadIntoBuufferlist (3009 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.OverlappingWriteRoundTrip
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.OverlappingWriteRoundTrip (2833 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.WriteFullRoundTrip
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.WriteFullRoundTrip (3093 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.AppendRoundTrip
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.AppendRoundTrip (3215 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.Trunc
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.Trunc (3055 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.Remove
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.Remove (2654 ms)
2026-03-10T10:17:02.524 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.XattrsRoundTrip
2026-03-10T10:17:02.525 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.XattrsRoundTrip (2961 ms)
2026-03-10T10:17:02.525 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.RmXattr
2026-03-10T10:17:02.525 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.RmXattr (3036 ms)
2026-03-10T10:17:02.525 INFO:tasks.workunit.client.0.vm04.stdout: io: [ RUN ] NeoRadosIo.GetXattrs
2026-03-10T10:17:02.525 INFO:tasks.workunit.client.0.vm04.stdout: io: [ OK ] NeoRadosIo.GetXattrs (3253 ms)
2026-03-10T10:17:02.525 INFO:tasks.workunit.client.0.vm04.stdout: io: [----------] 14 tests from NeoRadosIo (41850 ms total)
2026-03-10T10:17:02.525 INFO:tasks.workunit.client.0.vm04.stdout: io:
2026-03-10T10:17:02.525 INFO:tasks.workunit.client.0.vm04.stdout: io: [----------] Global test environment tear-down
2026-03-10T10:17:02.525 INFO:tasks.workunit.client.0.vm04.stdout: io: [==========] 14 tests from 1 test suite ran. (41850 ms total)
2026-03-10T10:17:02.525 INFO:tasks.workunit.client.0.vm04.stdout: io: [ PASSED ] 14 tests.
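Both neorados suites above completed cleanly: 14 gmock tests each for NeoRadosReadOps and NeoRadosIo, all passing. The workunit appears to run several such gtest binaries in parallel and prefix each output line with a short label (read_operations:, io:). A sketch of re-running a single case in isolation with the standard googletest filter flag; the binary name below is assumed for illustration, not taken from the log:

    import subprocess

    # --gtest_filter is a standard googletest option; the binary name is
    # hypothetical and would come from the built test suite.
    subprocess.run(
        ["./ceph_test_neorados_read_operations",
         "--gtest_filter=NeoRadosReadOps.CmpExt"],
        check=True,
    )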
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: cluster 2026-03-10T10:17:00.419772+0000 osd.6 (osd.6) 11 : cluster [DBG] 15.9 deep-scrub starts
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: cluster 2026-03-10T10:17:00.420382+0000 osd.6 (osd.6) 12 : cluster [DBG] 15.9 deep-scrub ok
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: cluster 2026-03-10T10:17:00.812718+0000 osd.2 (osd.2) 11 : cluster [DBG] 15.8 deep-scrub starts
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: cluster 2026-03-10T10:17:00.883467+0000 osd.2 (osd.2) 12 : cluster [DBG] 15.8 deep-scrub ok
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: audit 2026-03-10T10:17:01.436704+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]': finished
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: audit 2026-03-10T10:17:01.436808+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm04-59259-11","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: audit 2026-03-10T10:17:01.437173+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.104:0/754474874' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm04-59252-14","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: cluster 2026-03-10T10:17:01.440911+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: audit 2026-03-10T10:17:01.453528+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: cluster 2026-03-10T10:17:01.463137+0000 osd.6 (osd.6) 13 : cluster [DBG] 15.5 deep-scrub starts
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: audit 2026-03-10T10:17:01.464606+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: cluster 2026-03-10T10:17:00.419772+0000 osd.6 (osd.6) 11 : cluster [DBG] 15.9 deep-scrub starts
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: cluster 2026-03-10T10:17:00.420382+0000 osd.6 (osd.6) 12 : cluster [DBG] 15.9 deep-scrub ok
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: cluster 2026-03-10T10:17:00.812718+0000 osd.2 (osd.2) 11 : cluster [DBG] 15.8 deep-scrub starts
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: cluster 2026-03-10T10:17:00.883467+0000 osd.2 (osd.2) 12 : cluster [DBG] 15.8 deep-scrub ok
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: audit 2026-03-10T10:17:01.436704+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]': finished
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: audit 2026-03-10T10:17:01.436808+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm04-59259-11","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: audit 2026-03-10T10:17:01.437173+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.104:0/754474874' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm04-59252-14","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: cluster 2026-03-10T10:17:01.440911+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: audit 2026-03-10T10:17:01.453528+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: cluster 2026-03-10T10:17:01.463137+0000 osd.6 (osd.6) 13 : cluster [DBG] 15.5 deep-scrub starts
2026-03-10T10:17:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: audit 2026-03-10T10:17:01.464606+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:02.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: cluster 2026-03-10T10:17:01.649437+0000 osd.6 (osd.6) 14 : cluster [DBG] 15.5 deep-scrub ok
2026-03-10T10:17:02.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: cluster 2026-03-10T10:17:01.991165+0000 mgr.y (mgr.24422) 141 : cluster [DBG] pgmap v114: 548 pgs: 128 unknown, 32 creating+peering, 2 active+clean+snaptrim, 386 active+clean; 168 MiB data, 899 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.0 MiB/s wr, 4 op/s
2026-03-10T10:17:02.705 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:02 vm04 bash[28289]: audit 2026-03-10T10:17:02.038595+0000 mon.c (mon.2) 135 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:02.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: cluster 2026-03-10T10:17:01.649437+0000 osd.6 (osd.6) 14 : cluster [DBG] 15.5 deep-scrub ok
2026-03-10T10:17:02.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: cluster 2026-03-10T10:17:01.991165+0000 mgr.y (mgr.24422) 141 : cluster [DBG] pgmap v114: 548 pgs: 128 unknown, 32 creating+peering, 2 active+clean+snaptrim, 386 active+clean; 168 MiB data, 899 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.0 MiB/s wr, 4 op/s
2026-03-10T10:17:02.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:02 vm04 bash[20742]: audit 2026-03-10T10:17:02.038595+0000 mon.c (mon.2) 135 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
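The once-per-second {"prefix":"status","format":"json"} dispatches from client 192.168.123.104:0/3130540433 are a client polling cluster status in JSON form; the pgmap summaries above (v111, v114) are the mgr's view of the same state. A rough, purely illustrative equivalent of such a poll loop:

    import json
    import subprocess
    import time

    # Poll `ceph status --format json` roughly once per second, as the
    # audit stream shows a client doing, and print a couple of fields.
    for _ in range(3):
        out = subprocess.check_output(["ceph", "status", "--format", "json"])
        status = json.loads(out)
        print(status["health"]["status"], status["pgmap"]["num_pgs"])
        time.sleep(1)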
2026-03-10T10:17:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: cluster 2026-03-10T10:17:00.419772+0000 osd.6 (osd.6) 11 : cluster [DBG] 15.9 deep-scrub starts
2026-03-10T10:17:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: cluster 2026-03-10T10:17:00.420382+0000 osd.6 (osd.6) 12 : cluster [DBG] 15.9 deep-scrub ok
2026-03-10T10:17:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: cluster 2026-03-10T10:17:00.812718+0000 osd.2 (osd.2) 11 : cluster [DBG] 15.8 deep-scrub starts
2026-03-10T10:17:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: cluster 2026-03-10T10:17:00.883467+0000 osd.2 (osd.2) 12 : cluster [DBG] 15.8 deep-scrub ok
2026-03-10T10:17:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: audit 2026-03-10T10:17:01.436704+0000 mon.a (mon.0) 1251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsEC_vm04-59531-10"}]': finished
2026-03-10T10:17:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: audit 2026-03-10T10:17:01.436808+0000 mon.a (mon.0) 1252 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsCompletePP_vm04-59259-11","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: audit 2026-03-10T10:17:01.437173+0000 mon.a (mon.0) 1253 : audit [INF] from='client.? 192.168.123.104:0/754474874' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValue_vm04-59252-14","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: cluster 2026-03-10T10:17:01.440911+0000 mon.a (mon.0) 1254 : cluster [DBG] osdmap e106: 8 total, 8 up, 8 in
2026-03-10T10:17:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: audit 2026-03-10T10:17:01.453528+0000 mon.c (mon.2) 134 : audit [INF] from='client.? 192.168.123.104:0/1011972710' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: cluster 2026-03-10T10:17:01.463137+0000 osd.6 (osd.6) 13 : cluster [DBG] 15.5 deep-scrub starts
2026-03-10T10:17:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: audit 2026-03-10T10:17:01.464606+0000 mon.a (mon.0) 1255 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]: dispatch
2026-03-10T10:17:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: cluster 2026-03-10T10:17:01.649437+0000 osd.6 (osd.6) 14 : cluster [DBG] 15.5 deep-scrub ok
2026-03-10T10:17:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: cluster 2026-03-10T10:17:01.991165+0000 mgr.y (mgr.24422) 141 : cluster [DBG] pgmap v114: 548 pgs: 128 unknown, 32 creating+peering, 2 active+clean+snaptrim, 386 active+clean; 168 MiB data, 899 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.0 MiB/s wr, 4 op/s
2026-03-10T10:17:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:02 vm07 bash[23367]: audit 2026-03-10T10:17:02.038595+0000 mon.c (mon.2) 135 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: cluster 2026-03-10T10:17:01.828212+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.0 deep-scrub starts
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: cluster 2026-03-10T10:17:01.830393+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.0 deep-scrub ok
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:02.446537+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]': finished
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: cluster 2026-03-10T10:17:02.451250+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:02.490076+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-13","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:02.693730+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:02.694209+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:02.826940+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:02.827236+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:02.828011+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:02.828290+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:02.828290+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:03.039348+0000 mon.c (mon.2) 139 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:03.450 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:03 vm04 bash[20742]: audit 2026-03-10T10:17:03.039348+0000 mon.c (mon.2) 139 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:03.450 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:17:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:17:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:17:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: cluster 2026-03-10T10:17:01.828212+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.0 deep-scrub starts 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: cluster 2026-03-10T10:17:01.828212+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.0 deep-scrub starts 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: cluster 2026-03-10T10:17:01.830393+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.0 deep-scrub ok 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: cluster 2026-03-10T10:17:01.830393+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.0 deep-scrub ok 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.446537+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]': finished 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.446537+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]': finished 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: cluster 2026-03-10T10:17:02.451250+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: cluster 2026-03-10T10:17:02.451250+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.490076+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.490076+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.693730+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.693730+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.694209+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.694209+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.826940+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.826940+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.827236+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.827236+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.828011+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.828011+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 
192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.828290+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:02.828290+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:03.039348+0000 mon.c (mon.2) 139 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:03.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:03 vm04 bash[28289]: audit 2026-03-10T10:17:03.039348+0000 mon.c (mon.2) 139 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: cluster 2026-03-10T10:17:01.828212+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.0 deep-scrub starts 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: cluster 2026-03-10T10:17:01.828212+0000 osd.2 (osd.2) 13 : cluster [DBG] 15.0 deep-scrub starts 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: cluster 2026-03-10T10:17:01.830393+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.0 deep-scrub ok 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: cluster 2026-03-10T10:17:01.830393+0000 osd.2 (osd.2) 14 : cluster [DBG] 15.0 deep-scrub ok 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.446537+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]': finished 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.446537+0000 mon.a (mon.0) 1256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsEC_vm04-59531-10"}]': finished 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: cluster 2026-03-10T10:17:02.451250+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: cluster 2026-03-10T10:17:02.451250+0000 mon.a (mon.0) 1257 : cluster [DBG] osdmap e107: 8 total, 8 up, 8 in 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.490076+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.490076+0000 mon.a (mon.0) 1258 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.693730+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.693730+0000 mon.c (mon.2) 136 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.694209+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.694209+0000 mon.a (mon.0) 1259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.826940+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.826940+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.827236+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.827236+0000 mon.a (mon.0) 1260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.828011+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 
2026-03-10T10:17:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.828011+0000 mon.c (mon.2) 138 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:02.828290+0000 mon.a (mon.0) 1261 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:03 vm07 bash[23367]: audit 2026-03-10T10:17:03.039348+0000 mon.c (mon.2) 139 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:04 vm07 bash[23367]: audit 2026-03-10T10:17:03.459500+0000 mon.a (mon.0) 1262 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:04 vm07 bash[23367]: audit 2026-03-10T10:17:03.459553+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:04 vm07 bash[23367]: cluster 2026-03-10T10:17:03.462714+0000 mon.a (mon.0) 1264 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in
2026-03-10T10:17:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:04 vm07 bash[23367]: audit 2026-03-10T10:17:03.482152+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.104:0/3462791976' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:04 vm07 bash[23367]: audit 2026-03-10T10:17:03.498482+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:04 vm07 bash[23367]: audit 2026-03-10T10:17:03.509241+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:04 vm07 bash[23367]: audit 2026-03-10T10:17:03.509429+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:04 vm07 bash[23367]: audit 2026-03-10T10:17:03.518729+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.104:0/2371376710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:04 vm07 bash[23367]: cluster 2026-03-10T10:17:03.992319+0000 mgr.y (mgr.24422) 142 : cluster [DBG] pgmap v117: 516 pgs: 30 unknown, 54 creating+peering, 432 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 12 MiB/s wr, 5 op/s
2026-03-10T10:17:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:04 vm07 bash[23367]: audit 2026-03-10T10:17:04.185793+0000 mon.c (mon.2) 141 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:03.459500+0000 mon.a (mon.0) 1262 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:03.459553+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: cluster 2026-03-10T10:17:03.462714+0000 mon.a (mon.0) 1264 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in
2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:03.482152+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.104:0/3462791976' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:03.498482+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:03.509241+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:03.509429+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:03.509429+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:03.518729+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.104:0/2371376710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:03.518729+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.104:0/2371376710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: cluster 2026-03-10T10:17:03.992319+0000 mgr.y (mgr.24422) 142 : cluster [DBG] pgmap v117: 516 pgs: 30 unknown, 54 creating+peering, 432 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 12 MiB/s wr, 5 op/s 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: cluster 2026-03-10T10:17:03.992319+0000 mgr.y (mgr.24422) 142 : cluster [DBG] pgmap v117: 516 pgs: 30 unknown, 54 creating+peering, 432 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 12 MiB/s wr, 5 op/s 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:04.185793+0000 mon.c (mon.2) 141 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:04 vm04 bash[20742]: audit 2026-03-10T10:17:04.185793+0000 mon.c (mon.2) 141 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.459500+0000 mon.a (mon.0) 1262 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.459500+0000 mon.a (mon.0) 1262 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.459553+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.459553+0000 mon.a (mon.0) 1263 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: cluster 2026-03-10T10:17:03.462714+0000 mon.a (mon.0) 1264 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: cluster 2026-03-10T10:17:03.462714+0000 mon.a (mon.0) 1264 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.482152+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.104:0/3462791976' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.482152+0000 mon.b (mon.1) 127 : audit [INF] from='client.? 192.168.123.104:0/3462791976' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.498482+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.498482+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 
192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.509241+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.509241+0000 mon.a (mon.0) 1265 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.509429+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.509429+0000 mon.a (mon.0) 1266 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.518729+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.104:0/2371376710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:03.518729+0000 mon.a (mon.0) 1267 : audit [INF] from='client.? 192.168.123.104:0/2371376710' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: cluster 2026-03-10T10:17:03.992319+0000 mgr.y (mgr.24422) 142 : cluster [DBG] pgmap v117: 516 pgs: 30 unknown, 54 creating+peering, 432 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 12 MiB/s wr, 5 op/s 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: cluster 2026-03-10T10:17:03.992319+0000 mgr.y (mgr.24422) 142 : cluster [DBG] pgmap v117: 516 pgs: 30 unknown, 54 creating+peering, 432 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 12 MiB/s wr, 5 op/s 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:04.185793+0000 mon.c (mon.2) 141 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:04.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:04 vm04 bash[28289]: audit 2026-03-10T10:17:04.185793+0000 mon.c (mon.2) 141 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:05 vm07 bash[23367]: audit 2026-03-10T10:17:04.470807+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:05 vm07 bash[23367]: audit 2026-03-10T10:17:04.470807+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:05 vm07 bash[23367]: audit 2026-03-10T10:17:04.470857+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.104:0/2371376710' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:05 vm07 bash[23367]: audit 2026-03-10T10:17:04.470857+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.104:0/2371376710' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:05 vm07 bash[23367]: cluster 2026-03-10T10:17:04.506174+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T10:17:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:05 vm07 bash[23367]: cluster 2026-03-10T10:17:04.506174+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T10:17:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:05 vm07 bash[23367]: audit 2026-03-10T10:17:05.189198+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:05 vm07 bash[23367]: audit 2026-03-10T10:17:05.189198+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:05 vm04 bash[20742]: audit 2026-03-10T10:17:04.470807+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:05 vm04 bash[20742]: audit 2026-03-10T10:17:04.470807+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:05 vm04 bash[20742]: audit 2026-03-10T10:17:04.470857+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.104:0/2371376710' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:05 vm04 bash[20742]: audit 2026-03-10T10:17:04.470857+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.104:0/2371376710' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:05 vm04 bash[20742]: cluster 2026-03-10T10:17:04.506174+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T10:17:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:05 vm04 bash[20742]: cluster 2026-03-10T10:17:04.506174+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T10:17:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:05 vm04 bash[20742]: audit 2026-03-10T10:17:05.189198+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:05 vm04 bash[20742]: audit 2026-03-10T10:17:05.189198+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:05 vm04 bash[28289]: audit 2026-03-10T10:17:04.470807+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:05 vm04 bash[28289]: audit 2026-03-10T10:17:04.470807+0000 mon.a (mon.0) 1268 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Flush_vm04-59252-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:05 vm04 bash[28289]: audit 2026-03-10T10:17:04.470857+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 192.168.123.104:0/2371376710' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:05 vm04 bash[28289]: audit 2026-03-10T10:17:04.470857+0000 mon.a (mon.0) 1269 : audit [INF] from='client.? 
192.168.123.104:0/2371376710' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "IsSafePP_vm04-59259-12","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:05 vm04 bash[28289]: cluster 2026-03-10T10:17:04.506174+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T10:17:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:05 vm04 bash[28289]: cluster 2026-03-10T10:17:04.506174+0000 mon.a (mon.0) 1270 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T10:17:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:05 vm04 bash[28289]: audit 2026-03-10T10:17:05.189198+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:05 vm04 bash[28289]: audit 2026-03-10T10:17:05.189198+0000 mon.c (mon.2) 142 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:05.208180+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:05.208180+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:05.211766+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:05.211766+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: audit 2026-03-10T10:17:05.483556+0000 mon.a (mon.0) 1271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: audit 2026-03-10T10:17:05.483556+0000 mon.a (mon.0) 1271 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:05.486492+0000 mon.a (mon.0) 1272 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:05.486492+0000 mon.a (mon.0) 1272 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:05.992668+0000 mgr.y (mgr.24422) 143 : cluster [DBG] pgmap v120: 428 pgs: 8 unknown, 20 creating+peering, 400 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 12 MiB/s wr, 4 op/s 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:05.992668+0000 mgr.y (mgr.24422) 143 : cluster [DBG] pgmap v120: 428 pgs: 8 unknown, 20 creating+peering, 400 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 12 MiB/s wr, 4 op/s 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: audit 2026-03-10T10:17:06.190307+0000 mon.c (mon.2) 143 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: audit 2026-03-10T10:17:06.190307+0000 mon.c (mon.2) 143 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:06.190747+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:06.190747+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:06.229534+0000 mon.a (mon.0) 1274 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: cluster 2026-03-10T10:17:06.229534+0000 mon.a (mon.0) 1274 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: audit 2026-03-10T10:17:06.272166+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.104:0/1790046302' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: audit 2026-03-10T10:17:06.272166+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 
192.168.123.104:0/1790046302' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: audit 2026-03-10T10:17:06.274231+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: audit 2026-03-10T10:17:06.274231+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: audit 2026-03-10T10:17:06.274458+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:06 vm07 bash[23367]: audit 2026-03-10T10:17:06.274458+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:05.208180+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:05.208180+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:05.211766+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:05.211766+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: audit 2026-03-10T10:17:05.483556+0000 mon.a (mon.0) 1271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: audit 2026-03-10T10:17:05.483556+0000 mon.a (mon.0) 1271 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:05.486492+0000 mon.a (mon.0) 1272 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:05.486492+0000 mon.a (mon.0) 1272 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:05.992668+0000 mgr.y (mgr.24422) 143 : cluster [DBG] pgmap v120: 428 pgs: 8 unknown, 20 creating+peering, 400 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 12 MiB/s wr, 4 op/s 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:05.992668+0000 mgr.y (mgr.24422) 143 : cluster [DBG] pgmap v120: 428 pgs: 8 unknown, 20 creating+peering, 400 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 12 MiB/s wr, 4 op/s 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: audit 2026-03-10T10:17:06.190307+0000 mon.c (mon.2) 143 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: audit 2026-03-10T10:17:06.190307+0000 mon.c (mon.2) 143 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:06.190747+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:06.190747+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:06.229534+0000 mon.a (mon.0) 1274 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: cluster 2026-03-10T10:17:06.229534+0000 mon.a (mon.0) 1274 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: audit 2026-03-10T10:17:06.272166+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.104:0/1790046302' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: audit 2026-03-10T10:17:06.272166+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 
192.168.123.104:0/1790046302' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: audit 2026-03-10T10:17:06.274231+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: audit 2026-03-10T10:17:06.274231+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: audit 2026-03-10T10:17:06.274458+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:06 vm04 bash[20742]: audit 2026-03-10T10:17:06.274458+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:05.208180+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:05.208180+0000 osd.4 (osd.4) 5 : cluster [DBG] 15.3 deep-scrub starts 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:05.211766+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:05.211766+0000 osd.4 (osd.4) 6 : cluster [DBG] 15.3 deep-scrub ok 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: audit 2026-03-10T10:17:05.483556+0000 mon.a (mon.0) 1271 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: audit 2026-03-10T10:17:05.483556+0000 mon.a (mon.0) 1271 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedEC_vm04-59531-15", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:05.486492+0000 mon.a (mon.0) 1272 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:05.486492+0000 mon.a (mon.0) 1272 : cluster [DBG] osdmap e110: 8 total, 8 up, 8 in 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:05.992668+0000 mgr.y (mgr.24422) 143 : cluster [DBG] pgmap v120: 428 pgs: 8 unknown, 20 creating+peering, 400 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 12 MiB/s wr, 4 op/s 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:05.992668+0000 mgr.y (mgr.24422) 143 : cluster [DBG] pgmap v120: 428 pgs: 8 unknown, 20 creating+peering, 400 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 12 MiB/s wr, 4 op/s 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: audit 2026-03-10T10:17:06.190307+0000 mon.c (mon.2) 143 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: audit 2026-03-10T10:17:06.190307+0000 mon.c (mon.2) 143 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:06.190747+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:06.190747+0000 mon.a (mon.0) 1273 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:06.229534+0000 mon.a (mon.0) 1274 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T10:17:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: cluster 2026-03-10T10:17:06.229534+0000 mon.a (mon.0) 1274 : cluster [DBG] osdmap e111: 8 total, 8 up, 8 in 2026-03-10T10:17:06.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: audit 2026-03-10T10:17:06.272166+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 192.168.123.104:0/1790046302' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: audit 2026-03-10T10:17:06.272166+0000 mon.c (mon.2) 144 : audit [INF] from='client.? 
192.168.123.104:0/1790046302' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: audit 2026-03-10T10:17:06.274231+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: audit 2026-03-10T10:17:06.274231+0000 mon.a (mon.0) 1275 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: audit 2026-03-10T10:17:06.274458+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:06.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:06 vm04 bash[28289]: audit 2026-03-10T10:17:06.274458+0000 mon.a (mon.0) 1276 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:07 vm07 bash[23367]: audit 2026-03-10T10:17:07.190981+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:07 vm07 bash[23367]: audit 2026-03-10T10:17:07.190981+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:07 vm07 bash[23367]: audit 2026-03-10T10:17:07.229461+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:07 vm07 bash[23367]: audit 2026-03-10T10:17:07.229461+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:07 vm07 bash[23367]: audit 2026-03-10T10:17:07.229494+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:07 vm07 bash[23367]: audit 2026-03-10T10:17:07.229494+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:07 vm07 bash[23367]: cluster 2026-03-10T10:17:07.232215+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T10:17:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:07 vm07 bash[23367]: cluster 2026-03-10T10:17:07.232215+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T10:17:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:07 vm04 bash[20742]: audit 2026-03-10T10:17:07.190981+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:07 vm04 bash[20742]: audit 2026-03-10T10:17:07.190981+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:07 vm04 bash[20742]: audit 2026-03-10T10:17:07.229461+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:07 vm04 bash[20742]: audit 2026-03-10T10:17:07.229461+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:07 vm04 bash[20742]: audit 2026-03-10T10:17:07.229494+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:07 vm04 bash[20742]: audit 2026-03-10T10:17:07.229494+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:07 vm04 bash[20742]: cluster 2026-03-10T10:17:07.232215+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T10:17:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:07 vm04 bash[20742]: cluster 2026-03-10T10:17:07.232215+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in 2026-03-10T10:17:07.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:07 vm04 bash[28289]: audit 2026-03-10T10:17:07.190981+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:07.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:07 vm04 bash[28289]: audit 2026-03-10T10:17:07.190981+0000 mon.c (mon.2) 145 : audit [DBG] from='client.? 
2026-03-10T10:17:07.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:07 vm04 bash[28289]: audit 2026-03-10T10:17:07.229461+0000 mon.a (mon.0) 1277 : audit [INF] from='client.? 192.168.123.104:0/908324504' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsync_vm04-59252-16","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:07.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:07 vm04 bash[28289]: audit 2026-03-10T10:17:07.229494+0000 mon.a (mon.0) 1278 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReturnValuePP_vm04-59259-13","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:07.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:07 vm04 bash[28289]: cluster 2026-03-10T10:17:07.232215+0000 mon.a (mon.0) 1279 : cluster [DBG] osdmap e112: 8 total, 8 up, 8 in
2026-03-10T10:17:08.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:17:08 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:17:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:08 vm04 bash[20742]: audit 2026-03-10T10:17:07.872002+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:17:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:08 vm04 bash[20742]: cluster 2026-03-10T10:17:07.993059+0000 mgr.y (mgr.24422) 144 : cluster [DBG] pgmap v123: 524 pgs: 104 unknown, 20 creating+peering, 400 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:17:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:08 vm04 bash[20742]: audit 2026-03-10T10:17:08.192429+0000 mon.c (mon.2) 146 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:08 vm04 bash[20742]: audit 2026-03-10T10:17:08.227877+0000 mgr.y (mgr.24422) 145 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:17:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:08 vm04 bash[20742]: audit 2026-03-10T10:17:08.284105+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:17:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:08 vm04 bash[20742]: cluster 2026-03-10T10:17:08.295892+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in
2026-03-10T10:17:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:08 vm04 bash[20742]: audit 2026-03-10T10:17:08.298931+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-13"}]: dispatch
2026-03-10T10:17:08.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:08 vm04 bash[28289]: audit 2026-03-10T10:17:07.872002+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:17:08.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:08 vm04 bash[28289]: cluster 2026-03-10T10:17:07.993059+0000 mgr.y (mgr.24422) 144 : cluster [DBG] pgmap v123: 524 pgs: 104 unknown, 20 creating+peering, 400 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:17:08.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:08 vm04 bash[28289]: audit 2026-03-10T10:17:08.192429+0000 mon.c (mon.2) 146 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:08.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:08 vm04 bash[28289]: audit 2026-03-10T10:17:08.227877+0000 mgr.y (mgr.24422) 145 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:17:08.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:08 vm04 bash[28289]: audit 2026-03-10T10:17:08.284105+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:17:08.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:08 vm04 bash[28289]: cluster 2026-03-10T10:17:08.295892+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in
2026-03-10T10:17:08.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:08 vm04 bash[28289]: audit 2026-03-10T10:17:08.298931+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-13"}]: dispatch
2026-03-10T10:17:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:08 vm07 bash[23367]: audit 2026-03-10T10:17:07.872002+0000 mon.a (mon.0) 1280 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:17:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:08 vm07 bash[23367]: cluster 2026-03-10T10:17:07.993059+0000 mgr.y (mgr.24422) 144 : cluster [DBG] pgmap v123: 524 pgs: 104 unknown, 20 creating+peering, 400 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:17:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:08 vm07 bash[23367]: audit 2026-03-10T10:17:08.192429+0000 mon.c (mon.2) 146 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:08 vm07 bash[23367]: audit 2026-03-10T10:17:08.227877+0000 mgr.y (mgr.24422) 145 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:17:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:08 vm07 bash[23367]: audit 2026-03-10T10:17:08.284105+0000 mon.a (mon.0) 1281 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:17:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:08 vm07 bash[23367]: cluster 2026-03-10T10:17:08.295892+0000 mon.a (mon.0) 1282 : cluster [DBG] osdmap e113: 8 total, 8 up, 8 in
2026-03-10T10:17:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:08 vm07 bash[23367]: audit 2026-03-10T10:17:08.298931+0000 mon.a (mon.0) 1283 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-13"}]: dispatch
2026-03-10T10:17:09.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:09 vm04 bash[20742]: audit 2026-03-10T10:17:09.193636+0000 mon.c (mon.2) 147 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:09 vm04 bash[20742]: audit 2026-03-10T10:17:09.288916+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-13"}]': finished
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:09 vm04 bash[20742]: cluster 2026-03-10T10:17:09.295139+0000 mon.a (mon.0) 1285 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:09 vm04 bash[20742]: audit 2026-03-10T10:17:09.303555+0000 mon.a (mon.0) 1286 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-13", "mode": "writeback"}]: dispatch
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:09 vm04 bash[20742]: audit 2026-03-10T10:17:09.312424+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? 192.168.123.104:0/2668920143' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm04-59259-14","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:09 vm04 bash[20742]: audit 2026-03-10T10:17:09.353338+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.104:0/275123545' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:09 vm04 bash[20742]: audit 2026-03-10T10:17:09.354577+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:09 vm04 bash[28289]: audit 2026-03-10T10:17:09.193636+0000 mon.c (mon.2) 147 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:09 vm04 bash[28289]: audit 2026-03-10T10:17:09.288916+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-13"}]': finished
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:09 vm04 bash[28289]: cluster 2026-03-10T10:17:09.295139+0000 mon.a (mon.0) 1285 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:09 vm04 bash[28289]: audit 2026-03-10T10:17:09.303555+0000 mon.a (mon.0) 1286 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-13", "mode": "writeback"}]: dispatch
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:09 vm04 bash[28289]: audit 2026-03-10T10:17:09.312424+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? 192.168.123.104:0/2668920143' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm04-59259-14","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:09 vm04 bash[28289]: audit 2026-03-10T10:17:09.353338+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.104:0/275123545' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:09.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:09 vm04 bash[28289]: audit 2026-03-10T10:17:09.354577+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:09 vm07 bash[23367]: audit 2026-03-10T10:17:09.193636+0000 mon.c (mon.2) 147 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:09 vm07 bash[23367]: audit 2026-03-10T10:17:09.288916+0000 mon.a (mon.0) 1284 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-13"}]': finished
2026-03-10T10:17:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:09 vm07 bash[23367]: cluster 2026-03-10T10:17:09.295139+0000 mon.a (mon.0) 1285 : cluster [DBG] osdmap e114: 8 total, 8 up, 8 in
2026-03-10T10:17:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:09 vm07 bash[23367]: audit 2026-03-10T10:17:09.303555+0000 mon.a (mon.0) 1286 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-13", "mode": "writeback"}]: dispatch
2026-03-10T10:17:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:09 vm07 bash[23367]: audit 2026-03-10T10:17:09.312424+0000 mon.a (mon.0) 1287 : audit [INF] from='client.? 192.168.123.104:0/2668920143' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushPP_vm04-59259-14","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:09 vm07 bash[23367]: audit 2026-03-10T10:17:09.353338+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.104:0/275123545' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:09 vm07 bash[23367]: audit 2026-03-10T10:17:09.354577+0000 mon.a (mon.0) 1288 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: cluster 2026-03-10T10:17:09.993487+0000 mgr.y (mgr.24422) 146 : cluster [DBG] pgmap v126: 524 pgs: 64 unknown, 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 382 KiB/s wr, 124 op/s 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: cluster 2026-03-10T10:17:09.993487+0000 mgr.y (mgr.24422) 146 : cluster [DBG] pgmap v126: 524 pgs: 64 unknown, 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 382 KiB/s wr, 124 op/s 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: audit 2026-03-10T10:17:10.195162+0000 mon.c (mon.2) 149 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: audit 2026-03-10T10:17:10.195162+0000 mon.c (mon.2) 149 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: cluster 2026-03-10T10:17:10.289046+0000 mon.a (mon.0) 1289 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: cluster 2026-03-10T10:17:10.289046+0000 mon.a (mon.0) 1289 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: audit 2026-03-10T10:17:10.294238+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-13", "mode": "writeback"}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: audit 2026-03-10T10:17:10.294238+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-13", "mode": "writeback"}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: audit 2026-03-10T10:17:10.294261+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.104:0/2668920143' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm04-59259-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: audit 2026-03-10T10:17:10.294261+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.104:0/2668920143' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm04-59259-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: audit 2026-03-10T10:17:10.294276+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: audit 2026-03-10T10:17:10.294276+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: cluster 2026-03-10T10:17:10.307680+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:10 vm04 bash[20742]: cluster 2026-03-10T10:17:10.307680+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: cluster 2026-03-10T10:17:09.993487+0000 mgr.y (mgr.24422) 146 : cluster [DBG] pgmap v126: 524 pgs: 64 unknown, 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 382 KiB/s wr, 124 op/s 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: cluster 2026-03-10T10:17:09.993487+0000 mgr.y (mgr.24422) 146 : cluster [DBG] pgmap v126: 524 pgs: 64 unknown, 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 382 KiB/s wr, 124 op/s 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: audit 2026-03-10T10:17:10.195162+0000 mon.c (mon.2) 149 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: audit 2026-03-10T10:17:10.195162+0000 mon.c (mon.2) 149 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: cluster 2026-03-10T10:17:10.289046+0000 mon.a (mon.0) 1289 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: cluster 2026-03-10T10:17:10.289046+0000 mon.a (mon.0) 1289 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: audit 2026-03-10T10:17:10.294238+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-13", "mode": "writeback"}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: audit 2026-03-10T10:17:10.294238+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-13", "mode": "writeback"}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: audit 2026-03-10T10:17:10.294261+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.104:0/2668920143' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm04-59259-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: audit 2026-03-10T10:17:10.294261+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.104:0/2668920143' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm04-59259-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: audit 2026-03-10T10:17:10.294276+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: audit 2026-03-10T10:17:10.294276+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: cluster 2026-03-10T10:17:10.307680+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T10:17:10.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:10 vm04 bash[28289]: cluster 2026-03-10T10:17:10.307680+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: cluster 2026-03-10T10:17:09.993487+0000 mgr.y (mgr.24422) 146 : cluster [DBG] pgmap v126: 524 pgs: 64 unknown, 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 382 KiB/s wr, 124 op/s 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: cluster 2026-03-10T10:17:09.993487+0000 mgr.y (mgr.24422) 146 : cluster [DBG] pgmap v126: 524 pgs: 64 unknown, 1 active+clean+snaptrim, 459 active+clean; 217 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 382 KiB/s wr, 124 op/s 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: audit 2026-03-10T10:17:10.195162+0000 mon.c (mon.2) 149 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: audit 2026-03-10T10:17:10.195162+0000 mon.c (mon.2) 149 : audit [DBG] from='client.? 
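Audit entries 1280-1290 above record the standard cache-tier bring-up against the test pools: osd tier add (with the force_nonempty flag), then osd tier set-overlay, then osd tier cache-mode writeback. The CACHE_POOL_NO_HIT_SET warning that mon.a raises immediately afterwards is the expected consequence of enabling writeback mode before any hit_set parameters exist on the cache pool. Continuing the earlier sketch with the same hypothetical mon_command helper, the pool names copied from the log:

base = 'test-rados-api-vm04-59491-6'
cache = 'test-rados-api-vm04-59491-13'

# "force_nonempty" carries the literal flag string, exactly as it appears
# in the cmd= payload of audit entries 1280/1281.
mon_command(prefix='osd tier add', pool=base, tierpool=cache,
            force_nonempty='--force-nonempty')
mon_command(prefix='osd tier set-overlay', pool=base, overlaypool=cache)
mon_command(prefix='osd tier cache-mode', pool=cache, mode='writeback')
# No hit_set_type/hit_set_count has been configured on the cache pool yet,
# which is what trips CACHE_POOL_NO_HIT_SET once writeback mode lands; the
# suite's log-ignorelist whitelists this check for exactly that reason.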
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: cluster 2026-03-10T10:17:10.289046+0000 mon.a (mon.0) 1289 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: cluster 2026-03-10T10:17:10.289046+0000 mon.a (mon.0) 1289 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: audit 2026-03-10T10:17:10.294238+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-13", "mode": "writeback"}]': finished 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: audit 2026-03-10T10:17:10.294238+0000 mon.a (mon.0) 1290 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-13", "mode": "writeback"}]': finished 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: audit 2026-03-10T10:17:10.294261+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.104:0/2668920143' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm04-59259-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: audit 2026-03-10T10:17:10.294261+0000 mon.a (mon.0) 1291 : audit [INF] from='client.? 192.168.123.104:0/2668920143' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushPP_vm04-59259-14","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: audit 2026-03-10T10:17:10.294276+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: audit 2026-03-10T10:17:10.294276+0000 mon.a (mon.0) 1292 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFull_vm04-59252-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: cluster 2026-03-10T10:17:10.307680+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T10:17:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:10 vm07 bash[23367]: cluster 2026-03-10T10:17:10.307680+0000 mon.a (mon.0) 1293 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in 2026-03-10T10:17:11.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.100192+0000 mon.a (mon.0) 1294 : audit [INF] from='client.? 
2026-03-10T10:17:11.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.100358+0000 mgr.y (mgr.24422) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.0"}]: dispatch
2026-03-10T10:17:11.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.100922+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.1"}]: dispatch
2026-03-10T10:17:11.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.101034+0000 mgr.y (mgr.24422) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.1"}]: dispatch
2026-03-10T10:17:11.956 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.101428+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.2"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.101539+0000 mgr.y (mgr.24422) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.2"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.101972+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.3"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.102092+0000 mgr.y (mgr.24422) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.3"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.102906+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.4"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.103031+0000 mgr.y (mgr.24422) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.4"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.103550+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.5"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.103660+0000 mgr.y (mgr.24422) 152 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.5"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.104375+0000 mon.a (mon.0) 1300 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.6"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.104497+0000 mgr.y (mgr.24422) 153 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.6"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.106502+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.7"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.106620+0000 mgr.y (mgr.24422) 154 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.7"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.107054+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.8"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.107177+0000 mgr.y (mgr.24422) 155 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.8"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.107807+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.9"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.107919+0000 mgr.y (mgr.24422) 156 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.9"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: cluster 2026-03-10T10:17:11.192806+0000 mon.a (mon.0) 1304 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: audit 2026-03-10T10:17:11.196361+0000 mon.c (mon.2) 150 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:11 vm04 bash[20742]: cluster 2026-03-10T10:17:11.321833+0000 mon.a (mon.0) 1305 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.100192+0000 mon.a (mon.0) 1294 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.0"}]: dispatch
2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.100358+0000 mgr.y (mgr.24422) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.0"}]: dispatch
cmd=[{"prefix": "pg scrub", "pgid": "166.0"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.100922+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.1"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.100922+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.1"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.101034+0000 mgr.y (mgr.24422) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.1"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.101034+0000 mgr.y (mgr.24422) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.1"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.101428+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.2"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.101428+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.2"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.101539+0000 mgr.y (mgr.24422) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.2"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.101539+0000 mgr.y (mgr.24422) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.2"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.101972+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.3"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.101972+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.3"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.102092+0000 mgr.y (mgr.24422) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.3"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.102092+0000 mgr.y (mgr.24422) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.3"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.102906+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.4"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.102906+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.4"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.103031+0000 mgr.y (mgr.24422) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.4"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.103031+0000 mgr.y (mgr.24422) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.4"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.103550+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.5"}]: dispatch 2026-03-10T10:17:11.957 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.103550+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.5"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.103660+0000 mgr.y (mgr.24422) 152 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.5"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.103660+0000 mgr.y (mgr.24422) 152 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.5"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.104375+0000 mon.a (mon.0) 1300 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.6"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.104375+0000 mon.a (mon.0) 1300 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.6"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.104497+0000 mgr.y (mgr.24422) 153 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.6"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.104497+0000 mgr.y (mgr.24422) 153 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.6"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.106502+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.7"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.106502+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.7"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.106620+0000 mgr.y (mgr.24422) 154 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.7"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.106620+0000 mgr.y (mgr.24422) 154 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.7"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.107054+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.8"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.107054+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.8"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.107177+0000 mgr.y (mgr.24422) 155 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.8"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.107177+0000 mgr.y (mgr.24422) 155 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.8"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.107807+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.9"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.107807+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.9"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.107919+0000 mgr.y (mgr.24422) 156 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.9"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.107919+0000 mgr.y (mgr.24422) 156 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "166.9"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: cluster 2026-03-10T10:17:11.192806+0000 mon.a (mon.0) 1304 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: cluster 2026-03-10T10:17:11.192806+0000 mon.a (mon.0) 1304 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.196361+0000 mon.c (mon.2) 150 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: audit 2026-03-10T10:17:11.196361+0000 mon.c (mon.2) 150 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: cluster 2026-03-10T10:17:11.321833+0000 mon.a (mon.0) 1305 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T10:17:11.958 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:11 vm04 bash[28289]: cluster 2026-03-10T10:17:11.321833+0000 mon.a (mon.0) 1305 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T10:17:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.100192+0000 mon.a (mon.0) 1294 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.0"}]: dispatch 2026-03-10T10:17:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.100192+0000 mon.a (mon.0) 1294 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.0"}]: dispatch 2026-03-10T10:17:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.100358+0000 mgr.y (mgr.24422) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.0"}]: dispatch 2026-03-10T10:17:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.100358+0000 mgr.y (mgr.24422) 147 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.0"}]: dispatch 2026-03-10T10:17:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.100922+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.1"}]: dispatch 2026-03-10T10:17:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.100922+0000 mon.a (mon.0) 1295 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.1"}]: dispatch 2026-03-10T10:17:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.101034+0000 mgr.y (mgr.24422) 148 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "166.1"}]: dispatch 2026-03-10T10:17:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.101034+0000 mgr.y (mgr.24422) 148 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.1"}]: dispatch 2026-03-10T10:17:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.101428+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.2"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.101428+0000 mon.a (mon.0) 1296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.2"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.101539+0000 mgr.y (mgr.24422) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.2"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.101539+0000 mgr.y (mgr.24422) 149 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.2"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.101972+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.3"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.101972+0000 mon.a (mon.0) 1297 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.3"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.102092+0000 mgr.y (mgr.24422) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.3"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.102092+0000 mgr.y (mgr.24422) 150 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.3"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.102906+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.4"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.102906+0000 mon.a (mon.0) 1298 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.4"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.103031+0000 mgr.y (mgr.24422) 151 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.4"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.103031+0000 mgr.y (mgr.24422) 151 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg scrub", "pgid": "166.4"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.103550+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.5"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.103550+0000 mon.a (mon.0) 1299 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.5"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.103660+0000 mgr.y (mgr.24422) 152 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.5"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.103660+0000 mgr.y (mgr.24422) 152 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.5"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.104375+0000 mon.a (mon.0) 1300 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.6"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.104375+0000 mon.a (mon.0) 1300 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.6"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.104497+0000 mgr.y (mgr.24422) 153 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.6"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.104497+0000 mgr.y (mgr.24422) 153 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.6"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.106502+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.7"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.106502+0000 mon.a (mon.0) 1301 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.7"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.106620+0000 mgr.y (mgr.24422) 154 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.7"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.106620+0000 mgr.y (mgr.24422) 154 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.7"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.107054+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.8"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.107054+0000 mon.a (mon.0) 1302 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.8"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.107177+0000 mgr.y (mgr.24422) 155 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.8"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.107177+0000 mgr.y (mgr.24422) 155 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.8"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.107807+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.9"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.107807+0000 mon.a (mon.0) 1303 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "166.9"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.107919+0000 mgr.y (mgr.24422) 156 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.9"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.107919+0000 mgr.y (mgr.24422) 156 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "166.9"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: cluster 2026-03-10T10:17:11.192806+0000 mon.a (mon.0) 1304 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: cluster 2026-03-10T10:17:11.192806+0000 mon.a (mon.0) 1304 : cluster [WRN] Health check update: 7 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.196361+0000 mon.c (mon.2) 150 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: audit 2026-03-10T10:17:11.196361+0000 mon.c (mon.2) 150 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: cluster 2026-03-10T10:17:11.321833+0000 mon.a (mon.0) 1305 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T10:17:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:11 vm07 bash[23367]: cluster 2026-03-10T10:17:11.321833+0000 mon.a (mon.0) 1305 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.175788+0000 osd.4 (osd.4) 7 : cluster [DBG] 166.7 scrub starts 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.175788+0000 osd.4 (osd.4) 7 : cluster [DBG] 166.7 scrub starts 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.177277+0000 osd.4 (osd.4) 8 : cluster [DBG] 166.7 scrub ok 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.177277+0000 osd.4 (osd.4) 8 : cluster [DBG] 166.7 scrub ok 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.532767+0000 osd.6 (osd.6) 15 : cluster [DBG] 166.5 deep-scrub starts 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.532767+0000 osd.6 (osd.6) 15 : cluster [DBG] 166.5 deep-scrub starts 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.553055+0000 osd.6 (osd.6) 16 : cluster [DBG] 166.5 deep-scrub ok 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.553055+0000 osd.6 (osd.6) 16 : cluster [DBG] 166.5 deep-scrub ok 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.704869+0000 osd.1 (osd.1) 11 : cluster [DBG] 166.6 scrub starts 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.704869+0000 osd.1 (osd.1) 11 : cluster [DBG] 166.6 scrub starts 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.706419+0000 osd.1 (osd.1) 12 : cluster [DBG] 166.6 scrub ok 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.706419+0000 osd.1 (osd.1) 12 : cluster [DBG] 166.6 scrub ok 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.766282+0000 osd.7 (osd.7) 3 : cluster [DBG] 166.1 scrub starts 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.766282+0000 osd.7 (osd.7) 3 : cluster [DBG] 166.1 scrub starts 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.767305+0000 osd.7 (osd.7) 4 : cluster [DBG] 166.1 scrub ok 2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.767305+0000 osd.7 (osd.7) 4 : cluster [DBG] 166.1 scrub ok 2026-03-10T10:17:12.954 
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:11.993882+0000 mgr.y (mgr.24422) 157 : cluster [DBG] pgmap v129: 428 pgs: 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 126 KiB/s wr, 124 op/s
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: audit 2026-03-10T10:17:12.197402+0000 mon.c (mon.2) 151 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: cluster 2026-03-10T10:17:12.326293+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: audit 2026-03-10T10:17:12.332878+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? 192.168.123.104:0/2864615561' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm04-59252-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: audit 2026-03-10T10:17:12.355940+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.104:0/821591932' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:12 vm04 bash[20742]: audit 2026-03-10T10:17:12.365766+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: cluster 2026-03-10T10:17:11.175788+0000 osd.4 (osd.4) 7 : cluster [DBG] 166.7 scrub starts
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: cluster 2026-03-10T10:17:11.177277+0000 osd.4 (osd.4) 8 : cluster [DBG] 166.7 scrub ok
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: cluster 2026-03-10T10:17:11.532767+0000 osd.6 (osd.6) 15 : cluster [DBG] 166.5 deep-scrub starts
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: cluster 2026-03-10T10:17:11.553055+0000 osd.6 (osd.6) 16 : cluster [DBG] 166.5 deep-scrub ok
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: cluster 2026-03-10T10:17:11.704869+0000 osd.1 (osd.1) 11 : cluster [DBG] 166.6 scrub starts
2026-03-10T10:17:12.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: cluster 2026-03-10T10:17:11.706419+0000 osd.1 (osd.1) 12 : cluster [DBG] 166.6 scrub ok
2026-03-10T10:17:12.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: cluster 2026-03-10T10:17:11.766282+0000 osd.7 (osd.7) 3 : cluster [DBG] 166.1 scrub starts
2026-03-10T10:17:12.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: cluster 2026-03-10T10:17:11.767305+0000 osd.7 (osd.7) 4 : cluster [DBG] 166.1 scrub ok
2026-03-10T10:17:12.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: cluster 2026-03-10T10:17:11.993882+0000 mgr.y (mgr.24422) 157 : cluster [DBG] pgmap v129: 428 pgs: 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 126 KiB/s wr, 124 op/s
2026-03-10T10:17:12.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: audit 2026-03-10T10:17:12.197402+0000 mon.c (mon.2) 151 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:12.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: cluster 2026-03-10T10:17:12.326293+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in
2026-03-10T10:17:12.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: audit 2026-03-10T10:17:12.332878+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? 192.168.123.104:0/2864615561' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm04-59252-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:12.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: audit 2026-03-10T10:17:12.355940+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.104:0/821591932' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:12.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:12 vm04 bash[28289]: audit 2026-03-10T10:17:12.365766+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:13.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:11.175788+0000 osd.4 (osd.4) 7 : cluster [DBG] 166.7 scrub starts
2026-03-10T10:17:13.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:11.177277+0000 osd.4 (osd.4) 8 : cluster [DBG] 166.7 scrub ok
2026-03-10T10:17:13.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:11.532767+0000 osd.6 (osd.6) 15 : cluster [DBG] 166.5 deep-scrub starts
2026-03-10T10:17:13.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:11.553055+0000 osd.6 (osd.6) 16 : cluster [DBG] 166.5 deep-scrub ok
2026-03-10T10:17:13.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:11.704869+0000 osd.1 (osd.1) 11 : cluster [DBG] 166.6 scrub starts
2026-03-10T10:17:13.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:11.706419+0000 osd.1 (osd.1) 12 : cluster [DBG] 166.6 scrub ok
2026-03-10T10:17:13.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:11.766282+0000 osd.7 (osd.7) 3 : cluster [DBG] 166.1 scrub starts
2026-03-10T10:17:13.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:11.767305+0000 osd.7 (osd.7) 4 : cluster [DBG] 166.1 scrub ok
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:11.993882+0000 mgr.y (mgr.24422) 157 : cluster [DBG] pgmap v129: 428 pgs: 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 126 KiB/s wr, 124 op/s 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:11.993882+0000 mgr.y (mgr.24422) 157 : cluster [DBG] pgmap v129: 428 pgs: 1 active+clean+snaptrim, 427 active+clean; 216 MiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 126 KiB/s wr, 124 op/s 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: audit 2026-03-10T10:17:12.197402+0000 mon.c (mon.2) 151 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: audit 2026-03-10T10:17:12.197402+0000 mon.c (mon.2) 151 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:12.326293+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: cluster 2026-03-10T10:17:12.326293+0000 mon.a (mon.0) 1306 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: audit 2026-03-10T10:17:12.332878+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? 192.168.123.104:0/2864615561' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm04-59252-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: audit 2026-03-10T10:17:12.332878+0000 mon.a (mon.0) 1307 : audit [INF] from='client.? 192.168.123.104:0/2864615561' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm04-59252-18","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: audit 2026-03-10T10:17:12.355940+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.104:0/821591932' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: audit 2026-03-10T10:17:12.355940+0000 mon.b (mon.1) 128 : audit [INF] from='client.? 192.168.123.104:0/821591932' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: audit 2026-03-10T10:17:12.365766+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:13.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:12 vm07 bash[23367]: audit 2026-03-10T10:17:12.365766+0000 mon.a (mon.0) 1308 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:13.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:17:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:17:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:17:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:11.923859+0000 osd.2 (osd.2) 15 : cluster [DBG] 166.4 scrub starts 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:11.923859+0000 osd.2 (osd.2) 15 : cluster [DBG] 166.4 scrub starts 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:11.925382+0000 osd.2 (osd.2) 16 : cluster [DBG] 166.4 scrub ok 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:11.925382+0000 osd.2 (osd.2) 16 : cluster [DBG] 166.4 scrub ok 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:11.930816+0000 osd.3 (osd.3) 3 : cluster [DBG] 166.0 scrub starts 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:11.930816+0000 osd.3 (osd.3) 3 : cluster [DBG] 166.0 scrub starts 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:11.932694+0000 osd.3 (osd.3) 4 : cluster [DBG] 166.0 scrub ok 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:11.932694+0000 osd.3 (osd.3) 4 : cluster [DBG] 166.0 scrub ok 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:12.150415+0000 osd.4 (osd.4) 9 : cluster [DBG] 166.3 scrub starts 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:12.150415+0000 osd.4 (osd.4) 9 : cluster [DBG] 166.3 scrub starts 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:12.152455+0000 osd.4 (osd.4) 10 : cluster [DBG] 166.3 scrub ok 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:12.152455+0000 osd.4 (osd.4) 10 : cluster [DBG] 166.3 scrub ok 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:12.544060+0000 osd.6 (osd.6) 17 : cluster [DBG] 166.9 scrub starts 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:12.544060+0000 osd.6 (osd.6) 17 : cluster [DBG] 166.9 scrub starts 2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:12.545589+0000 osd.6 (osd.6) 18 : cluster [DBG] 166.9 scrub ok 2026-03-10T10:17:13.954 
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: audit 2026-03-10T10:17:12.588857+0000 mon.a (mon.0) 1309 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: audit 2026-03-10T10:17:13.198666+0000 mon.c (mon.2) 152 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: audit 2026-03-10T10:17:13.327098+0000 mon.a (mon.0) 1310 : audit [INF] from='client.? 192.168.123.104:0/2864615561' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm04-59252-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: audit 2026-03-10T10:17:13.327128+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:13 vm04 bash[20742]: cluster 2026-03-10T10:17:13.331564+0000 mon.a (mon.0) 1312 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: cluster 2026-03-10T10:17:11.923859+0000 osd.2 (osd.2) 15 : cluster [DBG] 166.4 scrub starts
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: cluster 2026-03-10T10:17:11.925382+0000 osd.2 (osd.2) 16 : cluster [DBG] 166.4 scrub ok
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: cluster 2026-03-10T10:17:11.930816+0000 osd.3 (osd.3) 3 : cluster [DBG] 166.0 scrub starts
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: cluster 2026-03-10T10:17:11.932694+0000 osd.3 (osd.3) 4 : cluster [DBG] 166.0 scrub ok
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: cluster 2026-03-10T10:17:12.150415+0000 osd.4 (osd.4) 9 : cluster [DBG] 166.3 scrub starts
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: cluster 2026-03-10T10:17:12.152455+0000 osd.4 (osd.4) 10 : cluster [DBG] 166.3 scrub ok
2026-03-10T10:17:13.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: cluster 2026-03-10T10:17:12.544060+0000 osd.6 (osd.6) 17 : cluster [DBG] 166.9 scrub starts
2026-03-10T10:17:13.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: cluster 2026-03-10T10:17:12.545589+0000 osd.6 (osd.6) 18 : cluster [DBG] 166.9 scrub ok
2026-03-10T10:17:13.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: audit 2026-03-10T10:17:12.588857+0000 mon.a (mon.0) 1309 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:13.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: audit 2026-03-10T10:17:13.198666+0000 mon.c (mon.2) 152 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:13.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: audit 2026-03-10T10:17:13.327098+0000 mon.a (mon.0) 1310 : audit [INF] from='client.? 192.168.123.104:0/2864615561' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm04-59252-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:13.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: audit 2026-03-10T10:17:13.327128+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:13.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:13 vm04 bash[28289]: cluster 2026-03-10T10:17:13.331564+0000 mon.a (mon.0) 1312 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in
2026-03-10T10:17:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: cluster 2026-03-10T10:17:11.923859+0000 osd.2 (osd.2) 15 : cluster [DBG] 166.4 scrub starts
2026-03-10T10:17:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: cluster 2026-03-10T10:17:11.925382+0000 osd.2 (osd.2) 16 : cluster [DBG] 166.4 scrub ok
2026-03-10T10:17:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: cluster 2026-03-10T10:17:11.930816+0000 osd.3 (osd.3) 3 : cluster [DBG] 166.0 scrub starts
2026-03-10T10:17:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: cluster 2026-03-10T10:17:11.932694+0000 osd.3 (osd.3) 4 : cluster [DBG] 166.0 scrub ok
2026-03-10T10:17:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: cluster 2026-03-10T10:17:12.150415+0000 osd.4 (osd.4) 9 : cluster [DBG] 166.3 scrub starts
2026-03-10T10:17:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: cluster 2026-03-10T10:17:12.152455+0000 osd.4 (osd.4) 10 : cluster [DBG] 166.3 scrub ok
2026-03-10T10:17:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: cluster 2026-03-10T10:17:12.544060+0000 osd.6 (osd.6) 17 : cluster [DBG] 166.9 scrub starts
2026-03-10T10:17:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: cluster 2026-03-10T10:17:12.545589+0000 osd.6 (osd.6) 18 : cluster [DBG] 166.9 scrub ok
2026-03-10T10:17:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: audit 2026-03-10T10:17:12.588857+0000 mon.a (mon.0) 1309 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: audit 2026-03-10T10:17:13.198666+0000 mon.c (mon.2) 152 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: audit 2026-03-10T10:17:13.327098+0000 mon.a (mon.0) 1310 : audit [INF] from='client.? 192.168.123.104:0/2864615561' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSame_vm04-59252-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: audit 2026-03-10T10:17:13.327128+0000 mon.a (mon.0) 1311 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "FlushAsyncPP_vm04-59259-15","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:13 vm07 bash[23367]: cluster 2026-03-10T10:17:13.331564+0000 mon.a (mon.0) 1312 : cluster [DBG] osdmap e118: 8 total, 8 up, 8 in
2026-03-10T10:17:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:14 vm04 bash[20742]: cluster 2026-03-10T10:17:12.901306+0000 osd.2 (osd.2) 17 : cluster [DBG] 166.2 scrub starts
2026-03-10T10:17:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:14 vm04 bash[20742]: cluster 2026-03-10T10:17:12.902820+0000 osd.2 (osd.2) 18 : cluster [DBG] 166.2 scrub ok
2026-03-10T10:17:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:14 vm04 bash[20742]: cluster 2026-03-10T10:17:13.994995+0000 mgr.y (mgr.24422) 158 : cluster [DBG] pgmap v132: 524 pgs: 27 creating+peering, 24 creating+activating, 1 active+clean+snaptrim, 472 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 59 KiB/s rd, 0 B/s wr, 77 op/s
2026-03-10T10:17:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:14 vm04 bash[20742]: audit 2026-03-10T10:17:14.200252+0000 mon.c (mon.2) 153 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:14 vm04 bash[20742]: cluster 2026-03-10T10:17:14.332984+0000 mon.a (mon.0) 1313 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in
2026-03-10T10:17:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:14 vm04 bash[20742]: audit 2026-03-10T10:17:14.339355+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:14 vm04 bash[20742]: audit 2026-03-10T10:17:14.344117+0000 mon.a (mon.0) 1314 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:14 vm04 bash[28289]: cluster 2026-03-10T10:17:12.901306+0000 osd.2 (osd.2) 17 : cluster [DBG] 166.2 scrub starts
2026-03-10T10:17:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:14 vm04 bash[28289]: cluster 2026-03-10T10:17:12.902820+0000 osd.2 (osd.2) 18 : cluster [DBG] 166.2 scrub ok
2026-03-10T10:17:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:14 vm04 bash[28289]: cluster 2026-03-10T10:17:13.994995+0000 mgr.y (mgr.24422) 158 : cluster [DBG] pgmap v132: 524 pgs: 27 creating+peering, 24 creating+activating, 1 active+clean+snaptrim, 472 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 59 KiB/s rd, 0 B/s wr, 77 op/s
2026-03-10T10:17:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:14 vm04 bash[28289]: audit 2026-03-10T10:17:14.200252+0000 mon.c (mon.2) 153 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:14 vm04 bash[28289]: cluster 2026-03-10T10:17:14.332984+0000 mon.a (mon.0) 1313 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in
2026-03-10T10:17:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:14 vm04 bash[28289]: audit 2026-03-10T10:17:14.339355+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:14.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:14 vm04 bash[28289]: audit 2026-03-10T10:17:14.344117+0000 mon.a (mon.0) 1314 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:14 vm07 bash[23367]: cluster 2026-03-10T10:17:12.901306+0000 osd.2 (osd.2) 17 : cluster [DBG] 166.2 scrub starts
2026-03-10T10:17:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:14 vm07 bash[23367]: cluster 2026-03-10T10:17:12.902820+0000 osd.2 (osd.2) 18 : cluster [DBG] 166.2 scrub ok
2026-03-10T10:17:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:14 vm07 bash[23367]: cluster 2026-03-10T10:17:13.994995+0000 mgr.y (mgr.24422) 158 : cluster [DBG] pgmap v132: 524 pgs: 27 creating+peering, 24 creating+activating, 1 active+clean+snaptrim, 472 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 59 KiB/s rd, 0 B/s wr, 77 op/s
2026-03-10T10:17:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:14 vm07 bash[23367]: audit 2026-03-10T10:17:14.200252+0000 mon.c (mon.2) 153 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:14 vm07 bash[23367]: cluster 2026-03-10T10:17:14.332984+0000 mon.a (mon.0) 1313 : cluster [DBG] osdmap e119: 8 total, 8 up, 8 in
2026-03-10T10:17:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:14 vm07 bash[23367]: audit 2026-03-10T10:17:14.339355+0000 mon.c (mon.2) 154 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:14 vm07 bash[23367]: audit 2026-03-10T10:17:14.344117+0000 mon.a (mon.0) 1314 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:15 vm04 bash[20742]: cluster 2026-03-10T10:17:13.869867+0000 osd.2 (osd.2) 19 : cluster [DBG] 166.8 scrub starts
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:15 vm04 bash[20742]: cluster 2026-03-10T10:17:13.871203+0000 osd.2 (osd.2) 20 : cluster [DBG] 166.8 scrub ok
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:15 vm04 bash[20742]: audit 2026-03-10T10:17:15.202111+0000 mon.c (mon.2) 155 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:15 vm04 bash[20742]: audit 2026-03-10T10:17:15.334323+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:15 vm04 bash[20742]: cluster 2026-03-10T10:17:15.338610+0000 mon.a (mon.0) 1316 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:15 vm04 bash[20742]: audit 2026-03-10T10:17:15.341113+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? 192.168.123.104:0/575539854' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm04-59259-16","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:15 vm04 bash[20742]: audit 2026-03-10T10:17:15.341275+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 192.168.123.104:0/2351404990' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm04-59252-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:15 vm04 bash[20742]: audit 2026-03-10T10:17:15.386285+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:15 vm04 bash[20742]: audit 2026-03-10T10:17:15.386762+0000 mon.a (mon.0) 1319 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:15 vm04 bash[28289]: cluster 2026-03-10T10:17:13.869867+0000 osd.2 (osd.2) 19 : cluster [DBG] 166.8 scrub starts
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:15 vm04 bash[28289]: cluster 2026-03-10T10:17:13.871203+0000 osd.2 (osd.2) 20 : cluster [DBG] 166.8 scrub ok
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:15 vm04 bash[28289]: audit 2026-03-10T10:17:15.202111+0000 mon.c (mon.2) 155 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:15 vm04 bash[28289]: audit 2026-03-10T10:17:15.334323+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:15 vm04 bash[28289]: cluster 2026-03-10T10:17:15.338610+0000 mon.a (mon.0) 1316 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:15 vm04 bash[28289]: audit 2026-03-10T10:17:15.341113+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? 192.168.123.104:0/575539854' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm04-59259-16","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:15 vm04 bash[28289]: audit 2026-03-10T10:17:15.341275+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 192.168.123.104:0/2351404990' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm04-59252-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:15 vm04 bash[28289]: audit 2026-03-10T10:17:15.386285+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:15.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:15 vm04 bash[28289]: audit 2026-03-10T10:17:15.386762+0000 mon.a (mon.0) 1319 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:15 vm07 bash[23367]: cluster 2026-03-10T10:17:13.869867+0000 osd.2 (osd.2) 19 : cluster [DBG] 166.8 scrub starts
2026-03-10T10:17:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:15 vm07 bash[23367]: cluster 2026-03-10T10:17:13.871203+0000 osd.2 (osd.2) 20 : cluster [DBG] 166.8 scrub ok
2026-03-10T10:17:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:15 vm07 bash[23367]: audit 2026-03-10T10:17:15.202111+0000 mon.c (mon.2) 155 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:15 vm07 bash[23367]: audit 2026-03-10T10:17:15.334323+0000 mon.a (mon.0) 1315 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished
2026-03-10T10:17:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:15 vm07 bash[23367]: cluster 2026-03-10T10:17:15.338610+0000 mon.a (mon.0) 1316 : cluster [DBG] osdmap e120: 8 total, 8 up, 8 in
2026-03-10T10:17:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:15 vm07 bash[23367]: audit 2026-03-10T10:17:15.341113+0000 mon.a (mon.0) 1317 : audit [INF] from='client.? 192.168.123.104:0/575539854' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm04-59259-16","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:15 vm07 bash[23367]: audit 2026-03-10T10:17:15.341275+0000 mon.a (mon.0) 1318 : audit [INF] from='client.? 192.168.123.104:0/2351404990' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStat_vm04-59252-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:15 vm07 bash[23367]: audit 2026-03-10T10:17:15.386285+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.104:0/2570967141' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:15 vm07 bash[23367]: audit 2026-03-10T10:17:15.386762+0000 mon.a (mon.0) 1319 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]: dispatch
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: Running main() from gmock_main.cc
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [==========] Running 13 tests from 4 test suites.
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [----------] Global test environment set-up.
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapList
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapList (1846 ms)
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapRemove
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapRemove (2222 ms)
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.Rollback
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshots.Rollback (2089 ms)
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshots.SnapGetName
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshots.SnapGetName (2080 ms)
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshots (8237 ms total)
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots:
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Snap
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Snap (4200 ms)
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.Rollback
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.Rollback (3835 ms)
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManaged.FutureSnapRollback
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManaged.FutureSnapRollback (4985 ms)
2026-03-10T10:17:16.442 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [----------] 3 tests from LibRadosSnapshotsSelfManaged (13020 ms total)
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots:
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapList
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapList (2663 ms)
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapRemove
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapRemove (1995 ms)
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.Rollback
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.Rollback (2010 ms)
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsEC.SnapGetName
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshotsEC.SnapGetName (1991 ms)
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [----------] 4 tests from LibRadosSnapshotsEC (8659 ms total)
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots:
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Snap
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Snap (3823 ms)
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ RUN ] LibRadosSnapshotsSelfManagedEC.Rollback
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ OK ] LibRadosSnapshotsSelfManagedEC.Rollback (4135 ms)
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [----------] 2 tests from LibRadosSnapshotsSelfManagedEC (7958 ms total)
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots:
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [----------] Global test environment tear-down
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [==========] 13 tests from 4 test suites ran. (56080 ms total)
2026-03-10T10:17:16.443 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots: [ PASSED ] 13 tests.
2026-03-10T10:17:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:16 vm04 bash[28289]: cluster 2026-03-10T10:17:15.995472+0000 mgr.y (mgr.24422) 159 : cluster [DBG] pgmap v135: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:16 vm04 bash[28289]: cluster 2026-03-10T10:17:16.194065+0000 mon.a (mon.0) 1320 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:16 vm04 bash[28289]: audit 2026-03-10T10:17:16.203630+0000 mon.c (mon.2) 157 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:16 vm04 bash[28289]: audit 2026-03-10T10:17:16.338948+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 192.168.123.104:0/575539854' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm04-59259-16","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:16 vm04 bash[28289]: audit 2026-03-10T10:17:16.338982+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? 192.168.123.104:0/2351404990' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm04-59252-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:16 vm04 bash[28289]: audit 2026-03-10T10:17:16.338997+0000 mon.a (mon.0) 1323 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:16 vm04 bash[28289]: cluster 2026-03-10T10:17:16.345270+0000 mon.a (mon.0) 1324 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:16 vm04 bash[28289]: audit 2026-03-10T10:17:16.395293+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.104:0/233380934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:16 vm04 bash[28289]: audit 2026-03-10T10:17:16.396248+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:16 vm04 bash[20742]: cluster 2026-03-10T10:17:15.995472+0000 mgr.y (mgr.24422) 159 : cluster [DBG] pgmap v135: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:16 vm04 bash[20742]: cluster 2026-03-10T10:17:16.194065+0000 mon.a (mon.0) 1320 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:16 vm04 bash[20742]: audit 2026-03-10T10:17:16.203630+0000 mon.c (mon.2) 157 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:16 vm04 bash[20742]: audit 2026-03-10T10:17:16.338948+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 192.168.123.104:0/575539854' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm04-59259-16","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:16 vm04 bash[20742]: audit 2026-03-10T10:17:16.338982+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? 192.168.123.104:0/2351404990' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm04-59252-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:16 vm04 bash[20742]: audit 2026-03-10T10:17:16.338997+0000 mon.a (mon.0) 1323 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:16 vm04 bash[20742]: cluster 2026-03-10T10:17:16.345270+0000 mon.a (mon.0) 1324 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:16 vm04 bash[20742]: audit 2026-03-10T10:17:16.395293+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.104:0/233380934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:16 vm04 bash[20742]: audit 2026-03-10T10:17:16.396248+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: cluster 2026-03-10T10:17:15.995472+0000 mgr.y (mgr.24422) 159 : cluster [DBG] pgmap v135: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: cluster 2026-03-10T10:17:15.995472+0000 mgr.y (mgr.24422) 159 : cluster [DBG] pgmap v135: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 54 KiB/s rd, 73 op/s 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: cluster 2026-03-10T10:17:16.194065+0000 mon.a (mon.0) 1320 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: cluster 2026-03-10T10:17:16.194065+0000 mon.a (mon.0) 1320 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.203630+0000 mon.c (mon.2) 157 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.203630+0000 mon.c (mon.2) 157 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.338948+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 192.168.123.104:0/575539854' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm04-59259-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.338948+0000 mon.a (mon.0) 1321 : audit [INF] from='client.? 192.168.123.104:0/575539854' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP_vm04-59259-16","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.338982+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? 192.168.123.104:0/2351404990' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm04-59252-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.338982+0000 mon.a (mon.0) 1322 : audit [INF] from='client.? 
192.168.123.104:0/2351404990' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStat_vm04-59252-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.338997+0000 mon.a (mon.0) 1323 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.338997+0000 mon.a (mon.0) 1323 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedEC_vm04-59531-15"}]': finished 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: cluster 2026-03-10T10:17:16.345270+0000 mon.a (mon.0) 1324 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: cluster 2026-03-10T10:17:16.345270+0000 mon.a (mon.0) 1324 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in 2026-03-10T10:17:17.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.395293+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.104:0/233380934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.395293+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.104:0/233380934' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.396248+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:16 vm07 bash[23367]: audit 2026-03-10T10:17:16.396248+0000 mon.a (mon.0) 1325 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:17.427 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [==========] Running 12 tests from 4 test suites. 2026-03-10T10:17:17.427 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [----------] Global test environment set-up. 
2026-03-10T10:17:17.427 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion
2026-03-10T10:17:17.427 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMiscVersion.Version
2026-03-10T10:17:17.427 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMiscVersion.Version (0 ms)
2026-03-10T10:17:17.427 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [----------] 1 test from LibRadosMiscVersion (0 ms total)
2026-03-10T10:17:17.427 INFO:tasks.workunit.client.0.vm04.stdout: api_misc:
2026-03-10T10:17:17.427 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure
2026-03-10T10:17:17.427 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectFailure
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: unable to get monitor info from DNS SRV with service name: ceph-mon
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: 2026-03-10T10:16:20.277+0000 7f63b4f47980 -1 failed for service _ceph-mon._tcp
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: 2026-03-10T10:16:20.277+0000 7f63b4f47980 -1 monclient: get_monmap_and_config cannot identify monitors to contact
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectFailure (26 ms)
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMiscConnectFailure.ConnectTimeout
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMiscConnectFailure.ConnectTimeout (5005 ms)
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [----------] 2 tests from LibRadosMiscConnectFailure (5031 ms total)
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc:
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [----------] 1 test from LibRadosMiscPool
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMiscPool.PoolCreationRace
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: started 0x7f6394067d80
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: started 0x557073888670
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: started 2 aios
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: waiting 0x7f6394067d80
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: waiting 0x557073888670
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: done.
2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMiscPool.PoolCreationRace (5950 ms) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [----------] 1 test from LibRadosMiscPool (5950 ms total) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [----------] 8 tests from LibRadosMisc 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMisc.ClusterFSID 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMisc.ClusterFSID (0 ms) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMisc.Exec 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMisc.Exec (89 ms) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMisc.WriteSame 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMisc.WriteSame (16 ms) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMisc.CmpExt 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMisc.CmpExt (3 ms) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMisc.Applications 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMisc.Applications (5003 ms) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatOSD 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatOSD (0 ms) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMisc.MinCompatClient 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMisc.MinCompatClient (0 ms) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ RUN ] LibRadosMisc.ShutdownRace 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ OK ] LibRadosMisc.ShutdownRace (38772 ms) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [----------] 8 tests from LibRadosMisc (43883 ms total) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [----------] Global test environment tear-down 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [==========] 12 tests from 4 test suites ran. (57135 ms total) 2026-03-10T10:17:17.428 INFO:tasks.workunit.client.0.vm04.stdout: api_misc: [ PASSED ] 12 tests. 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:17 vm04 bash[28289]: audit 2026-03-10T10:17:17.205365+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:17 vm04 bash[28289]: audit 2026-03-10T10:17:17.205365+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:17 vm04 bash[28289]: audit 2026-03-10T10:17:17.342182+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:17 vm04 bash[28289]: audit 2026-03-10T10:17:17.342182+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:17 vm04 bash[28289]: cluster 2026-03-10T10:17:17.344952+0000 mon.a (mon.0) 1327 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:17 vm04 bash[28289]: cluster 2026-03-10T10:17:17.344952+0000 mon.a (mon.0) 1327 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:17 vm04 bash[20742]: audit 2026-03-10T10:17:17.205365+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:17 vm04 bash[20742]: audit 2026-03-10T10:17:17.205365+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:17 vm04 bash[20742]: audit 2026-03-10T10:17:17.342182+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:17 vm04 bash[20742]: audit 2026-03-10T10:17:17.342182+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:17 vm04 bash[20742]: cluster 2026-03-10T10:17:17.344952+0000 mon.a (mon.0) 1327 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T10:17:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:17 vm04 bash[20742]: cluster 2026-03-10T10:17:17.344952+0000 mon.a (mon.0) 1327 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T10:17:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:17 vm07 bash[23367]: audit 2026-03-10T10:17:17.205365+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:17 vm07 bash[23367]: audit 2026-03-10T10:17:17.205365+0000 mon.c (mon.2) 159 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:17 vm07 bash[23367]: audit 2026-03-10T10:17:17.342182+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:17 vm07 bash[23367]: audit 2026-03-10T10:17:17.342182+0000 mon.a (mon.0) 1326 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59541-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:17 vm07 bash[23367]: cluster 2026-03-10T10:17:17.344952+0000 mon.a (mon.0) 1327 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T10:17:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:17 vm07 bash[23367]: cluster 2026-03-10T10:17:17.344952+0000 mon.a (mon.0) 1327 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in 2026-03-10T10:17:18.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:17:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: cluster 2026-03-10T10:17:17.995881+0000 mgr.y (mgr.24422) 160 : cluster [DBG] pgmap v138: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: cluster 2026-03-10T10:17:17.995881+0000 mgr.y (mgr.24422) 160 : cluster [DBG] pgmap v138: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.206182+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.206182+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.238046+0000 mgr.y (mgr.24422) 161 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.238046+0000 mgr.y (mgr.24422) 161 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: cluster 2026-03-10T10:17:18.349802+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: cluster 2026-03-10T10:17:18.349802+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.365887+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.104:0/585796633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.365887+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.104:0/585796633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.370873+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.370873+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.386557+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.104:0/1946409407' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.386557+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.104:0/1946409407' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.387379+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:18 vm04 bash[28289]: audit 2026-03-10T10:17:18.387379+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: cluster 2026-03-10T10:17:17.995881+0000 mgr.y (mgr.24422) 160 : cluster [DBG] pgmap v138: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: cluster 2026-03-10T10:17:17.995881+0000 mgr.y (mgr.24422) 160 : cluster [DBG] pgmap v138: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.206182+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.206182+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.238046+0000 mgr.y (mgr.24422) 161 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.238046+0000 mgr.y (mgr.24422) 161 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: cluster 2026-03-10T10:17:18.349802+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: cluster 2026-03-10T10:17:18.349802+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.365887+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.104:0/585796633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.365887+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 
192.168.123.104:0/585796633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.370873+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.370873+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.386557+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.104:0/1946409407' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.386557+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.104:0/1946409407' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.387379+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:18.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:18 vm04 bash[20742]: audit 2026-03-10T10:17:18.387379+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: cluster 2026-03-10T10:17:17.995881+0000 mgr.y (mgr.24422) 160 : cluster [DBG] pgmap v138: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: cluster 2026-03-10T10:17:17.995881+0000 mgr.y (mgr.24422) 160 : cluster [DBG] pgmap v138: 420 pgs: 64 unknown, 356 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.206182+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.206182+0000 mon.c (mon.2) 160 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.238046+0000 mgr.y (mgr.24422) 161 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.238046+0000 mgr.y (mgr.24422) 161 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: cluster 2026-03-10T10:17:18.349802+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: cluster 2026-03-10T10:17:18.349802+0000 mon.a (mon.0) 1328 : cluster [DBG] osdmap e123: 8 total, 8 up, 8 in 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.365887+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.104:0/585796633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.365887+0000 mon.b (mon.1) 129 : audit [INF] from='client.? 192.168.123.104:0/585796633' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.370873+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.370873+0000 mon.a (mon.0) 1329 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.386557+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.104:0/1946409407' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.386557+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.104:0/1946409407' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.387379+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:18 vm07 bash[23367]: audit 2026-03-10T10:17:18.387379+0000 mon.a (mon.0) 1330 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:19 vm04 bash[28289]: audit 2026-03-10T10:17:19.207039+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:19 vm04 bash[28289]: audit 2026-03-10T10:17:19.207039+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:19 vm04 bash[28289]: audit 2026-03-10T10:17:19.349375+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:19 vm04 bash[28289]: audit 2026-03-10T10:17:19.349375+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:19 vm04 bash[28289]: audit 2026-03-10T10:17:19.349466+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:19 vm04 bash[28289]: audit 2026-03-10T10:17:19.349466+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:19 vm04 bash[28289]: cluster 2026-03-10T10:17:19.353098+0000 mon.a (mon.0) 1333 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:19 vm04 bash[28289]: cluster 2026-03-10T10:17:19.353098+0000 mon.a (mon.0) 1333 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:19 vm04 bash[20742]: audit 2026-03-10T10:17:19.207039+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:19 vm04 bash[20742]: audit 2026-03-10T10:17:19.207039+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:19 vm04 bash[20742]: audit 2026-03-10T10:17:19.349375+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:19 vm04 bash[20742]: audit 2026-03-10T10:17:19.349375+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:19 vm04 bash[20742]: audit 2026-03-10T10:17:19.349466+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:19 vm04 bash[20742]: audit 2026-03-10T10:17:19.349466+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:19.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:19 vm04 bash[20742]: cluster 2026-03-10T10:17:19.353098+0000 mon.a (mon.0) 1333 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T10:17:19.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:19 vm04 bash[20742]: cluster 2026-03-10T10:17:19.353098+0000 mon.a (mon.0) 1333 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T10:17:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:19 vm07 bash[23367]: audit 2026-03-10T10:17:19.207039+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:19 vm07 bash[23367]: audit 2026-03-10T10:17:19.207039+0000 mon.c (mon.2) 162 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:19 vm07 bash[23367]: audit 2026-03-10T10:17:19.349375+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:19 vm07 bash[23367]: audit 2026-03-10T10:17:19.349375+0000 mon.a (mon.0) 1331 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59252-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:19 vm07 bash[23367]: audit 2026-03-10T10:17:19.349466+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:19 vm07 bash[23367]: audit 2026-03-10T10:17:19.349466+0000 mon.a (mon.0) 1332 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:19 vm07 bash[23367]: cluster 2026-03-10T10:17:19.353098+0000 mon.a (mon.0) 1333 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T10:17:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:19 vm07 bash[23367]: cluster 2026-03-10T10:17:19.353098+0000 mon.a (mon.0) 1333 : cluster [DBG] osdmap e124: 8 total, 8 up, 8 in 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: cluster 2026-03-10T10:17:19.996345+0000 mgr.y (mgr.24422) 162 : cluster [DBG] pgmap v141: 484 pgs: 44 creating+peering, 20 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: cluster 2026-03-10T10:17:19.996345+0000 mgr.y (mgr.24422) 162 : cluster [DBG] pgmap v141: 484 pgs: 44 creating+peering, 20 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.207818+0000 mon.c (mon.2) 163 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.207818+0000 mon.c (mon.2) 163 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: cluster 2026-03-10T10:17:20.356408+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: cluster 2026-03-10T10:17:20.356408+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.386624+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.386624+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.390520+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.390520+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.391783+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.391783+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.392264+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.392264+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.393741+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.393741+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.394020+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:20 vm04 bash[28289]: audit 2026-03-10T10:17:20.394020+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: cluster 2026-03-10T10:17:19.996345+0000 mgr.y (mgr.24422) 162 : cluster [DBG] pgmap v141: 484 pgs: 44 creating+peering, 20 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: cluster 2026-03-10T10:17:19.996345+0000 mgr.y (mgr.24422) 162 : cluster [DBG] pgmap v141: 484 pgs: 44 creating+peering, 20 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.207818+0000 mon.c (mon.2) 163 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.207818+0000 mon.c (mon.2) 163 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: cluster 2026-03-10T10:17:20.356408+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: cluster 2026-03-10T10:17:20.356408+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.386624+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.386624+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.390520+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.390520+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.391783+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 
192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.391783+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.392264+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.392264+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.393741+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.393741+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.394020+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:20.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:20 vm04 bash[20742]: audit 2026-03-10T10:17:20.394020+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: cluster 2026-03-10T10:17:19.996345+0000 mgr.y (mgr.24422) 162 : cluster [DBG] pgmap v141: 484 pgs: 44 creating+peering, 20 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:17:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: cluster 2026-03-10T10:17:19.996345+0000 mgr.y (mgr.24422) 162 : cluster [DBG] pgmap v141: 484 pgs: 44 creating+peering, 20 unknown, 420 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:17:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: audit 2026-03-10T10:17:20.207818+0000 mon.c (mon.2) 163 : audit [DBG] from='client.? 
2026-03-10T10:17:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: audit 2026-03-10T10:17:20.207818+0000 mon.c (mon.2) 163 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: cluster 2026-03-10T10:17:20.356408+0000 mon.a (mon.0) 1334 : cluster [DBG] osdmap e125: 8 total, 8 up, 8 in
2026-03-10T10:17:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: audit 2026-03-10T10:17:20.386624+0000 mon.c (mon.2) 164 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: audit 2026-03-10T10:17:20.390520+0000 mon.a (mon.0) 1335 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: audit 2026-03-10T10:17:20.391783+0000 mon.c (mon.2) 165 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: audit 2026-03-10T10:17:20.392264+0000 mon.a (mon.0) 1336 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: audit 2026-03-10T10:17:20.393741+0000 mon.c (mon.2) 166 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:21.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:20 vm07 bash[23367]: audit 2026-03-10T10:17:20.394020+0000 mon.a (mon.0) 1337 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:21 vm04 bash[28289]: cluster 2026-03-10T10:17:21.194671+0000 mon.a (mon.0) 1338 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:21 vm04 bash[28289]: audit 2026-03-10T10:17:21.208570+0000 mon.c (mon.2) 167 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:21 vm04 bash[28289]: audit 2026-03-10T10:17:21.507024+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:21 vm04 bash[28289]: cluster 2026-03-10T10:17:21.511128+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in
2026-03-10T10:17:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:21 vm04 bash[28289]: audit 2026-03-10T10:17:21.512301+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.104:0/1049650401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm04-59259-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:21 vm04 bash[28289]: audit 2026-03-10T10:17:21.512446+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm04-59541-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:21 vm04 bash[28289]: audit 2026-03-10T10:17:21.513771+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.104:0/3704612937' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm04-59252-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:21 vm04 bash[28289]: audit 2026-03-10T10:17:21.514903+0000 mon.a (mon.0) 1342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm04-59541-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:21 vm04 bash[28289]: audit 2026-03-10T10:17:21.515091+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm04-59252-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:21 vm04 bash[20742]: cluster 2026-03-10T10:17:21.194671+0000 mon.a (mon.0) 1338 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:21 vm04 bash[20742]: audit 2026-03-10T10:17:21.208570+0000 mon.c (mon.2) 167 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:21 vm04 bash[20742]: audit 2026-03-10T10:17:21.507024+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:21 vm04 bash[20742]: cluster 2026-03-10T10:17:21.511128+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in
2026-03-10T10:17:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:21 vm04 bash[20742]: audit 2026-03-10T10:17:21.512301+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.104:0/1049650401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm04-59259-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:21 vm04 bash[20742]: audit 2026-03-10T10:17:21.512446+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm04-59541-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:21 vm04 bash[20742]: audit 2026-03-10T10:17:21.513771+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.104:0/3704612937' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm04-59252-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:21 vm04 bash[20742]: audit 2026-03-10T10:17:21.514903+0000 mon.a (mon.0) 1342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm04-59541-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:21 vm04 bash[20742]: audit 2026-03-10T10:17:21.515091+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm04-59252-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:21 vm07 bash[23367]: cluster 2026-03-10T10:17:21.194671+0000 mon.a (mon.0) 1338 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:21 vm07 bash[23367]: audit 2026-03-10T10:17:21.208570+0000 mon.c (mon.2) 167 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:21 vm07 bash[23367]: audit 2026-03-10T10:17:21.507024+0000 mon.a (mon.0) 1339 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:21 vm07 bash[23367]: cluster 2026-03-10T10:17:21.511128+0000 mon.a (mon.0) 1340 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in
2026-03-10T10:17:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:21 vm07 bash[23367]: audit 2026-03-10T10:17:21.512301+0000 mon.a (mon.0) 1341 : audit [INF] from='client.? 192.168.123.104:0/1049650401' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm04-59259-18","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:21 vm07 bash[23367]: audit 2026-03-10T10:17:21.512446+0000 mon.c (mon.2) 168 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm04-59541-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:21 vm07 bash[23367]: audit 2026-03-10T10:17:21.513771+0000 mon.c (mon.2) 169 : audit [INF] from='client.? 192.168.123.104:0/3704612937' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm04-59252-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:21 vm07 bash[23367]: audit 2026-03-10T10:17:21.514903+0000 mon.a (mon.0) 1342 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm04-59541-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:21 vm07 bash[23367]: audit 2026-03-10T10:17:21.515091+0000 mon.a (mon.0) 1343 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm04-59252-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:22 vm04 bash[28289]: cluster 2026-03-10T10:17:21.996740+0000 mgr.y (mgr.24422) 163 : cluster [DBG] pgmap v144: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:17:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:22 vm04 bash[28289]: audit 2026-03-10T10:17:22.022306+0000 mon.a (mon.0) 1344 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:17:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:22 vm04 bash[28289]: audit 2026-03-10T10:17:22.209420+0000 mon.c (mon.2) 170 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:22 vm04 bash[28289]: audit 2026-03-10T10:17:22.385666+0000 mon.a (mon.0) 1345 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:17:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:22 vm04 bash[28289]: audit 2026-03-10T10:17:22.386136+0000 mon.a (mon.0) 1346 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:17:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:22 vm04 bash[28289]: audit 2026-03-10T10:17:22.402543+0000 mon.a (mon.0) 1347 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:17:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:22 vm04 bash[28289]: audit 2026-03-10T10:17:22.510961+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? 192.168.123.104:0/1049650401' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm04-59259-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:22 vm04 bash[28289]: audit 2026-03-10T10:17:22.511053+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm04-59252-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:22 vm04 bash[28289]: cluster 2026-03-10T10:17:22.520162+0000 mon.a (mon.0) 1350 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:22 vm04 bash[20742]: cluster 2026-03-10T10:17:21.996740+0000 mgr.y (mgr.24422) 163 : cluster [DBG] pgmap v144: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:22 vm04 bash[20742]: audit 2026-03-10T10:17:22.022306+0000 mon.a (mon.0) 1344 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:22 vm04 bash[20742]: audit 2026-03-10T10:17:22.209420+0000 mon.c (mon.2) 170 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:22 vm04 bash[20742]: audit 2026-03-10T10:17:22.385666+0000 mon.a (mon.0) 1345 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:22 vm04 bash[20742]: audit 2026-03-10T10:17:22.386136+0000 mon.a (mon.0) 1346 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:22 vm04 bash[20742]: audit 2026-03-10T10:17:22.402543+0000 mon.a (mon.0) 1347 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:22 vm04 bash[20742]: audit 2026-03-10T10:17:22.510961+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? 192.168.123.104:0/1049650401' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm04-59259-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:22 vm04 bash[20742]: audit 2026-03-10T10:17:22.511053+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm04-59252-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:22 vm04 bash[20742]: cluster 2026-03-10T10:17:22.520162+0000 mon.a (mon.0) 1350 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in
2026-03-10T10:17:23.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:22 vm07 bash[23367]: cluster 2026-03-10T10:17:21.996740+0000 mgr.y (mgr.24422) 163 : cluster [DBG] pgmap v144: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:17:23.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:22 vm07 bash[23367]: audit 2026-03-10T10:17:22.022306+0000 mon.a (mon.0) 1344 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:17:23.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:22 vm07 bash[23367]: audit 2026-03-10T10:17:22.209420+0000 mon.c (mon.2) 170 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:23.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:22 vm07 bash[23367]: audit 2026-03-10T10:17:22.385666+0000 mon.a (mon.0) 1345 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:17:23.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:22 vm07 bash[23367]: audit 2026-03-10T10:17:22.386136+0000 mon.a (mon.0) 1346 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:17:23.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:22 vm07 bash[23367]: audit 2026-03-10T10:17:22.402543+0000 mon.a (mon.0) 1347 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:17:23.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:22 vm07 bash[23367]: audit 2026-03-10T10:17:22.510961+0000 mon.a (mon.0) 1348 : audit [INF] from='client.? 192.168.123.104:0/1049650401' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP_vm04-59259-18","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:23.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:22 vm07 bash[23367]: audit 2026-03-10T10:17:22.511053+0000 mon.a (mon.0) 1349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "Operate2Mtime_vm04-59252-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:23.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:22 vm07 bash[23367]: cluster 2026-03-10T10:17:22.520162+0000 mon.a (mon.0) 1350 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in
2026-03-10T10:17:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:17:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:17:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:17:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:23 vm04 bash[28289]: cluster 2026-03-10T10:17:22.387250+0000 mgr.y (mgr.24422) 164 : cluster [DBG] pgmap v145: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 1 op/s
2026-03-10T10:17:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:23 vm04 bash[28289]: cluster 2026-03-10T10:17:22.635525+0000 mon.a (mon.0) 1351 : cluster [INF] Health check cleared: CEPHADM_STRAY_DAEMON (was: 2 stray daemon(s) not managed by cephadm)
2026-03-10T10:17:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:23 vm04 bash[28289]: audit 2026-03-10T10:17:23.210218+0000 mon.c (mon.2) 171 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:23 vm04 bash[28289]: audit 2026-03-10T10:17:23.515825+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm04-59541-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]': finished
2026-03-10T10:17:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:23 vm04 bash[28289]: cluster 2026-03-10T10:17:23.520242+0000 mon.a (mon.0) 1353 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in
2026-03-10T10:17:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:23 vm04 bash[20742]: cluster 2026-03-10T10:17:22.387250+0000 mgr.y (mgr.24422) 164 : cluster [DBG] pgmap v145: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 1 op/s
2026-03-10T10:17:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:23 vm04 bash[20742]: cluster 2026-03-10T10:17:22.635525+0000 mon.a (mon.0) 1351 : cluster [INF] Health check cleared: CEPHADM_STRAY_DAEMON (was: 2 stray daemon(s) not managed by cephadm)
2026-03-10T10:17:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:23 vm04 bash[20742]: audit 2026-03-10T10:17:23.210218+0000 mon.c (mon.2) 171 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:23 vm04 bash[20742]: audit 2026-03-10T10:17:23.515825+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm04-59541-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]': finished
2026-03-10T10:17:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:23 vm04 bash[20742]: cluster 2026-03-10T10:17:23.520242+0000 mon.a (mon.0) 1353 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in
2026-03-10T10:17:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:23 vm07 bash[23367]: cluster 2026-03-10T10:17:22.387250+0000 mgr.y (mgr.24422) 164 : cluster [DBG] pgmap v145: 452 pgs: 64 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 253 B/s wr, 1 op/s
2026-03-10T10:17:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:23 vm07 bash[23367]: cluster 2026-03-10T10:17:22.635525+0000 mon.a (mon.0) 1351 : cluster [INF] Health check cleared: CEPHADM_STRAY_DAEMON (was: 2 stray daemon(s) not managed by cephadm)
2026-03-10T10:17:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:23 vm07 bash[23367]: audit 2026-03-10T10:17:23.210218+0000 mon.c (mon.2) 171 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:23 vm07 bash[23367]: audit 2026-03-10T10:17:23.515825+0000 mon.a (mon.0) 1352 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsECPP_vm04-59541-16", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]': finished
2026-03-10T10:17:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:23 vm07 bash[23367]: cluster 2026-03-10T10:17:23.520242+0000 mon.a (mon.0) 1353 : cluster [DBG] osdmap e128: 8 total, 8 up, 8 in
2026-03-10T10:17:24.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:24 vm04 bash[28289]: audit 2026-03-10T10:17:24.210919+0000 mon.c (mon.2) 172 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:24.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:24 vm04 bash[28289]: cluster 2026-03-10T10:17:24.592242+0000 mon.a (mon.0) 1354 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in
2026-03-10T10:17:24.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:24 vm04 bash[28289]: audit 2026-03-10T10:17:24.595105+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.104:0/1950169979' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm04-59252-22","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:24.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:24 vm04 bash[28289]: audit 2026-03-10T10:17:24.600407+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.104:0/1431265960' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:24.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:24 vm04 bash[28289]: audit 2026-03-10T10:17:24.603742+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:24.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:24 vm04 bash[20742]: audit 2026-03-10T10:17:24.210919+0000 mon.c (mon.2) 172 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:24.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:24 vm04 bash[20742]: cluster 2026-03-10T10:17:24.592242+0000 mon.a (mon.0) 1354 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in
2026-03-10T10:17:24.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:24 vm04 bash[20742]: audit 2026-03-10T10:17:24.595105+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.104:0/1950169979' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm04-59252-22","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:24.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:24 vm04 bash[20742]: audit 2026-03-10T10:17:24.600407+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.104:0/1431265960' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:24.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:24 vm04 bash[20742]: audit 2026-03-10T10:17:24.603742+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:25.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:24 vm07 bash[23367]: audit 2026-03-10T10:17:24.210919+0000 mon.c (mon.2) 172 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:25.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:24 vm07 bash[23367]: cluster 2026-03-10T10:17:24.592242+0000 mon.a (mon.0) 1354 : cluster [DBG] osdmap e129: 8 total, 8 up, 8 in
2026-03-10T10:17:25.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:24 vm07 bash[23367]: audit 2026-03-10T10:17:24.595105+0000 mon.a (mon.0) 1355 : audit [INF] from='client.? 192.168.123.104:0/1950169979' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm04-59252-22","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:25.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:24 vm07 bash[23367]: audit 2026-03-10T10:17:24.600407+0000 mon.b (mon.1) 130 : audit [INF] from='client.? 192.168.123.104:0/1431265960' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:25.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:24 vm07 bash[23367]: audit 2026-03-10T10:17:24.603742+0000 mon.a (mon.0) 1356 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:25 vm04 bash[28289]: cluster 2026-03-10T10:17:24.388080+0000 mgr.y (mgr.24422) 165 : cluster [DBG] pgmap v148: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:25 vm04 bash[28289]: cluster 2026-03-10T10:17:24.388080+0000 mgr.y (mgr.24422) 165 : cluster [DBG] pgmap v148: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:25 vm04 bash[28289]: audit 2026-03-10T10:17:25.211901+0000 mon.c (mon.2) 173 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:25 vm04 bash[28289]: audit 2026-03-10T10:17:25.211901+0000 mon.c (mon.2) 173 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:25 vm04 bash[28289]: audit 2026-03-10T10:17:25.579943+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.104:0/1950169979' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm04-59252-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:25 vm04 bash[28289]: audit 2026-03-10T10:17:25.579943+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.104:0/1950169979' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm04-59252-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:25 vm04 bash[28289]: audit 2026-03-10T10:17:25.580026+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:25 vm04 bash[28289]: audit 2026-03-10T10:17:25.580026+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:25 vm04 bash[28289]: cluster 2026-03-10T10:17:25.583381+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:25 vm04 bash[28289]: cluster 2026-03-10T10:17:25.583381+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:25 vm04 bash[20742]: cluster 2026-03-10T10:17:24.388080+0000 mgr.y (mgr.24422) 165 : cluster [DBG] pgmap v148: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:25 vm04 bash[20742]: cluster 2026-03-10T10:17:24.388080+0000 mgr.y (mgr.24422) 165 : cluster [DBG] pgmap v148: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:25 vm04 bash[20742]: audit 2026-03-10T10:17:25.211901+0000 mon.c (mon.2) 173 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:25 vm04 bash[20742]: audit 2026-03-10T10:17:25.211901+0000 mon.c (mon.2) 173 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:25 vm04 bash[20742]: audit 2026-03-10T10:17:25.579943+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.104:0/1950169979' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm04-59252-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:25 vm04 bash[20742]: audit 2026-03-10T10:17:25.579943+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.104:0/1950169979' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm04-59252-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:25 vm04 bash[20742]: audit 2026-03-10T10:17:25.580026+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:25 vm04 bash[20742]: audit 2026-03-10T10:17:25.580026+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:25 vm04 bash[20742]: cluster 2026-03-10T10:17:25.583381+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T10:17:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:25 vm04 bash[20742]: cluster 2026-03-10T10:17:25.583381+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T10:17:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:25 vm07 bash[23367]: cluster 2026-03-10T10:17:24.388080+0000 mgr.y (mgr.24422) 165 : cluster [DBG] pgmap v148: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:17:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:25 vm07 bash[23367]: cluster 2026-03-10T10:17:24.388080+0000 mgr.y (mgr.24422) 165 : cluster [DBG] pgmap v148: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:17:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:25 vm07 bash[23367]: audit 2026-03-10T10:17:25.211901+0000 mon.c (mon.2) 173 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:25 vm07 bash[23367]: audit 2026-03-10T10:17:25.211901+0000 mon.c (mon.2) 173 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:25 vm07 bash[23367]: audit 2026-03-10T10:17:25.579943+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.104:0/1950169979' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm04-59252-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:25 vm07 bash[23367]: audit 2026-03-10T10:17:25.579943+0000 mon.a (mon.0) 1357 : audit [INF] from='client.? 192.168.123.104:0/1950169979' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatNS_vm04-59252-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:25 vm07 bash[23367]: audit 2026-03-10T10:17:25.580026+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:25 vm07 bash[23367]: audit 2026-03-10T10:17:25.580026+0000 mon.a (mon.0) 1358 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteSamePP2_vm04-59259-19","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:25 vm07 bash[23367]: cluster 2026-03-10T10:17:25.583381+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T10:17:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:25 vm07 bash[23367]: cluster 2026-03-10T10:17:25.583381+0000 mon.a (mon.0) 1359 : cluster [DBG] osdmap e130: 8 total, 8 up, 8 in 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:26 vm04 bash[28289]: cluster 2026-03-10T10:17:26.195368+0000 mon.a (mon.0) 1360 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:26 vm04 bash[28289]: cluster 2026-03-10T10:17:26.195368+0000 mon.a (mon.0) 1360 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:26 vm04 bash[28289]: cluster 2026-03-10T10:17:26.202454+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:26 vm04 bash[28289]: cluster 2026-03-10T10:17:26.202454+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:26 vm04 bash[28289]: audit 2026-03-10T10:17:26.213266+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:26 vm04 bash[28289]: audit 2026-03-10T10:17:26.213266+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:26 vm04 bash[20742]: cluster 2026-03-10T10:17:26.195368+0000 mon.a (mon.0) 1360 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:26 vm04 bash[20742]: cluster 2026-03-10T10:17:26.195368+0000 mon.a (mon.0) 1360 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:26 vm04 bash[20742]: cluster 2026-03-10T10:17:26.202454+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:26 vm04 bash[20742]: cluster 2026-03-10T10:17:26.202454+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:26 vm04 bash[20742]: audit 2026-03-10T10:17:26.213266+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:26 vm04 bash[20742]: audit 2026-03-10T10:17:26.213266+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:27.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:26 vm07 bash[23367]: cluster 2026-03-10T10:17:26.195368+0000 mon.a (mon.0) 1360 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:27.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:26 vm07 bash[23367]: cluster 2026-03-10T10:17:26.195368+0000 mon.a (mon.0) 1360 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:27.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:26 vm07 bash[23367]: cluster 2026-03-10T10:17:26.202454+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T10:17:27.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:26 vm07 bash[23367]: cluster 2026-03-10T10:17:26.202454+0000 mon.a (mon.0) 1361 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T10:17:27.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:26 vm07 bash[23367]: audit 2026-03-10T10:17:26.213266+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:27.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:26 vm07 bash[23367]: audit 2026-03-10T10:17:26.213266+0000 mon.c (mon.2) 174 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:28.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:17:28 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: cluster 2026-03-10T10:17:26.388461+0000 mgr.y (mgr.24422) 166 : cluster [DBG] pgmap v152: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: cluster 2026-03-10T10:17:26.388461+0000 mgr.y (mgr.24422) 166 : cluster [DBG] pgmap v152: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: cluster 2026-03-10T10:17:27.213208+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: cluster 2026-03-10T10:17:27.213208+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.213893+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.104:0/1825969584' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.213893+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 
192.168.123.104:0/1825969584' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.214807+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.214807+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.215915+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.215915+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.216257+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? 192.168.123.104:0/417807305' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.216257+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? 
192.168.123.104:0/417807305' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.743733+0000 mon.a (mon.0) 1365 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.743733+0000 mon.a (mon.0) 1365 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.744704+0000 mon.a (mon.0) 1366 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:28 vm07 bash[23367]: audit 2026-03-10T10:17:27.744704+0000 mon.a (mon.0) 1366 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: cluster 2026-03-10T10:17:26.388461+0000 mgr.y (mgr.24422) 166 : cluster [DBG] pgmap v152: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: cluster 2026-03-10T10:17:26.388461+0000 mgr.y (mgr.24422) 166 : cluster [DBG] pgmap v152: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: cluster 2026-03-10T10:17:27.213208+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: cluster 2026-03-10T10:17:27.213208+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.213893+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.104:0/1825969584' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.213893+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.104:0/1825969584' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.214807+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.214807+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.215915+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.215915+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.216257+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? 192.168.123.104:0/417807305' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.216257+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? 192.168.123.104:0/417807305' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.743733+0000 mon.a (mon.0) 1365 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.743733+0000 mon.a (mon.0) 1365 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.744704+0000 mon.a (mon.0) 1366 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:28 vm04 bash[28289]: audit 2026-03-10T10:17:27.744704+0000 mon.a (mon.0) 1366 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: cluster 2026-03-10T10:17:26.388461+0000 mgr.y (mgr.24422) 166 : cluster [DBG] pgmap v152: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: cluster 2026-03-10T10:17:26.388461+0000 mgr.y (mgr.24422) 166 : cluster [DBG] pgmap v152: 396 pgs: 2 creating+peering, 6 unknown, 388 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: cluster 2026-03-10T10:17:27.213208+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: cluster 
2026-03-10T10:17:27.213208+0000 mon.a (mon.0) 1362 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.213893+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.104:0/1825969584' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.213893+0000 mon.c (mon.2) 175 : audit [INF] from='client.? 192.168.123.104:0/1825969584' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.214807+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.214807+0000 mon.c (mon.2) 176 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.215915+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.215915+0000 mon.a (mon.0) 1363 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.216257+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? 192.168.123.104:0/417807305' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.216257+0000 mon.a (mon.0) 1364 : audit [INF] from='client.? 
192.168.123.104:0/417807305' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.743733+0000 mon.a (mon.0) 1365 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.743733+0000 mon.a (mon.0) 1365 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.744704+0000 mon.a (mon.0) 1366 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:28 vm04 bash[20742]: audit 2026-03-10T10:17:27.744704+0000 mon.a (mon.0) 1366 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:28.212137+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:28.212137+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:28.212260+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 192.168.123.104:0/417807305' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:28.212260+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 192.168.123.104:0/417807305' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: cluster 2026-03-10T10:17:28.221954+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: cluster 2026-03-10T10:17:28.221954+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:28.228991+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:28.228991+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:28.248146+0000 mgr.y (mgr.24422) 167 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:28.248146+0000 mgr.y (mgr.24422) 167 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:29.136625+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:29.136625+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:29.138117+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:29.138117+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:29.138351+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:29.138351+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:29.229921+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:29 vm04 bash[28289]: audit 2026-03-10T10:17:29.229921+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:28.212137+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:28.212137+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:28.212260+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 192.168.123.104:0/417807305' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:28.212260+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 192.168.123.104:0/417807305' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: cluster 2026-03-10T10:17:28.221954+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: cluster 2026-03-10T10:17:28.221954+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:28.228991+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:28.228991+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:28.248146+0000 mgr.y (mgr.24422) 167 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:28.248146+0000 mgr.y (mgr.24422) 167 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:29.136625+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 
192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:29.136625+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:29.138117+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:29.138117+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:29.138351+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:29.138351+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:29.229921+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:29 vm04 bash[20742]: audit 2026-03-10T10:17:29.229921+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:28.212137+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:28.212137+0000 mon.a (mon.0) 1367 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemove_vm04-59252-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:28.212260+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 
192.168.123.104:0/417807305' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:28.212260+0000 mon.a (mon.0) 1368 : audit [INF] from='client.? 192.168.123.104:0/417807305' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPPNS_vm04-59259-20","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: cluster 2026-03-10T10:17:28.221954+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: cluster 2026-03-10T10:17:28.221954+0000 mon.a (mon.0) 1369 : cluster [DBG] osdmap e133: 8 total, 8 up, 8 in 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:28.228991+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:28.228991+0000 mon.c (mon.2) 177 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:28.248146+0000 mgr.y (mgr.24422) 167 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:28.248146+0000 mgr.y (mgr.24422) 167 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:29.136625+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:29.136625+0000 mon.c (mon.2) 178 : audit [DBG] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:29.138117+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:29.138117+0000 mon.c (mon.2) 179 : audit [INF] from='client.? 
192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:29.138351+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:29.138351+0000 mon.a (mon.0) 1370 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:29.229921+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:29 vm07 bash[23367]: audit 2026-03-10T10:17:29.229921+0000 mon.c (mon.2) 180 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: Running main() from gmock_main.cc 2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [==========] Running 21 tests from 5 test suites. 2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] Global test environment set-up. 
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: seed 59541
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapListPP
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapListPP (1965 ms)
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapRemovePP
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapRemovePP (2204 ms)
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.RollbackPP
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.RollbackPP (2116 ms)
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapGetNamePP
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapGetNamePP (2076 ms)
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsPP.SnapCreateRemovePP
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsPP.SnapCreateRemovePP (3609 ms)
2026-03-10T10:17:30.599 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] 5 tests from LibRadosSnapshotsPP (11970 ms total)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp:
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapPP
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapPP (4008 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.RollbackPP
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.RollbackPP (3897 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.SnapOverlapPP (5927 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.Bug11677
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.Bug11677 (4008 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.OrderSnap
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.OrderSnap (2098 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.WriteRollback
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: ./src/test/librados/snapshots_cxx.cc:460: Skipped
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp:
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback (0 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: deleting snap 14 in pool LibRadosSnapshotsSelfManagedPP_vm04-59541-7
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: waiting for snaps to purge
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedPP.ReusePurgedSnap (17914 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] 7 tests from LibRadosSnapshotsSelfManagedPP (37852 ms total)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp:
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.NotConnected (2 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosPoolIsInSelfmanagedSnapsMode.FreshInstance (4995 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] 2 tests from LibRadosPoolIsInSelfmanagedSnapsMode (4997 ms total)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp:
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapListPP
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapListPP (2669 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapRemovePP
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapRemovePP (2017 ms)
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.RollbackPP
2026-03-10T10:17:30.600 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.RollbackPP (2371 ms)
2026-03-10T10:17:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:30 vm04 bash[28289]: cluster 2026-03-10T10:17:28.389059+0000 mgr.y (mgr.24422) 168 : cluster [DBG] pgmap v155: 460 pgs: 2 creating+peering, 41 unknown, 417 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 1.3 KiB/s wr, 2 op/s
2026-03-10T10:17:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:30 vm04 bash[28289]: audit 2026-03-10T10:17:29.246665+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:30 vm04 bash[28289]: cluster 2026-03-10T10:17:29.249680+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-10T10:17:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:30 vm04 bash[28289]: audit 2026-03-10T10:17:29.284497+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2"}]: dispatch
2026-03-10T10:17:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:30 vm04 bash[28289]: audit 2026-03-10T10:17:29.285379+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:30 vm04 bash[28289]: audit 2026-03-10T10:17:29.312863+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:30 vm04 bash[28289]: audit 2026-03-10T10:17:30.230729+0000 mon.c (mon.2) 183 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:30 vm04 bash[20742]: cluster 2026-03-10T10:17:28.389059+0000 mgr.y (mgr.24422) 168 : cluster [DBG] pgmap v155: 460 pgs: 2 creating+peering, 41 unknown, 417 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 1.3 KiB/s wr, 2 op/s
2026-03-10T10:17:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:30 vm04 bash[20742]: audit 2026-03-10T10:17:29.246665+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:30 vm04 bash[20742]: cluster 2026-03-10T10:17:29.249680+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-10T10:17:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:30 vm04 bash[20742]: audit 2026-03-10T10:17:29.284497+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2"}]: dispatch
2026-03-10T10:17:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:30 vm04 bash[20742]: audit 2026-03-10T10:17:29.285379+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:30 vm04 bash[20742]: audit 2026-03-10T10:17:29.312863+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:30 vm04 bash[20742]: audit 2026-03-10T10:17:30.230729+0000 mon.c (mon.2) 183 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:30 vm07 bash[23367]: cluster 2026-03-10T10:17:28.389059+0000 mgr.y (mgr.24422) 168 : cluster [DBG] pgmap v155: 460 pgs: 2 creating+peering, 41 unknown, 417 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 1.3 KiB/s wr, 2 op/s
2026-03-10T10:17:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:30 vm07 bash[23367]: audit 2026-03-10T10:17:29.246665+0000 mon.a (mon.0) 1371 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app1","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:30 vm07 bash[23367]: cluster 2026-03-10T10:17:29.249680+0000 mon.a (mon.0) 1372 : cluster [DBG] osdmap e134: 8 total, 8 up, 8 in
2026-03-10T10:17:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:30 vm07 bash[23367]: audit 2026-03-10T10:17:29.284497+0000 mon.c (mon.2) 181 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2"}]: dispatch
2026-03-10T10:17:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:30 vm07 bash[23367]: audit 2026-03-10T10:17:29.285379+0000 mon.c (mon.2) 182 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:30 vm07 bash[23367]: audit 2026-03-10T10:17:29.312863+0000 mon.a (mon.0) 1373 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:30 vm07 bash[23367]: audit 2026-03-10T10:17:30.230729+0000 mon.c (mon.2) 183 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: cluster 2026-03-10T10:17:30.389369+0000 mgr.y (mgr.24422) 169 : cluster [DBG] pgmap v157: 396 pgs: 396 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 733 B/s wr, 1 op/s
2026-03-10T10:17:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.569346+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.569346+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: cluster 2026-03-10T10:17:30.572357+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T10:17:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: cluster 2026-03-10T10:17:30.572357+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T10:17:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.595037+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? 192.168.123.104:0/2726151259' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.595037+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? 192.168.123.104:0/2726151259' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.595103+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 192.168.123.104:0/2813706835' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.595103+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 192.168.123.104:0/2813706835' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.603177+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.603177+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.604158+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 
192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.604158+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.604426+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: cluster 2026-03-10T10:17:30.389369+0000 mgr.y (mgr.24422) 169 : cluster [DBG] pgmap v157: 396 pgs: 396 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 733 B/s wr, 1 op/s 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: cluster 2026-03-10T10:17:30.389369+0000 mgr.y (mgr.24422) 169 : cluster [DBG] pgmap v157: 396 pgs: 396 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 733 B/s wr, 1 op/s 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.569346+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.569346+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: cluster 2026-03-10T10:17:30.572357+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: cluster 2026-03-10T10:17:30.572357+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.595037+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? 192.168.123.104:0/2726151259' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.595037+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? 192.168.123.104:0/2726151259' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.595103+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 
192.168.123.104:0/2813706835' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.595103+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 192.168.123.104:0/2813706835' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.603177+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.603177+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.604158+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.604158+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.604426+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:30.604426+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.201372+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? 192.168.123.104:0/2726151259' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.201372+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? 
192.168.123.104:0/2726151259' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.201479+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? 192.168.123.104:0/2813706835' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.201479+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? 192.168.123.104:0/2813706835' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.201715+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.201715+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: cluster 2026-03-10T10:17:31.204192+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: cluster 2026-03-10T10:17:31.204192+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.232302+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.232302+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.235861+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.235861+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.240787+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 
192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.240787+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.258019+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.258019+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.258125+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:31 vm04 bash[28289]: audit 2026-03-10T10:17:31.258125+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:30.604426+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.201372+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? 192.168.123.104:0/2726151259' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.201372+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? 192.168.123.104:0/2726151259' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.201479+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? 192.168.123.104:0/2813706835' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.201479+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? 
192.168.123.104:0/2813706835' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.201715+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.201715+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: cluster 2026-03-10T10:17:31.204192+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: cluster 2026-03-10T10:17:31.204192+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.232302+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.232302+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.235861+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.235861+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.240787+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.240787+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.258019+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.258019+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.258125+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:31 vm04 bash[20742]: audit 2026-03-10T10:17:31.258125+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: cluster 2026-03-10T10:17:30.389369+0000 mgr.y (mgr.24422) 169 : cluster [DBG] pgmap v157: 396 pgs: 396 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 733 B/s wr, 1 op/s 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: cluster 2026-03-10T10:17:30.389369+0000 mgr.y (mgr.24422) 169 : cluster [DBG] pgmap v157: 396 pgs: 396 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 733 B/s wr, 1 op/s 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.569346+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.569346+0000 mon.a (mon.0) 1374 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosMiscPP_vm04-59484-1","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: cluster 2026-03-10T10:17:30.572357+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: cluster 2026-03-10T10:17:30.572357+0000 mon.a (mon.0) 1375 : cluster [DBG] osdmap e135: 8 total, 8 up, 8 in 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.595037+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? 192.168.123.104:0/2726151259' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.595037+0000 mon.a (mon.0) 1376 : audit [INF] from='client.? 
192.168.123.104:0/2726151259' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.595103+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 192.168.123.104:0/2813706835' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.595103+0000 mon.a (mon.0) 1377 : audit [INF] from='client.? 192.168.123.104:0/2813706835' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.603177+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.603177+0000 mon.c (mon.2) 184 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"dne","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.604158+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.604158+0000 mon.c (mon.2) 185 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.604426+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:30.604426+0000 mon.a (mon.0) 1378 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]: dispatch 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.201372+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? 
192.168.123.104:0/2726151259' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.201372+0000 mon.a (mon.0) 1379 : audit [INF] from='client.? 192.168.123.104:0/2726151259' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClass_vm04-59252-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.201479+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? 192.168.123.104:0/2813706835' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.201479+0000 mon.a (mon.0) 1380 : audit [INF] from='client.? 192.168.123.104:0/2813706835' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "SimpleStatPP_vm04-59259-21","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.201715+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.201715+0000 mon.a (mon.0) 1381 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1","value":"value1"}]': finished 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: cluster 2026-03-10T10:17:31.204192+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: cluster 2026-03-10T10:17:31.204192+0000 mon.a (mon.0) 1382 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in 2026-03-10T10:17:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.232302+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.232302+0000 mon.c (mon.2) 186 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.235861+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.235861+0000 mon.c (mon.2) 187 : audit [INF] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.240787+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.240787+0000 mon.c (mon.2) 188 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.258019+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.258019+0000 mon.a (mon.0) 1383 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.258125+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:31 vm07 bash[23367]: audit 2026-03-10T10:17:31.258125+0000 mon.a (mon.0) 1384 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.236687+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.236687+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.236772+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.236772+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: cluster 2026-03-10T10:17:32.240100+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: cluster 2026-03-10T10:17:32.240100+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.244703+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.244703+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.245286+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.245286+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.246061+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.246061+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.246419+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.246419+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.247045+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.247045+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.247108+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.247108+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.390326+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:33 vm04 bash[28289]: audit 2026-03-10T10:17:32.390326+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.236687+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.236687+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.236772+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.236772+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: cluster 2026-03-10T10:17:32.240100+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: cluster 2026-03-10T10:17:32.240100+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.244703+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.244703+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.245286+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.245286+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.246061+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.246061+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.246419+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.246419+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.247045+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.247045+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.247108+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.247108+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.390326+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:33 vm04 bash[20742]: audit 2026-03-10T10:17:32.390326+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:33.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:17:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:17:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.236687+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.236687+0000 mon.a (mon.0) 1385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.236772+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.236772+0000 mon.a (mon.0) 1386 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key2","value":"value2"}]': finished 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: cluster 2026-03-10T10:17:32.240100+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: cluster 2026-03-10T10:17:32.240100+0000 mon.a (mon.0) 1387 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.244703+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.244703+0000 mon.c (mon.2) 189 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.245286+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.245286+0000 mon.c (mon.2) 190 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.246061+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.246061+0000 mon.c (mon.2) 191 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.246419+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.246419+0000 mon.c (mon.2) 192 : audit [INF] from='client.? 192.168.123.104:0/127419870' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.247045+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.247045+0000 mon.a (mon.0) 1388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.247108+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.247108+0000 mon.a (mon.0) 1389 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]: dispatch 2026-03-10T10:17:33.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.390326+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:33.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:33 vm07 bash[23367]: audit 2026-03-10T10:17:32.390326+0000 mon.a (mon.0) 1390 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ api_misc_pp: [==========] Running 31 tests from 7 test suites. 2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] Global test environment set-up. 
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscVersion.VersionPP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscVersion.VersionPP (0 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscVersion (0 ms total)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp:
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 22 tests from LibRadosMiscPP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: seed 59484
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WaitOSDMapPP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WaitOSDMapPP (3 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNamePP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNamePP (483 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongLocatorPP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongLocatorPP (75 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongNSpacePP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongNSpacePP (21 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.LongAttrNamePP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.LongAttrNamePP (12 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.ExecPP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.ExecPP (13 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BadFlagsPP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BadFlagsPP (12 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate1PP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate1PP (8 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Operate2PP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Operate2PP (2 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigObjectPP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigObjectPP (19 ms)
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AioOperatePP
2026-03-10T10:17:34.275 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AioOperatePP (5 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertExistsPP
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertExistsPP (9 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.AssertVersionPP
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.AssertVersionPP (16 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.BigAttrPP
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: osd_max_attr_size = 0
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: osd_max_attr_size == 0; skipping test
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.BigAttrPP (3981 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyPP
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyPP (665 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CopyScrubPP
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: waiting for initial deep scrubs...
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: done waiting, doing copies
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: waiting for final deep scrubs...
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: done waiting
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CopyScrubPP (61453 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.WriteSamePP
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.WriteSamePP (5 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.CmpExtPP
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.CmpExtPP (2 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Applications
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Applications (4153 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatOSD
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatOSD (0 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.MinCompatClient
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.MinCompatClient (0 ms)
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscPP.Conf
2026-03-10T10:17:34.276 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscPP.Conf (0 ms)
2026-03-10T10:17:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: cluster 2026-03-10T10:17:32.389703+0000 mgr.y (mgr.24422) 170 : cluster [DBG] pgmap v161: 396 pgs: 396 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s
2026-03-10T10:17:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: audit 2026-03-10T10:17:33.248397+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]': finished
2026-03-10T10:17:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: audit 2026-03-10T10:17:33.248565+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]': finished
2026-03-10T10:17:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: audit 2026-03-10T10:17:33.248645+0000 mon.a (mon.0) 1393 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]': finished
2026-03-10T10:17:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: cluster 2026-03-10T10:17:33.273788+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in
2026-03-10T10:17:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: audit 2026-03-10T10:17:33.285215+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? 192.168.123.104:0/701138524' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59259-22","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: audit 2026-03-10T10:17:33.286550+0000 mon.c (mon.2) 193 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: audit 2026-03-10T10:17:33.289570+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.104:0/3081359102' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: audit 2026-03-10T10:17:33.289725+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: audit 2026-03-10T10:17:33.290421+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:34 vm04 bash[28289]: audit 2026-03-10T10:17:33.290544+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: cluster 2026-03-10T10:17:32.389703+0000 mgr.y (mgr.24422) 170 : cluster [DBG] pgmap v161: 396 pgs: 396 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: cluster 2026-03-10T10:17:32.389703+0000 mgr.y (mgr.24422) 170 : cluster [DBG] pgmap v161: 396 pgs: 396 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.248397+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.248397+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.248565+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]': finished 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.248565+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]': finished 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.248645+0000 mon.a (mon.0) 1393 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.248645+0000 mon.a (mon.0) 1393 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: cluster 2026-03-10T10:17:33.273788+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: cluster 2026-03-10T10:17:33.273788+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.285215+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? 
192.168.123.104:0/701138524' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59259-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.285215+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? 192.168.123.104:0/701138524' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59259-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.286550+0000 mon.c (mon.2) 193 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.286550+0000 mon.c (mon.2) 193 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.289570+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.104:0/3081359102' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.289570+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.104:0/3081359102' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.289725+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.289725+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.290421+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.290421+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.290544+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:34.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:34 vm04 bash[20742]: audit 2026-03-10T10:17:33.290544+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: cluster 2026-03-10T10:17:32.389703+0000 mgr.y (mgr.24422) 170 : cluster [DBG] pgmap v161: 396 pgs: 396 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: cluster 2026-03-10T10:17:32.389703+0000 mgr.y (mgr.24422) 170 : cluster [DBG] pgmap v161: 396 pgs: 396 active+clean; 216 MiB data, 1.2 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.248397+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.248397+0000 mon.a (mon.0) 1391 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.248565+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]': finished 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.248565+0000 mon.a (mon.0) 1392 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"LibRadosMiscPP_vm04-59484-1","app":"app1","key":"key1"}]': finished 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.248645+0000 mon.a (mon.0) 1393 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.248645+0000 mon.a (mon.0) 1393 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "31"}]': finished 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: cluster 2026-03-10T10:17:33.273788+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: cluster 2026-03-10T10:17:33.273788+0000 mon.a (mon.0) 1394 : cluster [DBG] osdmap e138: 8 total, 8 up, 8 in 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.285215+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? 192.168.123.104:0/701138524' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59259-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.285215+0000 mon.a (mon.0) 1395 : audit [INF] from='client.? 192.168.123.104:0/701138524' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59259-22","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.286550+0000 mon.c (mon.2) 193 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.286550+0000 mon.c (mon.2) 193 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.289570+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.104:0/3081359102' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.289570+0000 mon.c (mon.2) 194 : audit [INF] from='client.? 192.168.123.104:0/3081359102' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.289725+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 
192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.289725+0000 mon.c (mon.2) 195 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.290421+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.290421+0000 mon.a (mon.0) 1396 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.290544+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:34 vm07 bash[23367]: audit 2026-03-10T10:17:33.290544+0000 mon.a (mon.0) 1397 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 22 tests from LibRadosMis snapshots: Running main() from gmock_main.cc 2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [==========] Running 11 tests from 2 test suites. 2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [----------] Global test environment set-up. 
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapList
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapList (4621 ms)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapRemove
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapRemove (5232 ms)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSnapshots.Rollback
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSnapshots.Rollback (3657 ms)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapGetName
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapGetName (5148 ms)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSnapshots.SnapCreateRemove
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSnapshots.SnapCreateRemove (6847 ms)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [----------] 5 tests from NeoRadosSnapshots (25506 ms total)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots:
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Snap
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Snap (5005 ms)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Rollback
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Rollback (6002 ms)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.SnapOverlap
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.SnapOverlap (8292 ms)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.Bug11677
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.Bug11677 (5802 ms)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.OrderSnap
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.OrderSnap (4034 ms)
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ RUN ] NeoRadosSelfManagedSnaps.ReusePurgedSnap
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: Deleting snap 3 in pool ReusePurgedSnapvm04-60247-11.
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: Waiting for snaps to purge.
2026-03-10T10:17:35.331 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ OK ] NeoRadosSelfManagedSnaps.ReusePurgedSnap (19943 ms)
2026-03-10T10:17:35.332 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [----------] 6 tests from NeoRadosSelfManagedSnaps (49079 ms total)
2026-03-10T10:17:35.332 INFO:tasks.workunit.client.0.vm04.stdout: snapshots:
2026-03-10T10:17:35.332 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [----------] Global test environment tear-down
2026-03-10T10:17:35.332 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [==========] 11 tests from 2 test suites ran. (74587 ms total)
2026-03-10T10:17:35.332 INFO:tasks.workunit.client.0.vm04.stdout: snapshots: [ PASSED ] 11 tests.
2026-03-10T10:17:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.252264+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? 192.168.123.104:0/701138524' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59259-22","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.252328+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.252348+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]': finished
2026-03-10T10:17:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: cluster 2026-03-10T10:17:34.256912+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-10T10:17:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.257791+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.260402+0000 mon.a (mon.0) 1402 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.327912+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.328160+0000 mon.a (mon.0) 1403 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.329477+0000 mon.c (mon.2) 198 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.329681+0000 mon.a (mon.0) 1404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.331486+0000 mon.c (mon.2) 199 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.331712+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.412022+0000 mon.c (mon.2) 200 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.412585+0000 mon.c (mon.2) 201 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch
2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:35 vm04 bash[28289]: audit 2026-03-10T10:17:34.412862+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch
' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.252264+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? 192.168.123.104:0/701138524' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59259-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.252264+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? 192.168.123.104:0/701138524' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59259-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.252328+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.252328+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.252348+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]': finished 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.252348+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]': finished 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: cluster 2026-03-10T10:17:34.256912+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: cluster 2026-03-10T10:17:34.256912+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.257791+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.257791+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.260402+0000 mon.a (mon.0) 1402 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.260402+0000 mon.a (mon.0) 1402 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.327912+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.327912+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.328160+0000 mon.a (mon.0) 1403 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.328160+0000 mon.a (mon.0) 1403 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.329477+0000 mon.c (mon.2) 198 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.329477+0000 mon.c (mon.2) 198 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.329681+0000 mon.a (mon.0) 1404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.329681+0000 mon.a (mon.0) 1404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.331486+0000 mon.c (mon.2) 199 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.331486+0000 mon.c (mon.2) 199 : audit [INF] from='client.? 
192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.331712+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.331712+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.412022+0000 mon.c (mon.2) 200 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.412022+0000 mon.c (mon.2) 200 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.412585+0000 mon.c (mon.2) 201 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.412585+0000 mon.c (mon.2) 201 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.412862+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:35.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:35 vm04 bash[20742]: audit 2026-03-10T10:17:34.412862+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch 2026-03-10T10:17:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.252264+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? 192.168.123.104:0/701138524' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime_vm04-59259-22","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.252264+0000 mon.a (mon.0) 1398 : audit [INF] from='client.? 
2026-03-10T10:17:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.252328+0000 mon.a (mon.0) 1399 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWrite_vm04-59252-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.252348+0000 mon.a (mon.0) 1400 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsECPP_vm04-59541-16"}]': finished
2026-03-10T10:17:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: cluster 2026-03-10T10:17:34.256912+0000 mon.a (mon.0) 1401 : cluster [DBG] osdmap e139: 8 total, 8 up, 8 in
2026-03-10T10:17:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.257791+0000 mon.c (mon.2) 196 : audit [INF] from='client.? 192.168.123.104:0/550102913' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.260402+0000 mon.a (mon.0) 1402 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]: dispatch
2026-03-10T10:17:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.327912+0000 mon.c (mon.2) 197 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.328160+0000 mon.a (mon.0) 1403 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.329477+0000 mon.c (mon.2) 198 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:35.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.329681+0000 mon.a (mon.0) 1404 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:35.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.331486+0000 mon.c (mon.2) 199 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:35.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.331712+0000 mon.a (mon.0) 1405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:35.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.412022+0000 mon.c (mon.2) 200 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:35.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.412585+0000 mon.c (mon.2) 201 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch
2026-03-10T10:17:35.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:35 vm07 bash[23367]: audit 2026-03-10T10:17:34.412862+0000 mon.a (mon.0) 1406 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]: dispatch
2026-03-10T10:17:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: cluster 2026-03-10T10:17:34.390099+0000 mgr.y (mgr.24422) 171 : cluster [DBG] pgmap v164: 420 pgs: 64 creating+peering, 1 peering, 355 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:17:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: cluster 2026-03-10T10:17:35.309055+0000 mon.a (mon.0) 1407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.311418+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]': finished
2026-03-10T10:17:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.311564+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.311671+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished
2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.317097+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 
192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.317097+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: cluster 2026-03-10T10:17:35.317492+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: cluster 2026-03-10T10:17:35.317492+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.318430+0000 mon.c (mon.2) 203 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.318430+0000 mon.c (mon.2) 203 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.320238+0000 mon.a (mon.0) 1412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.320238+0000 mon.a (mon.0) 1412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.341971+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.341971+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.342574+0000 mon.c (mon.2) 204 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.342574+0000 mon.c (mon.2) 204 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.345076+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.345076+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.345727+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.345727+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.345774+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.345774+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.347403+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.347403+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.348044+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:36 vm04 bash[28289]: audit 2026-03-10T10:17:35.348044+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: cluster 2026-03-10T10:17:34.390099+0000 mgr.y (mgr.24422) 171 : cluster [DBG] pgmap v164: 420 pgs: 64 creating+peering, 1 peering, 355 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: cluster 2026-03-10T10:17:34.390099+0000 mgr.y (mgr.24422) 171 : cluster [DBG] pgmap v164: 420 pgs: 64 creating+peering, 1 peering, 355 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: cluster 2026-03-10T10:17:35.309055+0000 mon.a (mon.0) 1407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: cluster 2026-03-10T10:17:35.309055+0000 mon.a (mon.0) 1407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.311418+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]': finished 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.311418+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]': finished 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.311564+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.311564+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.311671+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.311671+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.317097+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.317097+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: cluster 2026-03-10T10:17:35.317492+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: cluster 2026-03-10T10:17:35.317492+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.318430+0000 mon.c (mon.2) 203 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.318430+0000 mon.c (mon.2) 203 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.320238+0000 mon.a (mon.0) 1412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.320238+0000 mon.a (mon.0) 1412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.341971+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 
192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.341971+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.342574+0000 mon.c (mon.2) 204 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.342574+0000 mon.c (mon.2) 204 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.345076+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.345076+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.345727+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:36.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.345727+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:36.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.345774+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.345774+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.347403+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.347403+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:36.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.348044+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:36.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:36 vm04 bash[20742]: audit 2026-03-10T10:17:35.348044+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: cluster 2026-03-10T10:17:34.390099+0000 mgr.y (mgr.24422) 171 : cluster [DBG] pgmap v164: 420 pgs: 64 creating+peering, 1 peering, 355 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: cluster 2026-03-10T10:17:34.390099+0000 mgr.y (mgr.24422) 171 : cluster [DBG] pgmap v164: 420 pgs: 64 creating+peering, 1 peering, 355 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: cluster 2026-03-10T10:17:35.309055+0000 mon.a (mon.0) 1407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: cluster 2026-03-10T10:17:35.309055+0000 mon.a (mon.0) 1407 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.311418+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]': finished 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.311418+0000 mon.a (mon.0) 1408 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsECPP_vm04-59541-16"}]': finished 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.311564+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.311564+0000 mon.a (mon.0) 1409 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59484-24", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.311671+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.311671+0000 mon.a (mon.0) 1410 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pg_num","val":"11"}]': finished 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.317097+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.317097+0000 mon.c (mon.2) 202 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: cluster 2026-03-10T10:17:35.317492+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: cluster 2026-03-10T10:17:35.317492+0000 mon.a (mon.0) 1411 : cluster [DBG] osdmap e140: 8 total, 8 up, 8 in 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.318430+0000 mon.c (mon.2) 203 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.318430+0000 mon.c (mon.2) 203 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.320238+0000 mon.a (mon.0) 1412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.320238+0000 mon.a (mon.0) 1412 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.341971+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.341971+0000 mon.b (mon.1) 131 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.342574+0000 mon.c (mon.2) 204 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.342574+0000 mon.c (mon.2) 204 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.345076+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.345076+0000 mon.b (mon.1) 132 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.345727+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.345727+0000 mon.b (mon.1) 133 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.345774+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.345774+0000 mon.a (mon.0) 1413 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.347403+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.347403+0000 mon.a (mon.0) 1414 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.348044+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:36 vm07 bash[23367]: audit 2026-03-10T10:17:35.348044+0000 mon.a (mon.0) 1415 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.345447+0000 mon.c (mon.2) 205 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.345447+0000 mon.c (mon.2) 205 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: cluster 2026-03-10T10:17:36.390742+0000 mgr.y (mgr.24422) 172 : cluster [DBG] pgmap v166: 324 pgs: 1 peering, 323 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: cluster 2026-03-10T10:17:36.390742+0000 mgr.y (mgr.24422) 172 : cluster [DBG] pgmap v166: 324 pgs: 1 peering, 323 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.463843+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.463843+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.516263+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.516263+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: cluster 2026-03-10T10:17:36.526571+0000 mon.a (mon.0) 1417 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: cluster 2026-03-10T10:17:36.526571+0000 mon.a (mon.0) 1417 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.538025+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.104:0/2632516138' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.538025+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.104:0/2632516138' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.538292+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.104:0/2390619970' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.538292+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.104:0/2390619970' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.541219+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.541219+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.557836+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.557836+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.557936+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:36.557936+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:37.346175+0000 mon.c (mon.2) 206 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:37.346175+0000 mon.c (mon.2) 206 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:37.467836+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:37.467836+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:37.467899+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:37.467899+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:37.467938+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: audit 2026-03-10T10:17:37.467938+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: cluster 2026-03-10T10:17:37.504589+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:37 vm04 bash[28289]: cluster 2026-03-10T10:17:37.504589+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.345447+0000 mon.c (mon.2) 205 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.345447+0000 mon.c (mon.2) 205 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: cluster 2026-03-10T10:17:36.390742+0000 mgr.y (mgr.24422) 172 : cluster [DBG] pgmap v166: 324 pgs: 1 peering, 323 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: cluster 2026-03-10T10:17:36.390742+0000 mgr.y (mgr.24422) 172 : cluster [DBG] pgmap v166: 324 pgs: 1 peering, 323 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.463843+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.463843+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.516263+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.516263+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: cluster 2026-03-10T10:17:36.526571+0000 mon.a (mon.0) 1417 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: cluster 2026-03-10T10:17:36.526571+0000 mon.a (mon.0) 1417 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.538025+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.104:0/2632516138' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.538025+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.104:0/2632516138' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.538292+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.104:0/2390619970' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.538292+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 
192.168.123.104:0/2390619970' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.541219+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.541219+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.557836+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.557836+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.557936+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:36.557936+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:37.346175+0000 mon.c (mon.2) 206 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:37.346175+0000 mon.c (mon.2) 206 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:37.467836+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:37.467836+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]': finished 2026-03-10T10:17:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:37.467899+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:37.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:37.467899+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:37.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:37.467938+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:37.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: audit 2026-03-10T10:17:37.467938+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:37.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: cluster 2026-03-10T10:17:37.504589+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T10:17:37.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:37 vm04 bash[20742]: cluster 2026-03-10T10:17:37.504589+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.345447+0000 mon.c (mon.2) 205 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.345447+0000 mon.c (mon.2) 205 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: cluster 2026-03-10T10:17:36.390742+0000 mgr.y (mgr.24422) 172 : cluster [DBG] pgmap v166: 324 pgs: 1 peering, 323 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: cluster 2026-03-10T10:17:36.390742+0000 mgr.y (mgr.24422) 172 : cluster [DBG] pgmap v166: 324 pgs: 1 peering, 323 active+clean; 464 KiB data, 1.2 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.463843+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.463843+0000 mon.a (mon.0) 1416 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.516263+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.516263+0000 mon.b (mon.1) 134 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: cluster 2026-03-10T10:17:36.526571+0000 mon.a (mon.0) 1417 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: cluster 2026-03-10T10:17:36.526571+0000 mon.a (mon.0) 1417 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.538025+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 192.168.123.104:0/2632516138' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.538025+0000 mon.b (mon.1) 135 : audit [INF] from='client.? 
192.168.123.104:0/2632516138' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.538292+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.104:0/2390619970' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.538292+0000 mon.b (mon.1) 136 : audit [INF] from='client.? 192.168.123.104:0/2390619970' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.541219+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.541219+0000 mon.a (mon.0) 1418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.557836+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.557836+0000 mon.a (mon.0) 1419 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.557936+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:36.557936+0000 mon.a (mon.0) 1420 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:37.346175+0000 mon.c (mon.2) 206 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:37.346175+0000 mon.c (mon.2) 206 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:37.467836+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]': finished 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:37.467836+0000 mon.a (mon.0) 1421 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59484-24", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59484-24"}]': finished 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:37.467899+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:37.467899+0000 mon.a (mon.0) 1422 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlock_vm04-59252-26","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:37.467938+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: audit 2026-03-10T10:17:37.467938+0000 mon.a (mon.0) 1423 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OperateMtime2_vm04-59259-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: cluster 2026-03-10T10:17:37.504589+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T10:17:38.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:37 vm07 bash[23367]: cluster 2026-03-10T10:17:37.504589+0000 mon.a (mon.0) 1424 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in 2026-03-10T10:17:38.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:17:38 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.260794+0000 mgr.y (mgr.24422) 173 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.260794+0000 mgr.y (mgr.24422) 173 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.346773+0000 mon.c (mon.2) 207 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.346773+0000 mon.c (mon.2) 207 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.392154+0000 mon.a (mon.0) 1425 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.392154+0000 mon.a (mon.0) 1425 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.471871+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.471871+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.471968+0000 mon.a (mon.0) 1427 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.471968+0000 mon.a (mon.0) 1427 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: cluster 2026-03-10T10:17:38.486061+0000 mon.a (mon.0) 1428 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: cluster 2026-03-10T10:17:38.486061+0000 mon.a (mon.0) 1428 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.503864+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.503864+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.508375+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.508375+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.509918+0000 mon.a (mon.0) 1429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.509918+0000 mon.a (mon.0) 1429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.512370+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 
192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.512370+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.514041+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.514041+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.516177+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.516177+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.517395+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.517395+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.525782+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:38 vm04 bash[28289]: audit 2026-03-10T10:17:38.525782+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.260794+0000 mgr.y (mgr.24422) 173 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.260794+0000 mgr.y (mgr.24422) 173 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.346773+0000 mon.c (mon.2) 207 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.346773+0000 mon.c (mon.2) 207 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.392154+0000 mon.a (mon.0) 1425 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.392154+0000 mon.a (mon.0) 1425 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.471871+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.471871+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.471968+0000 mon.a (mon.0) 1427 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.471968+0000 mon.a (mon.0) 1427 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: cluster 2026-03-10T10:17:38.486061+0000 mon.a (mon.0) 1428 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: cluster 2026-03-10T10:17:38.486061+0000 mon.a (mon.0) 1428 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.503864+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.503864+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.508375+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.508375+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.509918+0000 mon.a (mon.0) 1429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.509918+0000 mon.a (mon.0) 1429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.512370+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 
192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.512370+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.514041+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.514041+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.516177+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.516177+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.517395+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.517395+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.525782+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:38 vm04 bash[20742]: audit 2026-03-10T10:17:38.525782+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.260794+0000 mgr.y (mgr.24422) 173 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.260794+0000 mgr.y (mgr.24422) 173 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.346773+0000 mon.c (mon.2) 207 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.346773+0000 mon.c (mon.2) 207 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.392154+0000 mon.a (mon.0) 1425 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.392154+0000 mon.a (mon.0) 1425 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.471871+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.471871+0000 mon.a (mon.0) 1426 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosSnapshotsSelfManagedECPP_vm04-59541-21", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.471968+0000 mon.a (mon.0) 1427 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.471968+0000 mon.a (mon.0) 1427 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "31"}]': finished 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: cluster 2026-03-10T10:17:38.486061+0000 mon.a (mon.0) 1428 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: cluster 2026-03-10T10:17:38.486061+0000 mon.a (mon.0) 1428 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.503864+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.503864+0000 mon.b (mon.1) 137 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.508375+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.508375+0000 mon.b (mon.1) 138 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.509918+0000 mon.a (mon.0) 1429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.509918+0000 mon.a (mon.0) 1429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.512370+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 
192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.512370+0000 mon.b (mon.1) 139 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.514041+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.514041+0000 mon.a (mon.0) 1430 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.516177+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.516177+0000 mon.a (mon.0) 1431 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.517395+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.517395+0000 mon.c (mon.2) 208 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.525782+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:38 vm07 bash[23367]: audit 2026-03-10T10:17:38.525782+0000 mon.a (mon.0) 1432 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: cluster 2026-03-10T10:17:38.391319+0000 mgr.y (mgr.24422) 174 : cluster [DBG] pgmap v169: 396 pgs: 5 creating+peering, 37 unknown, 354 active+clean; 464 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 495 B/s rd, 2.2 KiB/s wr, 6 op/s; 30 B/s, 0 objects/s recovering 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: cluster 2026-03-10T10:17:38.391319+0000 mgr.y (mgr.24422) 174 : cluster [DBG] pgmap v169: 396 pgs: 5 creating+peering, 37 unknown, 354 active+clean; 464 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 495 B/s rd, 2.2 KiB/s wr, 6 op/s; 30 B/s, 0 objects/s recovering 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.347566+0000 mon.c (mon.2) 209 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.347566+0000 mon.c (mon.2) 209 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.475975+0000 mon.a (mon.0) 1433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.475975+0000 mon.a (mon.0) 1433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.476093+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.476093+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.477463+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.477463+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 
192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.498053+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.498053+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.499918+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.104:0/1788598374' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.499918+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.104:0/1788598374' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: cluster 2026-03-10T10:17:39.501688+0000 mon.a (mon.0) 1435 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: cluster 2026-03-10T10:17:39.501688+0000 mon.a (mon.0) 1435 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.512017+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.512017+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.512456+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.512456+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.512527+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:39 vm04 bash[28289]: audit 2026-03-10T10:17:39.512527+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: cluster 2026-03-10T10:17:38.391319+0000 mgr.y (mgr.24422) 174 : cluster [DBG] pgmap v169: 396 pgs: 5 creating+peering, 37 unknown, 354 active+clean; 464 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 495 B/s rd, 2.2 KiB/s wr, 6 op/s; 30 B/s, 0 objects/s recovering 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: cluster 2026-03-10T10:17:38.391319+0000 mgr.y (mgr.24422) 174 : cluster [DBG] pgmap v169: 396 pgs: 5 creating+peering, 37 unknown, 354 active+clean; 464 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 495 B/s rd, 2.2 KiB/s wr, 6 op/s; 30 B/s, 0 objects/s recovering 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.347566+0000 mon.c (mon.2) 209 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.347566+0000 mon.c (mon.2) 209 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.475975+0000 mon.a (mon.0) 1433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.475975+0000 mon.a (mon.0) 1433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.476093+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.476093+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.477463+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.477463+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.498053+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.498053+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.499918+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.104:0/1788598374' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.499918+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.104:0/1788598374' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: cluster 2026-03-10T10:17:39.501688+0000 mon.a (mon.0) 1435 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: cluster 2026-03-10T10:17:39.501688+0000 mon.a (mon.0) 1435 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.512017+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.512017+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.512456+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.512456+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.512527+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:39.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:39 vm04 bash[20742]: audit 2026-03-10T10:17:39.512527+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: cluster 2026-03-10T10:17:38.391319+0000 mgr.y (mgr.24422) 174 : cluster [DBG] pgmap v169: 396 pgs: 5 creating+peering, 37 unknown, 354 active+clean; 464 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 495 B/s rd, 2.2 KiB/s wr, 6 op/s; 30 B/s, 0 objects/s recovering 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: cluster 2026-03-10T10:17:38.391319+0000 mgr.y (mgr.24422) 174 : cluster [DBG] pgmap v169: 396 pgs: 5 creating+peering, 37 unknown, 354 active+clean; 464 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 495 B/s rd, 2.2 KiB/s wr, 6 op/s; 30 B/s, 0 objects/s recovering 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.347566+0000 mon.c (mon.2) 209 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.347566+0000 mon.c (mon.2) 209 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.475975+0000 mon.a (mon.0) 1433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.475975+0000 mon.a (mon.0) 1433 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWrite_vm04-59252-27", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.476093+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.476093+0000 mon.a (mon.0) 1434 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-24","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.477463+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.477463+0000 mon.b (mon.1) 140 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.498053+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.498053+0000 mon.c (mon.2) 210 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.499918+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 192.168.123.104:0/1788598374' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.499918+0000 mon.b (mon.1) 141 : audit [INF] from='client.? 
192.168.123.104:0/1788598374' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: cluster 2026-03-10T10:17:39.501688+0000 mon.a (mon.0) 1435 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: cluster 2026-03-10T10:17:39.501688+0000 mon.a (mon.0) 1435 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.512017+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.512017+0000 mon.a (mon.0) 1436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.512456+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.512456+0000 mon.a (mon.0) 1437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:40.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.512527+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:40.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:39 vm07 bash[23367]: audit 2026-03-10T10:17:39.512527+0000 mon.a (mon.0) 1438 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:40 vm04 bash[28289]: audit 2026-03-10T10:17:40.348283+0000 mon.c (mon.2) 211 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:40 vm04 bash[28289]: audit 2026-03-10T10:17:40.348283+0000 mon.c (mon.2) 211 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:40 vm04 bash[28289]: audit 2026-03-10T10:17:40.479882+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? 
2026-03-10T10:17:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:40 vm04 bash[28289]: audit 2026-03-10T10:17:40.479990+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:40 vm04 bash[28289]: cluster 2026-03-10T10:17:40.490619+0000 mon.a (mon.0) 1441 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in
2026-03-10T10:17:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:40 vm04 bash[20742]: audit 2026-03-10T10:17:40.348283+0000 mon.c (mon.2) 211 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:40 vm04 bash[20742]: audit 2026-03-10T10:17:40.479882+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:40 vm04 bash[20742]: audit 2026-03-10T10:17:40.479990+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:40 vm04 bash[20742]: cluster 2026-03-10T10:17:40.490619+0000 mon.a (mon.0) 1441 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in
2026-03-10T10:17:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:40 vm07 bash[23367]: audit 2026-03-10T10:17:40.348283+0000 mon.c (mon.2) 211 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:40 vm07 bash[23367]: audit 2026-03-10T10:17:40.479882+0000 mon.a (mon.0) 1439 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59484-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:40 vm07 bash[23367]: audit 2026-03-10T10:17:40.479990+0000 mon.a (mon.0) 1440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "StatRemovePP_vm04-59259-24","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:40 vm07 bash[23367]: cluster 2026-03-10T10:17:40.490619+0000 mon.a (mon.0) 1441 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in
2026-03-10T10:17:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:41 vm04 bash[28289]: cluster 2026-03-10T10:17:40.391665+0000 mgr.y (mgr.24422) 175 : cluster [DBG] pgmap v172: 404 pgs: 20 creating+peering, 57 unknown, 327 active+clean; 464 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.0 KiB/s wr, 9 op/s; 31 B/s, 0 objects/s recovering
2026-03-10T10:17:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:41 vm04 bash[28289]: cluster 2026-03-10T10:17:40.566685+0000 mon.a (mon.0) 1442 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:41 vm04 bash[28289]: audit 2026-03-10T10:17:41.349134+0000 mon.c (mon.2) 212 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:41 vm04 bash[28289]: audit 2026-03-10T10:17:41.483505+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]': finished
2026-03-10T10:17:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:41 vm04 bash[28289]: cluster 2026-03-10T10:17:41.490614+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in
2026-03-10T10:17:41.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:41 vm04 bash[20742]: cluster 2026-03-10T10:17:40.391665+0000 mgr.y (mgr.24422) 175 : cluster [DBG] pgmap v172: 404 pgs: 20 creating+peering, 57 unknown, 327 active+clean; 464 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.0 KiB/s wr, 9 op/s; 31 B/s, 0 objects/s recovering
2026-03-10T10:17:41.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:41 vm04 bash[20742]: cluster 2026-03-10T10:17:40.566685+0000 mon.a (mon.0) 1442 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:41.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:41 vm04 bash[20742]: audit 2026-03-10T10:17:41.349134+0000 mon.c (mon.2) 212 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:41.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:41 vm04 bash[20742]: audit 2026-03-10T10:17:41.483505+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]': finished
2026-03-10T10:17:41.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:41 vm04 bash[20742]: cluster 2026-03-10T10:17:41.490614+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in
2026-03-10T10:17:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:41 vm07 bash[23367]: cluster 2026-03-10T10:17:40.391665+0000 mgr.y (mgr.24422) 175 : cluster [DBG] pgmap v172: 404 pgs: 20 creating+peering, 57 unknown, 327 active+clean; 464 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.0 KiB/s wr, 9 op/s; 31 B/s, 0 objects/s recovering
2026-03-10T10:17:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:41 vm07 bash[23367]: cluster 2026-03-10T10:17:40.566685+0000 mon.a (mon.0) 1442 : cluster [WRN] Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:41 vm07 bash[23367]: audit 2026-03-10T10:17:41.349134+0000 mon.c (mon.2) 212 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:41 vm07 bash[23367]: audit 2026-03-10T10:17:41.483505+0000 mon.a (mon.0) 1443 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWrite_vm04-59252-27", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWrite_vm04-59252-27"}]': finished
2026-03-10T10:17:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:41 vm07 bash[23367]: cluster 2026-03-10T10:17:41.490614+0000 mon.a (mon.0) 1444 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:42 vm04 bash[28289]: audit 2026-03-10T10:17:42.349952+0000 mon.c (mon.2) 213 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:42 vm04 bash[28289]: cluster 2026-03-10T10:17:42.495701+0000 mon.a (mon.0) 1445 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:42 vm04 bash[28289]: audit 2026-03-10T10:17:42.523234+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:42 vm04 bash[28289]: audit 2026-03-10T10:17:42.523381+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.104:0/3379441119' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm04-59259-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:42 vm04 bash[28289]: audit 2026-03-10T10:17:42.525086+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:42 vm04 bash[28289]: audit 2026-03-10T10:17:42.525158+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm04-59259-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:42 vm04 bash[20742]: audit 2026-03-10T10:17:42.349952+0000 mon.c (mon.2) 213 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:42 vm04 bash[20742]: cluster 2026-03-10T10:17:42.495701+0000 mon.a (mon.0) 1445 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:42 vm04 bash[20742]: audit 2026-03-10T10:17:42.523234+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:42 vm04 bash[20742]: audit 2026-03-10T10:17:42.523381+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.104:0/3379441119' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm04-59259-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:42 vm04 bash[20742]: audit 2026-03-10T10:17:42.525086+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:42 vm04 bash[20742]: audit 2026-03-10T10:17:42.525158+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm04-59259-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:42 vm07 bash[23367]: audit 2026-03-10T10:17:42.349952+0000 mon.c (mon.2) 213 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:42 vm07 bash[23367]: cluster 2026-03-10T10:17:42.495701+0000 mon.a (mon.0) 1445 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in
2026-03-10T10:17:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:42 vm07 bash[23367]: audit 2026-03-10T10:17:42.523234+0000 mon.c (mon.2) 214 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:42 vm07 bash[23367]: audit 2026-03-10T10:17:42.523381+0000 mon.c (mon.2) 215 : audit [INF] from='client.? 192.168.123.104:0/3379441119' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm04-59259-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:42 vm07 bash[23367]: audit 2026-03-10T10:17:42.525086+0000 mon.a (mon.0) 1446 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:42 vm07 bash[23367]: audit 2026-03-10T10:17:42.525158+0000 mon.a (mon.0) 1447 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm04-59259-25","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:17:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:17:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:17:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:43 vm07 bash[23367]: cluster 2026-03-10T10:17:42.392023+0000 mgr.y (mgr.24422) 176 : cluster [DBG] pgmap v175: 347 pgs: 7 creating+peering, 14 unknown, 326 active+clean; 464 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T10:17:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:43 vm07 bash[23367]: audit 2026-03-10T10:17:42.751393+0000 mon.a (mon.0) 1448 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:43 vm07 bash[23367]: audit 2026-03-10T10:17:43.350830+0000 mon.c (mon.2) 216 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:43 vm04 bash[28289]: cluster 2026-03-10T10:17:42.392023+0000 mgr.y (mgr.24422) 176 : cluster [DBG] pgmap v175: 347 pgs: 7 creating+peering, 14 unknown, 326 active+clean; 464 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T10:17:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:43 vm04 bash[28289]: audit 2026-03-10T10:17:42.751393+0000 mon.a (mon.0) 1448 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:43 vm04 bash[28289]: audit 2026-03-10T10:17:43.350830+0000 mon.c (mon.2) 216 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:43 vm04 bash[20742]: cluster 2026-03-10T10:17:42.392023+0000 mgr.y (mgr.24422) 176 : cluster [DBG] pgmap v175: 347 pgs: 7 creating+peering, 14 unknown, 326 active+clean; 464 KiB data, 632 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T10:17:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:43 vm04 bash[20742]: audit 2026-03-10T10:17:42.751393+0000 mon.a (mon.0) 1448 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:43 vm04 bash[20742]: audit 2026-03-10T10:17:43.350830+0000 mon.c (mon.2) 216 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:43.726032+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]': finished
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:43.726138+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm04-59259-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: cluster 2026-03-10T10:17:43.731657+0000 mon.a (mon.0) 1451 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:43.733755+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:43.737093+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:43.756071+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:43.764271+0000 mon.a (mon.0) 1453 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:44.351707+0000 mon.c (mon.2) 218 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:44.729999+0000 mon.a (mon.0) 1454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]': finished
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:44.730065+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]': finished
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: cluster 2026-03-10T10:17:44.734121+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in
2026-03-10T10:17:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:44.734303+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:44 vm04 bash[28289]: audit 2026-03-10T10:17:44.740637+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:43.726032+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]': finished
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:43.726138+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm04-59259-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: cluster 2026-03-10T10:17:43.731657+0000 mon.a (mon.0) 1451 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:43.733755+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:43.737093+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:43.756071+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:43.764271+0000 mon.a (mon.0) 1453 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:44.351707+0000 mon.c (mon.2) 218 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:44.729999+0000 mon.a (mon.0) 1454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]': finished
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:44.730065+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]': finished
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: cluster 2026-03-10T10:17:44.734121+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:44.734303+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:44 vm04 bash[20742]: audit 2026-03-10T10:17:44.740637+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:43.726032+0000 mon.a (mon.0) 1449 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59484-24"}]': finished
2026-03-10T10:17:45.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:43.726138+0000 mon.a (mon.0) 1450 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ExecuteClassPP_vm04-59259-25","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:45.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: cluster 2026-03-10T10:17:43.731657+0000 mon.a (mon.0) 1451 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in
2026-03-10T10:17:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:43.733755+0000 mon.c (mon.2) 217 : audit [INF] from='client.? 192.168.123.104:0/3477447496' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:43.737093+0000 mon.a (mon.0) 1452 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]: dispatch
2026-03-10T10:17:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:43.756071+0000 mon.b (mon.1) 142 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:43.764271+0000 mon.a (mon.0) 1453 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:44.351707+0000 mon.c (mon.2) 218 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
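The crush rule rm and osd erasure-code-profile rm records above are the mirror image of the setup: deleting an erasure-coded pool leaves its crush rule and profile behind, so each test removes the rule and profile it created once its pool is gone. A compact sketch, under the same assumptions (and with the same illustrative names) as the earlier snippet:

    /* Illustrative cleanup mirroring the audit records above; cluster is
     * a connected rados_t handle as set up in the earlier sketch. */
    #include <rados/librados.h>

    static void cleanup(rados_t cluster)
    {
        const char *cmds[] = {
            "{\"prefix\": \"osd crush rule rm\", \"name\": \"SimpleWrite_vm04-59252-27\"}",
            "{\"prefix\": \"osd erasure-code-profile rm\","
            " \"name\": \"testprofile-SimpleWrite_vm04-59252-27\"}",
        };
        for (int i = 0; i < 2; i++) {
            char *outbuf = NULL, *outs = NULL;
            size_t outbuf_len = 0, outs_len = 0;
            rados_mon_command(cluster, &cmds[i], 1, "", 0,
                              &outbuf, &outbuf_len, &outs, &outs_len);
            rados_buffer_free(outbuf);
            rados_buffer_free(outs);
        }
    }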
2026-03-10T10:17:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:44.729999+0000 mon.a (mon.0) 1454 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59484-24"}]': finished
2026-03-10T10:17:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:44.730065+0000 mon.a (mon.0) 1455 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWrite_vm04-59252-27"}]': finished
2026-03-10T10:17:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: cluster 2026-03-10T10:17:44.734121+0000 mon.a (mon.0) 1456 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in
2026-03-10T10:17:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:44.734303+0000 mon.b (mon.1) 143 : audit [INF] from='client.? 192.168.123.104:0/351104463' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:44 vm07 bash[23367]: audit 2026-03-10T10:17:44.740637+0000 mon.a (mon.0) 1457 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]: dispatch
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: Running main() from gmock_main.cc
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [==========] Running 42 tests from 2 test suites.
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [----------] Global test environment set-up. 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [----------] 26 tests from LibRadosAio 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.TooBig 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.TooBig (3159 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.SimpleWrite 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.SimpleWrite (3125 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.WaitForSafe 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.WaitForSafe (3078 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip (2706 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip2 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip2 (3183 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.RoundTrip3 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.RoundTrip3 (3012 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripAppend 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.RoundTripAppend (2831 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.RemoveTest 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.RemoveTest (3101 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.XattrsRoundTrip 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.XattrsRoundTrip (3214 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.RmXattr 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.RmXattr (3043 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.XattrIter 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.XattrIter (2650 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.IsComplete 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.IsComplete (2951 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.IsSafe 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.IsSafe (3011 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.ReturnValue 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.ReturnValue (3309 ms) 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.Flush 2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.Flush (2996 ms) 
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.FlushAsync
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.FlushAsync (2787 ms)
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteFull
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteFull (3051 ms)
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.RoundTripWriteSame
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.RoundTripWriteSame (3005 ms)
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStat
2026-03-10T10:17:45.763 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.SimpleStat (2987 ms)
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.OperateMtime
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.OperateMtime (3021 ms)
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.Operate2Mtime
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.Operate2Mtime (3157 ms)
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.SimpleStatNS
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.SimpleStatNS (2677 ms)
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.StatRemove
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.StatRemove (3055 ms)
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.ExecuteClass
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.ExecuteClass (2989 ms)
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.MultiWrite
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.MultiWrite (3088 ms)
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAio.AioUnlock
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAio.AioUnlock (3157 ms)
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [----------] 26 tests from LibRadosAio (78343 ms total)
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio:
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [----------] 16 tests from LibRadosAioEC
2026-03-10T10:17:45.764 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleWrite
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: cluster 2026-03-10T10:17:44.392456+0000 mgr.y (mgr.24422) 177 : cluster [DBG] pgmap v178: 363 pgs: 32 creating+peering, 34 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 287 active+clean; 463 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 2 op/s
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:44.930733+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.352653+0000 mon.c (mon.2) 219 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.734296+0000 mon.a (mon.0) 1459 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]': finished
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.734416+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: cluster 2026-03-10T10:17:45.741963+0000 mon.a (mon.0) 1461 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.742767+0000 mon.a (mon.0) 1462 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13"}]: dispatch
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.759809+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.104:0/336261947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm04-59259-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.767735+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.104:0/1215341871' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.769873+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm04-59259-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.770326+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.779983+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.781474+0000 mon.a (mon.0) 1465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.782535+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.782901+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.784549+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm04-59252-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:45 vm04 bash[28289]: audit 2026-03-10T10:17:45.784935+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm04-59252-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: cluster 2026-03-10T10:17:44.392456+0000 mgr.y (mgr.24422) 177 : cluster [DBG] pgmap v178: 363 pgs: 32 creating+peering, 34 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 287 active+clean; 463 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 2 op/s
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:44.930733+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.352653+0000 mon.c (mon.2) 219 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.734296+0000 mon.a (mon.0) 1459 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]': finished
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.734416+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: cluster 2026-03-10T10:17:45.741963+0000 mon.a (mon.0) 1461 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.742767+0000 mon.a (mon.0) 1462 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.759809+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.104:0/336261947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm04-59259-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.767735+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.104:0/1215341871' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.769873+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm04-59259-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.770326+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.779983+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.781474+0000 mon.a (mon.0) 1465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.782535+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.782901+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.784549+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm04-59252-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:45 vm04 bash[20742]: audit 2026-03-10T10:17:45.784935+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm04-59252-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: cluster 2026-03-10T10:17:44.392456+0000 mgr.y (mgr.24422) 177 : cluster [DBG] pgmap v178: 363 pgs: 32 creating+peering, 34 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 287 active+clean; 463 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 4.0 KiB/s wr, 2 op/s
2026-03-10T10:17:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:44.930733+0000 mon.a (mon.0) 1458 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:17:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.352653+0000 mon.c (mon.2) 219 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.734296+0000 mon.a (mon.0) 1459 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWrite_vm04-59252-27"}]': finished
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.734416+0000 mon.a (mon.0) 1460 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: cluster 2026-03-10T10:17:45.741963+0000 mon.a (mon.0) 1461 : cluster [DBG] osdmap e150: 8 total, 8 up, 8 in
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.742767+0000 mon.a (mon.0) 1462 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13"}]: dispatch
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.759809+0000 mon.b (mon.1) 144 : audit [INF] from='client.? 192.168.123.104:0/336261947' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm04-59259-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.767735+0000 mon.b (mon.1) 145 : audit [INF] from='client.? 192.168.123.104:0/1215341871' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.769873+0000 mon.a (mon.0) 1463 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "OmapPP_vm04-59259-26","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.770326+0000 mon.a (mon.0) 1464 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.779983+0000 mon.c (mon.2) 220 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.781474+0000 mon.a (mon.0) 1465 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.782535+0000 mon.c (mon.2) 221 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.782901+0000 mon.a (mon.0) 1466 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.784549+0000 mon.c (mon.2) 222 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm04-59252-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:45 vm07 bash[23367]: audit 2026-03-10T10:17:45.784935+0000 mon.a (mon.0) 1467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm04-59252-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:46 vm04 bash[28289]: cluster 2026-03-10T10:17:46.201915+0000 mon.a (mon.0) 1468 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:46 vm04 bash[28289]: audit 2026-03-10T10:17:46.353548+0000 mon.c (mon.2) 223 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:46 vm04 bash[28289]: cluster 2026-03-10T10:17:46.734607+0000 mon.a (mon.0) 1469 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:17:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:46 vm04 bash[28289]: audit 2026-03-10T10:17:46.765476+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13"}]': finished
2026-03-10T10:17:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:46 vm04 bash[28289]: audit 2026-03-10T10:17:46.765770+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm04-59259-26","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:46 vm04 bash[28289]: audit 2026-03-10T10:17:46.765856+0000 mon.a (mon.0) 1472 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:46 vm04 bash[28289]: audit 2026-03-10T10:17:46.766002+0000 mon.a (mon.0) 1473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm04-59252-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:46 vm04 bash[28289]: cluster 2026-03-10T10:17:46.777044+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in
2026-03-10T10:17:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:46 vm04 bash[28289]: audit 2026-03-10T10:17:46.791631+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:46 vm04 bash[28289]: audit 2026-03-10T10:17:46.792233+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:46 vm04 bash[20742]: cluster 2026-03-10T10:17:46.201915+0000 mon.a (mon.0) 1468 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:46 vm04 bash[20742]: audit 2026-03-10T10:17:46.353548+0000 mon.c (mon.2) 223 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:46 vm04 bash[20742]: cluster 2026-03-10T10:17:46.734607+0000 mon.a (mon.0) 1469 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:17:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:46 vm04 bash[20742]: audit 2026-03-10T10:17:46.765476+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13"}]': finished
2026-03-10T10:17:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:46 vm04 bash[20742]: audit 2026-03-10T10:17:46.765770+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm04-59259-26","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:46 vm04 bash[20742]: audit 2026-03-10T10:17:46.765856+0000 mon.a (mon.0) 1472 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:46 vm04 bash[20742]: audit 2026-03-10T10:17:46.766002+0000 mon.a (mon.0) 1473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm04-59252-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:46 vm04 bash[20742]: cluster 2026-03-10T10:17:46.777044+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in
2026-03-10T10:17:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:46 vm04 bash[20742]: audit 2026-03-10T10:17:46.791631+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:47.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:46 vm04 bash[20742]: audit 2026-03-10T10:17:46.792233+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: cluster 2026-03-10T10:17:46.201915+0000 mon.a (mon.0) 1468 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.353548+0000 mon.c (mon.2) 223 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: cluster 2026-03-10T10:17:46.734607+0000 mon.a (mon.0) 1469 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.765476+0000 mon.a (mon.0) 1470 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-13"}]': finished
2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.765770+0000 mon.a (mon.0) 1471 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "OmapPP_vm04-59259-26","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.765856+0000 mon.a (mon.0) 1472 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]': finished
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.765856+0000 mon.a (mon.0) 1472 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/0_vm04-59484-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.766002+0000 mon.a (mon.0) 1473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm04-59252-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.766002+0000 mon.a (mon.0) 1473 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForComplete_vm04-59252-28", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: cluster 2026-03-10T10:17:46.777044+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: cluster 2026-03-10T10:17:46.777044+0000 mon.a (mon.0) 1474 : cluster [DBG] osdmap e151: 8 total, 8 up, 8 in 2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.791631+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.791631+0000 mon.c (mon.2) 224 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.792233+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:46 vm07 bash[23367]: audit 2026-03-10T10:17:46.792233+0000 mon.a (mon.0) 1475 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:47 vm04 bash[28289]: cluster 2026-03-10T10:17:46.392851+0000 mgr.y (mgr.24422) 178 : cluster [DBG] pgmap v181: 395 pgs: 64 unknown, 34 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 287 active+clean; 463 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 5.0 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:17:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:47 vm04 bash[28289]: cluster 2026-03-10T10:17:46.392851+0000 mgr.y (mgr.24422) 178 : cluster [DBG] pgmap v181: 395 pgs: 64 unknown, 34 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 287 active+clean; 463 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 5.0 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:17:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:47 vm04 bash[28289]: audit 2026-03-10T10:17:47.354425+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:47 vm04 bash[28289]: audit 2026-03-10T10:17:47.354425+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:47 vm04 bash[20742]: cluster 2026-03-10T10:17:46.392851+0000 mgr.y (mgr.24422) 178 : cluster [DBG] pgmap v181: 395 pgs: 64 unknown, 34 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 287 active+clean; 463 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 5.0 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:17:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:47 vm04 bash[20742]: cluster 2026-03-10T10:17:46.392851+0000 mgr.y (mgr.24422) 178 : cluster [DBG] pgmap v181: 395 pgs: 64 unknown, 34 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 287 active+clean; 463 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 5.0 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:17:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:47 vm04 bash[20742]: audit 2026-03-10T10:17:47.354425+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:47 vm04 bash[20742]: audit 2026-03-10T10:17:47.354425+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 
2026-03-10T10:17:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:47 vm07 bash[23367]: cluster 2026-03-10T10:17:46.392851+0000 mgr.y (mgr.24422) 178 : cluster [DBG] pgmap v181: 395 pgs: 64 unknown, 34 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 287 active+clean; 463 KiB data, 640 MiB used, 159 GiB / 160 GiB avail; 3.2 KiB/s rd, 5.0 KiB/s wr, 3 op/s; 31 B/s, 0 objects/s recovering
2026-03-10T10:17:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:47 vm07 bash[23367]: audit 2026-03-10T10:17:47.354425+0000 mon.c (mon.2) 225 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:48.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:17:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:17:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: cluster 2026-03-10T10:17:47.899077+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in
2026-03-10T10:17:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: audit 2026-03-10T10:17:48.270248+0000 mgr.y (mgr.24422) 179 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:17:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: audit 2026-03-10T10:17:48.355339+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: audit 2026-03-10T10:17:48.394051+0000 mon.a (mon.0) 1477 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "30"}]: dispatch
2026-03-10T10:17:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: audit 2026-03-10T10:17:48.891092+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]': finished
2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: audit 2026-03-10T10:17:48.891154+0000 mon.a (mon.0) 1479 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "30"}]': finished
2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: cluster 2026-03-10T10:17:48.895694+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: audit 2026-03-10T10:17:48.905789+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: audit 2026-03-10T10:17:48.905986+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.104:0/724962059' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: audit 2026-03-10T10:17:48.906282+0000 mon.c (mon.2) 228 : audit [INF] from='client.? 192.168.123.104:0/2322374103' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: audit 2026-03-10T10:17:48.906473+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:48 vm04 bash[28289]: audit 2026-03-10T10:17:48.906684+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: cluster 2026-03-10T10:17:47.899077+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: cluster 2026-03-10T10:17:47.899077+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.270248+0000 mgr.y (mgr.24422) 179 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.270248+0000 mgr.y (mgr.24422) 179 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.355339+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.355339+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.394051+0000 mon.a (mon.0) 1477 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.394051+0000 mon.a (mon.0) 1477 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "30"}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.891092+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]': finished 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.891092+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]': finished 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.891154+0000 mon.a (mon.0) 1479 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.891154+0000 mon.a (mon.0) 1479 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "30"}]': finished 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: cluster 2026-03-10T10:17:48.895694+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: cluster 2026-03-10T10:17:48.895694+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.905789+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.905789+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-15","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.905986+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.104:0/724962059' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.905986+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.104:0/724962059' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.906282+0000 mon.c (mon.2) 228 : audit [INF] from='client.? 192.168.123.104:0/2322374103' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.906282+0000 mon.c (mon.2) 228 : audit [INF] from='client.? 
192.168.123.104:0/2322374103' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.906473+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.906473+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.906684+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:48 vm04 bash[20742]: audit 2026-03-10T10:17:48.906684+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:49.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: cluster 2026-03-10T10:17:47.899077+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: cluster 2026-03-10T10:17:47.899077+0000 mon.a (mon.0) 1476 : cluster [DBG] osdmap e152: 8 total, 8 up, 8 in 2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.270248+0000 mgr.y (mgr.24422) 179 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.270248+0000 mgr.y (mgr.24422) 179 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.355339+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.355339+0000 mon.c (mon.2) 226 : audit [DBG] from='client.? 
2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.394051+0000 mon.a (mon.0) 1477 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "30"}]: dispatch
2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.891092+0000 mon.a (mon.0) 1478 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForComplete_vm04-59252-28", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForComplete_vm04-59252-28"}]': finished
2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.891154+0000 mon.a (mon.0) 1479 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "30"}]': finished
2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: cluster 2026-03-10T10:17:48.895694+0000 mon.a (mon.0) 1480 : cluster [DBG] osdmap e153: 8 total, 8 up, 8 in
2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.905789+0000 mon.a (mon.0) 1481 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-15","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.905986+0000 mon.c (mon.2) 227 : audit [INF] from='client.? 192.168.123.104:0/724962059' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.906282+0000 mon.c (mon.2) 228 : audit [INF] from='client.? 192.168.123.104:0/2322374103' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.906473+0000 mon.a (mon.0) 1482 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:48 vm07 bash[23367]: audit 2026-03-10T10:17:48.906684+0000 mon.a (mon.0) 1483 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: cluster 2026-03-10T10:17:48.393377+0000 mgr.y (mgr.24422) 180 : cluster [DBG] pgmap v184: 299 pgs: 15 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 274 active+clean; 468 KiB data, 639 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 0 op/s 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: cluster 2026-03-10T10:17:48.393377+0000 mgr.y (mgr.24422) 180 : cluster [DBG] pgmap v184: 299 pgs: 15 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 274 active+clean; 468 KiB data, 639 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 0 op/s 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: audit 2026-03-10T10:17:49.356274+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: audit 2026-03-10T10:17:49.356274+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: audit 2026-03-10T10:17:49.895667+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: audit 2026-03-10T10:17:49.895667+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: audit 2026-03-10T10:17:49.895768+0000 mon.a (mon.0) 1485 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: audit 2026-03-10T10:17:49.895768+0000 mon.a (mon.0) 1485 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: audit 2026-03-10T10:17:49.895833+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: audit 2026-03-10T10:17:49.895833+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: cluster 2026-03-10T10:17:49.900260+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:50 vm04 bash[28289]: cluster 2026-03-10T10:17:49.900260+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: cluster 2026-03-10T10:17:48.393377+0000 mgr.y (mgr.24422) 180 : cluster [DBG] pgmap v184: 299 pgs: 15 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 274 active+clean; 468 KiB data, 639 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 0 op/s 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: cluster 2026-03-10T10:17:48.393377+0000 mgr.y (mgr.24422) 180 : cluster [DBG] pgmap v184: 299 pgs: 15 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 274 active+clean; 468 KiB data, 639 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 0 op/s 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: audit 2026-03-10T10:17:49.356274+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: audit 2026-03-10T10:17:49.356274+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: audit 2026-03-10T10:17:49.895667+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: audit 2026-03-10T10:17:49.895667+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: audit 2026-03-10T10:17:49.895768+0000 mon.a (mon.0) 1485 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: audit 2026-03-10T10:17:49.895768+0000 mon.a (mon.0) 1485 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: audit 2026-03-10T10:17:49.895833+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: audit 2026-03-10T10:17:49.895833+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: cluster 2026-03-10T10:17:49.900260+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T10:17:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:50 vm04 bash[20742]: cluster 2026-03-10T10:17:49.900260+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: cluster 2026-03-10T10:17:48.393377+0000 mgr.y (mgr.24422) 180 : cluster [DBG] pgmap v184: 299 pgs: 15 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 274 active+clean; 468 KiB data, 639 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 0 op/s 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: cluster 2026-03-10T10:17:48.393377+0000 mgr.y (mgr.24422) 180 : cluster [DBG] pgmap v184: 299 pgs: 15 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 274 active+clean; 468 KiB data, 639 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 0 op/s 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: audit 2026-03-10T10:17:49.356274+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: audit 2026-03-10T10:17:49.356274+0000 mon.c (mon.2) 229 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: audit 2026-03-10T10:17:49.895667+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: audit 2026-03-10T10:17:49.895667+0000 mon.a (mon.0) 1484 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-15","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: audit 2026-03-10T10:17:49.895768+0000 mon.a (mon.0) 1485 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: audit 2026-03-10T10:17:49.895768+0000 mon.a (mon.0) 1485 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiWritePP_vm04-59259-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: audit 2026-03-10T10:17:49.895833+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: audit 2026-03-10T10:17:49.895833+0000 mon.a (mon.0) 1486 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/1_vm04-59484-30","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: cluster 2026-03-10T10:17:49.900260+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T10:17:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:50 vm07 bash[23367]: cluster 2026-03-10T10:17:49.900260+0000 mon.a (mon.0) 1487 : cluster [DBG] osdmap e154: 8 total, 8 up, 8 in 2026-03-10T10:17:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:51 vm04 bash[28289]: audit 2026-03-10T10:17:50.357110+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:51 vm04 bash[28289]: audit 2026-03-10T10:17:50.357110+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:51 vm04 bash[28289]: cluster 2026-03-10T10:17:50.908701+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T10:17:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:51 vm04 bash[28289]: cluster 2026-03-10T10:17:50.908701+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T10:17:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:51 vm04 bash[28289]: audit 2026-03-10T10:17:50.913930+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:51 vm04 bash[28289]: audit 2026-03-10T10:17:50.913930+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:51 vm04 bash[28289]: audit 2026-03-10T10:17:50.924477+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:51 vm04 bash[28289]: audit 2026-03-10T10:17:50.924477+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:51 vm04 bash[28289]: audit 2026-03-10T10:17:50.944722+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:17:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:51 vm04 bash[28289]: audit 2026-03-10T10:17:50.944722+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:17:51.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:51 vm04 bash[20742]: audit 2026-03-10T10:17:50.357110+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:51.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:51 vm04 bash[20742]: audit 2026-03-10T10:17:50.357110+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:51.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:51 vm04 bash[20742]: cluster 2026-03-10T10:17:50.908701+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T10:17:51.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:51 vm04 bash[20742]: cluster 2026-03-10T10:17:50.908701+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T10:17:51.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:51 vm04 bash[20742]: audit 2026-03-10T10:17:50.913930+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:51 vm04 bash[20742]: audit 2026-03-10T10:17:50.913930+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:51 vm04 bash[20742]: audit 2026-03-10T10:17:50.924477+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:51 vm04 bash[20742]: audit 2026-03-10T10:17:50.924477+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:51 vm04 bash[20742]: audit 2026-03-10T10:17:50.944722+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:17:51.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:51 vm04 bash[20742]: audit 2026-03-10T10:17:50.944722+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:17:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:51 vm07 bash[23367]: audit 2026-03-10T10:17:50.357110+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:51 vm07 bash[23367]: audit 2026-03-10T10:17:50.357110+0000 mon.c (mon.2) 230 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:51 vm07 bash[23367]: cluster 2026-03-10T10:17:50.908701+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T10:17:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:51 vm07 bash[23367]: cluster 2026-03-10T10:17:50.908701+0000 mon.a (mon.0) 1488 : cluster [DBG] osdmap e155: 8 total, 8 up, 8 in 2026-03-10T10:17:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:51 vm07 bash[23367]: audit 2026-03-10T10:17:50.913930+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:51 vm07 bash[23367]: audit 2026-03-10T10:17:50.913930+0000 mon.c (mon.2) 231 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:51 vm07 bash[23367]: audit 2026-03-10T10:17:50.924477+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:51 vm07 bash[23367]: audit 2026-03-10T10:17:50.924477+0000 mon.a (mon.0) 1489 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]: dispatch 2026-03-10T10:17:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:51 vm07 bash[23367]: audit 2026-03-10T10:17:50.944722+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:17:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:51 vm07 bash[23367]: audit 2026-03-10T10:17:50.944722+0000 mon.a (mon.0) 1490 : audit [INF] from='client.? 
2026-03-10T10:17:52.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: cluster 2026-03-10T10:17:50.393800+0000 mgr.y (mgr.24422) 181 : cluster [DBG] pgmap v187: 403 pgs: 104 unknown, 16 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 274 active+clean; 492 KiB data, 638 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s wr, 1 op/s
2026-03-10T10:17:52.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: cluster 2026-03-10T10:17:51.202653+0000 mon.a (mon.0) 1491 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:51.358029+0000 mon.c (mon.2) 232 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.002687+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]': finished
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.002876+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.009243+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.104:0/1933807887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm04-59259-28","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: cluster 2026-03-10T10:17:52.009341+0000 mon.a (mon.0) 1494 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.009712+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.010350+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.104:0/2760442329' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm04-59484-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.010419+0000 mon.a (mon.0) 1495 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-15"}]: dispatch
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.011565+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.011687+0000 mon.a (mon.0) 1496 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm04-59259-28","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.012403+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.012496+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:52 vm04 bash[28289]: audit 2026-03-10T10:17:52.012827+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm04-59484-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm04-59484-33","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: cluster 2026-03-10T10:17:50.393800+0000 mgr.y (mgr.24422) 181 : cluster [DBG] pgmap v187: 403 pgs: 104 unknown, 16 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 274 active+clean; 492 KiB data, 638 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s wr, 1 op/s 2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: cluster 2026-03-10T10:17:50.393800+0000 mgr.y (mgr.24422) 181 : cluster [DBG] pgmap v187: 403 pgs: 104 unknown, 16 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 274 active+clean; 492 KiB data, 638 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s wr, 1 op/s 2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: cluster 2026-03-10T10:17:51.202653+0000 mon.a (mon.0) 1491 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: cluster 2026-03-10T10:17:51.202653+0000 mon.a (mon.0) 1491 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:51.358029+0000 mon.c (mon.2) 232 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:51.358029+0000 mon.c (mon.2) 232 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.002687+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]': finished 2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.002687+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]': finished 2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.002876+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.002876+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.009243+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.104:0/1933807887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm04-59259-28","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:52.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: cluster 2026-03-10T10:17:52.009341+0000 mon.a (mon.0) 1494 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-10T10:17:52.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.009712+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:52.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.010350+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.104:0/2760442329' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm04-59484-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:52.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.010419+0000 mon.a (mon.0) 1495 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-15"}]: dispatch
2026-03-10T10:17:52.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.011565+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:52.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.011687+0000 mon.a (mon.0) 1496 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm04-59259-28","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:52.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.012403+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:52.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.012496+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:52.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:52 vm04 bash[20742]: audit 2026-03-10T10:17:52.012827+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm04-59484-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:52.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: cluster 2026-03-10T10:17:50.393800+0000 mgr.y (mgr.24422) 181 : cluster [DBG] pgmap v187: 403 pgs: 104 unknown, 16 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 274 active+clean; 492 KiB data, 638 MiB used, 159 GiB / 160 GiB avail; 10 KiB/s wr, 1 op/s
2026-03-10T10:17:52.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: cluster 2026-03-10T10:17:51.202653+0000 mon.a (mon.0) 1491 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:51.358029+0000 mon.c (mon.2) 232 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.002687+0000 mon.a (mon.0) 1492 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForComplete_vm04-59252-28"}]': finished
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.002876+0000 mon.a (mon.0) 1493 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.009243+0000 mon.b (mon.1) 146 : audit [INF] from='client.? 192.168.123.104:0/1933807887' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm04-59259-28","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: cluster 2026-03-10T10:17:52.009341+0000 mon.a (mon.0) 1494 : cluster [DBG] osdmap e156: 8 total, 8 up, 8 in
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.009712+0000 mon.b (mon.1) 147 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.010350+0000 mon.b (mon.1) 148 : audit [INF] from='client.? 192.168.123.104:0/2760442329' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm04-59484-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.010419+0000 mon.a (mon.0) 1495 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-15"}]: dispatch
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.011565+0000 mon.c (mon.2) 233 : audit [INF] from='client.? 192.168.123.104:0/3238338202' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.011687+0000 mon.a (mon.0) 1496 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm04-59259-28","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.012403+0000 mon.a (mon.0) 1497 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]: dispatch
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.012496+0000 mon.a (mon.0) 1498 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:52 vm07 bash[23367]: audit 2026-03-10T10:17:52.012827+0000 mon.a (mon.0) 1499 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm04-59484-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:53.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:53 vm04 bash[28289]: audit 2026-03-10T10:17:52.358940+0000 mon.c (mon.2) 234 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:53.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:53 vm04 bash[20742]: audit 2026-03-10T10:17:52.358940+0000 mon.c (mon.2) 234 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:17:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:17:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:17:53.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:53 vm07 bash[23367]: audit 2026-03-10T10:17:52.358940+0000 mon.c (mon.2) 234 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsECPP.SnapGetNamePP
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsECPP.SnapGetNamePP (1645 ms)
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] 4 tests from LibRadosSnapshotsECPP (8702 ms total)
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp:
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.SnapPP
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.SnapPP (4047 ms)
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.RollbackPP
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.RollbackPP (4265 ms)
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ RUN ] LibRadosSnapshotsSelfManagedECPP.Bug11677
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ OK ] LibRadosSnapshotsSelfManagedECPP.Bug11677 (4105 ms)
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] 3 tests from LibRadosSnapshotsSelfManagedECPP (12417 ms total)
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp:
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [----------] Global test environment tear-down
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [==========] 21 tests from 5 test suites ran. (93835 ms total)
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ PASSED ] 20 tests.
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ SKIPPED ] 1 test, listed below:
2026-03-10T10:17:54.162 INFO:tasks.workunit.client.0.vm04.stdout: api_snapshots_pp: [ SKIPPED ] LibRadosSnapshotsSelfManagedPP.WriteRollback
2026-03-10T10:17:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: cluster 2026-03-10T10:17:52.394225+0000 mgr.y (mgr.24422) 182 : cluster [DBG] pgmap v190: 387 pgs: 96 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 460 KiB data, 638 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s
2026-03-10T10:17:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.136022+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-15"}]': finished
2026-03-10T10:17:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.136139+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm04-59259-28","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.136326+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]': finished
2026-03-10T10:17:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.136497+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished
2026-03-10T10:17:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.136622+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm04-59484-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: cluster 2026-03-10T10:17:53.149636+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.150702+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.156234+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-15", "mode": "writeback"}]: dispatch
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.156301+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.275386+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.276158+0000 mon.a (mon.0) 1509 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.276901+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm04-59252-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:53.359709+0000 mon.c (mon.2) 235 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: cluster 2026-03-10T10:17:54.136316+0000 mon.a (mon.0) 1511 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:54.140702+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-15", "mode": "writeback"}]': finished
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:54.140738+0000 mon.a (mon.0) 1513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:54.140867+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm04-59252-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: cluster 2026-03-10T10:17:54.144128+0000 mon.a (mon.0) 1515 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:54.146829+0000 mon.a (mon.0) 1516 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm04-59252-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:54.184294+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:54.184801+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:54 vm07 bash[23367]: audit 2026-03-10T10:17:54.185261+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: cluster 2026-03-10T10:17:52.394225+0000 mgr.y (mgr.24422) 182 : cluster [DBG] pgmap v190: 387 pgs: 96 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 460 KiB data, 638 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.136022+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-15"}]': finished
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.136139+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm04-59259-28","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.136326+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]': finished
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.136497+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.136622+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm04-59484-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: cluster 2026-03-10T10:17:53.149636+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.150702+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.156234+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-15", "mode": "writeback"}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.156301+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.275386+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.276158+0000 mon.a (mon.0) 1509 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.276901+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm04-59252-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:53.359709+0000 mon.c (mon.2) 235 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: cluster 2026-03-10T10:17:54.136316+0000 mon.a (mon.0) 1511 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:54.140702+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-15", "mode": "writeback"}]': finished
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:54.140738+0000 mon.a (mon.0) 1513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:54.140867+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm04-59252-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: cluster 2026-03-10T10:17:54.144128+0000 mon.a (mon.0) 1515 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:54.146829+0000 mon.a (mon.0) 1516 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm04-59252-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:54.184294+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:54.184801+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:54 vm04 bash[28289]: audit 2026-03-10T10:17:54.185261+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: cluster 2026-03-10T10:17:52.394225+0000 mgr.y (mgr.24422) 182 : cluster [DBG] pgmap v190: 387 pgs: 96 unknown, 11 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 271 active+clean; 460 KiB data, 638 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 0 op/s
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.136022+0000 mon.a (mon.0) 1500 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-15"}]': finished
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.136139+0000 mon.a (mon.0) 1501 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "AioUnlockPP_vm04-59259-28","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.136326+0000 mon.a (mon.0) 1502 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForComplete_vm04-59252-28"}]': finished
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.136497+0000 mon.a (mon.0) 1503 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.136622+0000 mon.a (mon.0) 1504 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "LibRadosChecksum/2_vm04-59484-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: cluster 2026-03-10T10:17:53.149636+0000 mon.a (mon.0) 1505 : cluster [DBG] osdmap e157: 8 total, 8 up, 8 in
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.150702+0000 mon.b (mon.1) 149 : audit [INF] from='client.? 192.168.123.104:0/3443098715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.156234+0000 mon.a (mon.0) 1506 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-15", "mode": "writeback"}]: dispatch
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.156301+0000 mon.a (mon.0) 1507 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]: dispatch
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.275386+0000 mon.a (mon.0) 1508 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.276158+0000 mon.a (mon.0) 1509 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.276901+0000 mon.a (mon.0) 1510 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm04-59252-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:53.359709+0000 mon.c (mon.2) 235 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: cluster 2026-03-10T10:17:54.136316+0000 mon.a (mon.0) 1511 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:54.140702+0000 mon.a (mon.0) 1512 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-15", "mode": "writeback"}]': finished
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:54.140738+0000 mon.a (mon.0) 1513 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosSnapshotsSelfManagedECPP_vm04-59541-21"}]': finished
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:54.140867+0000 mon.a (mon.0) 1514 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip_vm04-59252-29", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: cluster 2026-03-10T10:17:54.144128+0000 mon.a (mon.0) 1515 : cluster [DBG] osdmap e158: 8 total, 8 up, 8 in
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:54.146829+0000 mon.a (mon.0) 1516 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip_vm04-59252-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:54.184294+0000 mon.a (mon.0) 1517 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:54.184801+0000 mon.a (mon.0) 1518 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:54.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:54 vm04 bash[20742]: audit 2026-03-10T10:17:54.185261+0000 mon.a (mon.0) 1519 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:17:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:55 vm04 bash[28289]: audit 2026-03-10T10:17:54.360590+0000 mon.c (mon.2) 236 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:55 vm04 bash[28289]: audit 2026-03-10T10:17:54.395300+0000 mon.a (mon.0) 1520 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "30"}]: dispatch
2026-03-10T10:17:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:55 vm04 bash[20742]: audit 2026-03-10T10:17:54.360590+0000 mon.c (mon.2) 236 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:55 vm04 bash[20742]: audit 2026-03-10T10:17:54.395300+0000 mon.a (mon.0) 1520 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "30"}]: dispatch
2026-03-10T10:17:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:55 vm07 bash[23367]: audit 2026-03-10T10:17:54.360590+0000 mon.c (mon.2) 236 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:55 vm07 bash[23367]: audit 2026-03-10T10:17:54.395300+0000 mon.a (mon.0) 1520 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "30"}]: dispatch
2026-03-10T10:17:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: cluster 2026-03-10T10:17:54.394615+0000 mgr.y (mgr.24422) 183 : cluster [DBG] pgmap v193: 323 pgs: 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 303 active+clean; 460 KiB data, 646 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:17:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:55.323165+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:55.323275+0000 mon.a (mon.0) 1522 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "30"}]': finished
2026-03-10T10:17:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: cluster 2026-03-10T10:17:55.350977+0000 mon.a (mon.0) 1523 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in
2026-03-10T10:17:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:55.352341+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.104:0/704906562' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm04-59259-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:55.354738+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm04-59484-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:55.362741+0000 mon.c (mon.2) 237 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:55.377018+0000 mon.a (mon.0) 1525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm04-59259-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:55.390421+0000 mon.a (mon.0) 1526 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:17:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: cluster 2026-03-10T10:17:56.203349+0000 mon.a (mon.0) 1527 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:56.326256+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm04-59252-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm04-59252-29"}]': finished
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:56.326287+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm04-59259-29","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:56.326307+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: cluster 2026-03-10T10:17:56.333660+0000 mon.a (mon.0) 1531 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:56 vm04 bash[28289]: audit 2026-03-10T10:17:56.335421+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15"}]: dispatch
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: cluster 2026-03-10T10:17:54.394615+0000 mgr.y (mgr.24422) 183 : cluster [DBG] pgmap v193: 323 pgs: 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 303 active+clean; 460 KiB data, 646 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:55.323165+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:55.323275+0000 mon.a (mon.0) 1522 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "30"}]': finished
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: cluster 2026-03-10T10:17:55.350977+0000 mon.a (mon.0) 1523 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:55.352341+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.104:0/704906562' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm04-59259-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:55.354738+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm04-59484-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:55.362741+0000 mon.c (mon.2) 237 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:55.377018+0000 mon.a (mon.0) 1525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm04-59259-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:55.390421+0000 mon.a (mon.0) 1526 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: cluster 2026-03-10T10:17:56.203349+0000 mon.a (mon.0) 1527 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:56.326256+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm04-59252-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm04-59252-29"}]': finished
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:56.326287+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm04-59259-29","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:56.326307+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: cluster 2026-03-10T10:17:56.333660+0000 mon.a (mon.0) 1531 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in
2026-03-10T10:17:56.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:56 vm04 bash[20742]: audit 2026-03-10T10:17:56.335421+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15"}]: dispatch
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: cluster 2026-03-10T10:17:54.394615+0000 mgr.y (mgr.24422) 183 : cluster [DBG] pgmap v193: 323 pgs: 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 303 active+clean; 460 KiB data, 646 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:55.323165+0000 mon.a (mon.0) 1521 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:55.323275+0000 mon.a (mon.0) 1522 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "30"}]': finished
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: cluster 2026-03-10T10:17:55.350977+0000 mon.a (mon.0) 1523 : cluster [DBG] osdmap e159: 8 total, 8 up, 8 in
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:55.352341+0000 mon.b (mon.1) 150 : audit [INF] from='client.? 192.168.123.104:0/704906562' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm04-59259-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:55.354738+0000 mon.a (mon.0) 1524 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm04-59484-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:55.362741+0000 mon.c (mon.2) 237 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:55.377018+0000 mon.a (mon.0) 1525 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm04-59259-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:55.390421+0000 mon.a (mon.0) 1526 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: cluster 2026-03-10T10:17:56.203349+0000 mon.a (mon.0) 1527 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:56.326256+0000 mon.a (mon.0) 1528 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip_vm04-59252-29", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip_vm04-59252-29"}]': finished
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:56.326287+0000 mon.a (mon.0) 1529 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripAppendPP_vm04-59259-29","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:56.326307+0000 mon.a (mon.0) 1530 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: cluster 2026-03-10T10:17:56.333660+0000 mon.a (mon.0) 1531 : cluster [DBG] osdmap e160: 8 total, 8 up, 8 in
2026-03-10T10:17:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:56 vm07 bash[23367]: audit 2026-03-10T10:17:56.335421+0000 mon.a (mon.0) 1532 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15"}]: dispatch
2026-03-10T10:17:57.338 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: Running main() from gmock_main.cc
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [==========] Running 57 tests from 4 test suites.
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [----------] Global test environment set-up.
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.TooBigPP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.TooBigPP (3047 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolQuotaPP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolQuotaPP (15209 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleWritePP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleWritePP (5933 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.WaitForSafePP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.WaitForSafePP (3193 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP (3096 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP2
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP2 (2636 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripPP3
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripPP3 (2950 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripSparseReadPP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripSparseReadPP (3058 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsCompletePP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.IsCompletePP (3186 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.IsSafePP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.IsSafePP (3078 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.ReturnValuePP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.ReturnValuePP (2783 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushPP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushPP (3014 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.FlushAsyncPP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.FlushAsyncPP (3021 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP (2993 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteFullPP2
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteFullPP2 (3007 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP (3163 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripWriteSamePP2
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripWriteSamePP2 (2696 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPPNS
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPPNS (3034 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.SimpleStatPP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.SimpleStatPP (2990 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime (3088 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.OperateMtime2
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.OperateMtime2 (3166 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.StatRemovePP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.StatRemovePP (3008 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.ExecuteClassPP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.ExecuteClassPP (3236 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.OmapPP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.OmapPP (3163 ms)
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiWritePP
2026-03-10T10:17:57.339 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiWritePP (3025 ms)
2026-03-10T10:17:57.340 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.AioUnlockPP
2026-03-10T10:17:57.340 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.AioUnlockPP (3228 ms)
2026-03-10T10:17:57.340 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripAppendPP
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:57 vm04 bash[28289]: audit 2026-03-10T10:17:56.364895+0000 mon.c (mon.2) 238 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:57 vm04 bash[28289]: cluster 2026-03-10T10:17:56.395066+0000 mgr.y (mgr.24422) 184 : cluster [DBG] pgmap v196: 363 pgs: 40 unknown, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 303 active+clean; 460 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:57 vm04 bash[28289]: cluster 2026-03-10T10:17:57.326548+0000 mon.a (mon.0) 1533 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:57 vm04 bash[28289]: audit 2026-03-10T10:17:57.330265+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm04-59484-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm04-59484-36"}]': finished
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:57 vm04 bash[28289]: audit 2026-03-10T10:17:57.330398+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15"}]': finished
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:57 vm04 bash[28289]: cluster 2026-03-10T10:17:57.336455+0000 mon.a (mon.0) 1536 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:57 vm04 bash[20742]: audit 2026-03-10T10:17:56.364895+0000 mon.c (mon.2) 238 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:57 vm04 bash[20742]: cluster 2026-03-10T10:17:56.395066+0000 mgr.y (mgr.24422) 184 : cluster [DBG] pgmap v196: 363 pgs: 40 unknown, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 303 active+clean; 460 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:57 vm04 bash[20742]: cluster 2026-03-10T10:17:57.326548+0000 mon.a (mon.0) 1533 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:57 vm04 bash[20742]: audit 2026-03-10T10:17:57.330265+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm04-59484-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm04-59484-36"}]': finished
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:57 vm04 bash[20742]: audit 2026-03-10T10:17:57.330398+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15"}]': finished
2026-03-10T10:17:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:57 vm04 bash[20742]: cluster 2026-03-10T10:17:57.336455+0000 mon.a (mon.0) 1536 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in
2026-03-10T10:17:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:57 vm07 bash[23367]: audit 2026-03-10T10:17:56.364895+0000 mon.c (mon.2) 238 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:57 vm07 bash[23367]: cluster 2026-03-10T10:17:56.395066+0000 mgr.y (mgr.24422) 184 : cluster [DBG] pgmap v196: 363 pgs: 40 unknown, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 303 active+clean; 460 KiB data, 646 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:17:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:57 vm07 bash[23367]: cluster 2026-03-10T10:17:57.326548+0000 mon.a (mon.0) 1533 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:17:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:57 vm07 bash[23367]: audit 2026-03-10T10:17:57.330265+0000 mon.a (mon.0) 1534 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm04-59484-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm04-59484-36"}]': finished
192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosMiscECPP_vm04-59484-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosMiscECPP_vm04-59484-36"}]': finished 2026-03-10T10:17:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:57 vm07 bash[23367]: audit 2026-03-10T10:17:57.330398+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15"}]': finished 2026-03-10T10:17:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:57 vm07 bash[23367]: audit 2026-03-10T10:17:57.330398+0000 mon.a (mon.0) 1535 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-15"}]': finished 2026-03-10T10:17:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:57 vm07 bash[23367]: cluster 2026-03-10T10:17:57.336455+0000 mon.a (mon.0) 1536 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-10T10:17:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:57 vm07 bash[23367]: cluster 2026-03-10T10:17:57.336455+0000 mon.a (mon.0) 1536 : cluster [DBG] osdmap e161: 8 total, 8 up, 8 in 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:57.366814+0000 mon.c (mon.2) 239 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:57.366814+0000 mon.c (mon.2) 239 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:57.766224+0000 mon.a (mon.0) 1537 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:57.766224+0000 mon.a (mon.0) 1537 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:57.766955+0000 mon.a (mon.0) 1538 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:57.766955+0000 mon.a (mon.0) 1538 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:58.272257+0000 mgr.y (mgr.24422) 185 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:58.272257+0000 mgr.y (mgr.24422) 185 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: cluster 2026-03-10T10:17:58.351967+0000 mon.a (mon.0) 1539 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: cluster 2026-03-10T10:17:58.351967+0000 mon.a (mon.0) 1539 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:58.361473+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? 192.168.123.104:0/4164177938' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm04-59259-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:58.361473+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? 192.168.123.104:0/4164177938' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm04-59259-30","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:58.368503+0000 mon.c (mon.2) 240 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:58.368503+0000 mon.c (mon.2) 240 : audit [DBG] from='client.? 
2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:58 vm04 bash[28289]: audit 2026-03-10T10:17:58.377627+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:58 vm04 bash[20742]: audit 2026-03-10T10:17:57.366814+0000 mon.c (mon.2) 239 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:58 vm04 bash[20742]: audit 2026-03-10T10:17:57.766224+0000 mon.a (mon.0) 1537 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:17:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:58 vm04 bash[20742]: audit 2026-03-10T10:17:57.766955+0000 mon.a (mon.0) 1538 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:58.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:58 vm04 bash[20742]: audit 2026-03-10T10:17:58.272257+0000 mgr.y (mgr.24422) 185 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:17:58.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:58 vm04 bash[20742]: cluster 2026-03-10T10:17:58.351967+0000 mon.a (mon.0) 1539 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in
2026-03-10T10:17:58.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:58 vm04 bash[20742]: audit 2026-03-10T10:17:58.361473+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? 192.168.123.104:0/4164177938' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm04-59259-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:58.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:58 vm04 bash[20742]: audit 2026-03-10T10:17:58.368503+0000 mon.c (mon.2) 240 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:58.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:58 vm04 bash[20742]: audit 2026-03-10T10:17:58.377627+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:58.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:17:58 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:17:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:58 vm07 bash[23367]: audit 2026-03-10T10:17:57.366814+0000 mon.c (mon.2) 239 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:58 vm07 bash[23367]: audit 2026-03-10T10:17:57.766224+0000 mon.a (mon.0) 1537 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:17:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:58 vm07 bash[23367]: audit 2026-03-10T10:17:57.766955+0000 mon.a (mon.0) 1538 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:17:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:58 vm07 bash[23367]: audit 2026-03-10T10:17:58.272257+0000 mgr.y (mgr.24422) 185 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:17:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:58 vm07 bash[23367]: cluster 2026-03-10T10:17:58.351967+0000 mon.a (mon.0) 1539 : cluster [DBG] osdmap e162: 8 total, 8 up, 8 in
2026-03-10T10:17:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:58 vm07 bash[23367]: audit 2026-03-10T10:17:58.361473+0000 mon.a (mon.0) 1540 : audit [INF] from='client.? 192.168.123.104:0/4164177938' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm04-59259-30","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:58 vm07 bash[23367]: audit 2026-03-10T10:17:58.368503+0000 mon.c (mon.2) 240 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:58 vm07 bash[23367]: audit 2026-03-10T10:17:58.377627+0000 mon.a (mon.0) 1541 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:59 vm04 bash[28289]: cluster 2026-03-10T10:17:58.396017+0000 mgr.y (mgr.24422) 186 : cluster [DBG] pgmap v199: 330 pgs: 17 creating+peering, 2 creating+activating, 21 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 650 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:59 vm04 bash[28289]: audit 2026-03-10T10:17:59.339153+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? 192.168.123.104:0/4164177938' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm04-59259-30","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:59 vm04 bash[28289]: audit 2026-03-10T10:17:59.339199+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm04-59252-29"}]': finished
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:59 vm04 bash[28289]: cluster 2026-03-10T10:17:59.343343+0000 mon.a (mon.0) 1544 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:59 vm04 bash[28289]: audit 2026-03-10T10:17:59.344228+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:59 vm04 bash[28289]: audit 2026-03-10T10:17:59.350298+0000 mon.a (mon.0) 1546 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:59 vm04 bash[28289]: audit 2026-03-10T10:17:59.359200+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:17:59 vm04 bash[28289]: audit 2026-03-10T10:17:59.370303+0000 mon.c (mon.2) 241 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:59 vm04 bash[20742]: cluster 2026-03-10T10:17:58.396017+0000 mgr.y (mgr.24422) 186 : cluster [DBG] pgmap v199: 330 pgs: 17 creating+peering, 2 creating+activating, 21 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 650 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:59 vm04 bash[20742]: audit 2026-03-10T10:17:59.339153+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? 192.168.123.104:0/4164177938' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm04-59259-30","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:59 vm04 bash[20742]: audit 2026-03-10T10:17:59.339199+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm04-59252-29"}]': finished
2026-03-10T10:17:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:59 vm04 bash[20742]: cluster 2026-03-10T10:17:59.343343+0000 mon.a (mon.0) 1544 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in
2026-03-10T10:17:59.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:59 vm04 bash[20742]: audit 2026-03-10T10:17:59.344228+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:59.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:59 vm04 bash[20742]: audit 2026-03-10T10:17:59.350298+0000 mon.a (mon.0) 1546 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:59.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:59 vm04 bash[20742]: audit 2026-03-10T10:17:59.359200+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:59.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:17:59 vm04 bash[20742]: audit 2026-03-10T10:17:59.370303+0000 mon.c (mon.2) 241 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:17:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:59 vm07 bash[23367]: cluster 2026-03-10T10:17:58.396017+0000 mgr.y (mgr.24422) 186 : cluster [DBG] pgmap v199: 330 pgs: 17 creating+peering, 2 creating+activating, 21 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 650 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:17:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:59 vm07 bash[23367]: audit 2026-03-10T10:17:59.339153+0000 mon.a (mon.0) 1542 : audit [INF] from='client.? 192.168.123.104:0/4164177938' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RacingRemovePP_vm04-59259-30","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:17:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:59 vm07 bash[23367]: audit 2026-03-10T10:17:59.339199+0000 mon.a (mon.0) 1543 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip_vm04-59252-29"}]': finished
2026-03-10T10:17:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:59 vm07 bash[23367]: cluster 2026-03-10T10:17:59.343343+0000 mon.a (mon.0) 1544 : cluster [DBG] osdmap e163: 8 total, 8 up, 8 in
2026-03-10T10:17:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:59 vm07 bash[23367]: audit 2026-03-10T10:17:59.344228+0000 mon.a (mon.0) 1545 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm04-59252-29"}]: dispatch
2026-03-10T10:17:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:59 vm07 bash[23367]: audit 2026-03-10T10:17:59.350298+0000 mon.a (mon.0) 1546 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-17","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:17:59.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:59 vm07 bash[23367]: audit 2026-03-10T10:17:59.359200+0000 mon.a (mon.0) 1547 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:17:59.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:17:59 vm07 bash[23367]: audit 2026-03-10T10:17:59.370303+0000 mon.c (mon.2) 241 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:01.363 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAiocPP (70937 ms total)
2026-03-10T10:18:01.363 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp:
2026-03-10T10:18:01.363 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP
2026-03-10T10:18:01.363 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosTwoPoolsECPP.CopyFrom
2026-03-10T10:18:01.363 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosTwoPoolsECPP.CopyFrom (151 ms)
2026-03-10T10:18:01.363 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 1 test from LibRadosTwoPoolsECPP (151 ms total)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp:
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)0, Checksummer::xxhash32, ceph_le >
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Subset
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Subset (66 ms)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/0.Chunked
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosChecksum/0.Chunked (3 ms)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/0 (69 ms total)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp:
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)1, Checksummer::xxhash64, ceph_le >
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Subset
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Subset (80 ms)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/1.Chunked
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosChecksum/1.Chunked (24 ms)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/1 (104 ms total)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp:
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2, where TypeParam = LibRadosChecksumParams<(rados_checksum_type_t)2, Checksummer::crc32c, ceph_le >
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Subset
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Subset (152 ms)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosChecksum/2.Chunked
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosChecksum/2.Chunked (33 ms)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 2 tests from LibRadosChecksum/2 (185 ms total)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp:
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ RUN ] LibRadosMiscECPP.CompareExtentRange
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ OK ] LibRadosMiscECPP.CompareExtentRange (1044 ms)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] 1 test from LibRadosMiscECPP (1044 ms total)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp:
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [----------] Global test environment tear-down
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [==========] 31 tests from 7 test suites ran. (101072 ms total)
2026-03-10T10:18:01.364 INFO:tasks.workunit.client.0.vm04.stdout: api_misc_pp: [ PASSED ] 31 tests.
2026-03-10T10:18:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.349477+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm04-59252-29"}]': finished
2026-03-10T10:18:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.349610+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-17","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.349637+0000 mon.a (mon.0) 1550 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36"}]': finished
2026-03-10T10:18:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: cluster 2026-03-10T10:18:00.352951+0000 mon.a (mon.0) 1551 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in
2026-03-10T10:18:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.353528+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:18:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.380373+0000 mon.c (mon.2) 242 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: cluster 2026-03-10T10:18:00.396419+0000 mgr.y (mgr.24422) 187 : cluster [DBG] pgmap v202: 322 pgs: 32 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 654 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.401922+0000 mon.c (mon.2) 243 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.417663+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.426849+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.430813+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.433090+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.435782+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: audit 2026-03-10T10:18:00.438769+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:01 vm04 bash[28289]: cluster 2026-03-10T10:18:01.204026+0000 mon.a (mon.0) 1557 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.349477+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm04-59252-29"}]': finished
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.349610+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-17","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.349637+0000 mon.a (mon.0) 1550 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36"}]': finished
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: cluster 2026-03-10T10:18:00.352951+0000 mon.a (mon.0) 1551 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.353528+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.380373+0000 mon.c (mon.2) 242 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: cluster 2026-03-10T10:18:00.396419+0000 mgr.y (mgr.24422) 187 : cluster [DBG] pgmap v202: 322 pgs: 32 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 654 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.401922+0000 mon.c (mon.2) 243 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.417663+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.417663+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.426849+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.426849+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.430813+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.430813+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.433090+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.433090+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.435782+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.435782+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.438769+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: audit 2026-03-10T10:18:00.438769+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:01.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: cluster 2026-03-10T10:18:01.204026+0000 mon.a (mon.0) 1557 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:01.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:01 vm04 bash[20742]: cluster 2026-03-10T10:18:01.204026+0000 mon.a (mon.0) 1557 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.349477+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm04-59252-29"}]': finished 2026-03-10T10:18:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.349477+0000 mon.a (mon.0) 1548 : audit [INF] from='client.? 192.168.123.104:0/4021244836' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip_vm04-59252-29"}]': finished 2026-03-10T10:18:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.349610+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.349610+0000 mon.a (mon.0) 1549 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-17","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.349637+0000 mon.a (mon.0) 1550 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosMiscECPP_vm04-59484-36"}]': finished 2026-03-10T10:18:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.349637+0000 mon.a (mon.0) 1550 : audit [INF] from='client.? 
2026-03-10T10:18:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: cluster 2026-03-10T10:18:00.352951+0000 mon.a (mon.0) 1551 : cluster [DBG] osdmap e164: 8 total, 8 up, 8 in
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.353528+0000 mon.a (mon.0) 1552 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm04-59484-36"}]: dispatch
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.380373+0000 mon.c (mon.2) 242 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: cluster 2026-03-10T10:18:00.396419+0000 mgr.y (mgr.24422) 187 : cluster [DBG] pgmap v202: 322 pgs: 32 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 654 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.401922+0000 mon.c (mon.2) 243 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.417663+0000 mon.a (mon.0) 1553 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.426849+0000 mon.c (mon.2) 244 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.430813+0000 mon.a (mon.0) 1554 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.433090+0000 mon.c (mon.2) 245 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.435782+0000 mon.a (mon.0) 1555 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: audit 2026-03-10T10:18:00.438769+0000 mon.a (mon.0) 1556 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:01 vm07 bash[23367]: cluster 2026-03-10T10:18:01.204026+0000 mon.a (mon.0) 1557 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:18:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:01.353611+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm04-59484-36"}]': finished
2026-03-10T10:18:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:01.353644+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:01.353668+0000 mon.a (mon.0) 1560 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: cluster 2026-03-10T10:18:01.357365+0000 mon.a (mon.0) 1561 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in
2026-03-10T10:18:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:01.357989+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:01.358706+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.104:0/2071111516' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm04-59259-31","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:01.359481+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:01.359926+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-17"}]: dispatch
2026-03-10T10:18:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:01.369379+0000 mon.a (mon.0) 1564 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm04-59259-31","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:01.383192+0000 mon.c (mon.2) 248 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:02.357509+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-17"}]': finished
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:02.357582+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm04-59259-31","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: cluster 2026-03-10T10:18:02.366375+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:02 vm04 bash[28289]: audit 2026-03-10T10:18:02.367413+0000 mon.a (mon.0) 1568 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-17", "mode": "writeback"}]: dispatch
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:01.353611+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm04-59484-36"}]': finished
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:01.353644+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:01.353668+0000 mon.a (mon.0) 1560 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: cluster 2026-03-10T10:18:01.357365+0000 mon.a (mon.0) 1561 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:01.357989+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:01.358706+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.104:0/2071111516' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm04-59259-31","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:01.359481+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:01.359926+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-17"}]: dispatch
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:01.369379+0000 mon.a (mon.0) 1564 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm04-59259-31","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:01.383192+0000 mon.c (mon.2) 248 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:02.357509+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-17"}]': finished
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:02.357582+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm04-59259-31","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: cluster 2026-03-10T10:18:02.366375+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in
2026-03-10T10:18:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:02 vm04 bash[20742]: audit 2026-03-10T10:18:02.367413+0000 mon.a (mon.0) 1568 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-17", "mode": "writeback"}]: dispatch
2026-03-10T10:18:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:01.353611+0000 mon.a (mon.0) 1558 : audit [INF] from='client.? 192.168.123.104:0/2300628871' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosMiscECPP_vm04-59484-36"}]': finished
2026-03-10T10:18:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:01.353644+0000 mon.a (mon.0) 1559 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTrip2_vm04-59252-30", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:01.353668+0000 mon.a (mon.0) 1560 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: cluster 2026-03-10T10:18:01.357365+0000 mon.a (mon.0) 1561 : cluster [DBG] osdmap e165: 8 total, 8 up, 8 in
2026-03-10T10:18:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:01.357989+0000 mon.c (mon.2) 246 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:01.358706+0000 mon.c (mon.2) 247 : audit [INF] from='client.? 192.168.123.104:0/2071111516' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm04-59259-31","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:01.359481+0000 mon.a (mon.0) 1562 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:01.359926+0000 mon.a (mon.0) 1563 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-17"}]: dispatch
2026-03-10T10:18:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:01.369379+0000 mon.a (mon.0) 1564 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm04-59259-31","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:01.383192+0000 mon.c (mon.2) 248 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:02.357509+0000 mon.a (mon.0) 1565 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-17"}]': finished
2026-03-10T10:18:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:02.357582+0000 mon.a (mon.0) 1566 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP_vm04-59259-31","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: cluster 2026-03-10T10:18:02.366375+0000 mon.a (mon.0) 1567 : cluster [DBG] osdmap e166: 8 total, 8 up, 8 in
2026-03-10T10:18:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:02 vm07 bash[23367]: audit 2026-03-10T10:18:02.367413+0000 mon.a (mon.0) 1568 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-17", "mode": "writeback"}]: dispatch
2026-03-10T10:18:03.425 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:18:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:18:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: audit 2026-03-10T10:18:02.384321+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: cluster 2026-03-10T10:18:02.396771+0000 mgr.y (mgr.24422) 188 : cluster [DBG] pgmap v205: 354 pgs: 64 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 654 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: cluster 2026-03-10T10:18:03.357715+0000 mon.a (mon.0) 1569 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: audit 2026-03-10T10:18:03.360725+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]': finished 2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: audit 2026-03-10T10:18:03.360725+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]': finished 2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: audit 2026-03-10T10:18:03.360788+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-17", "mode": "writeback"}]': finished 2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: audit 2026-03-10T10:18:03.360788+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-17", "mode": "writeback"}]': finished 2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: cluster 2026-03-10T10:18:03.363684+0000 mon.a (mon.0) 1572 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: cluster 2026-03-10T10:18:03.363684+0000 mon.a (mon.0) 1572 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: audit 2026-03-10T10:18:03.391700+0000 mon.c (mon.2) 250 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:03 vm04 bash[28289]: audit 2026-03-10T10:18:03.391700+0000 mon.c (mon.2) 250 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: audit 2026-03-10T10:18:02.384321+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: audit 2026-03-10T10:18:02.384321+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: cluster 2026-03-10T10:18:02.396771+0000 mgr.y (mgr.24422) 188 : cluster [DBG] pgmap v205: 354 pgs: 64 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 654 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: cluster 2026-03-10T10:18:02.396771+0000 mgr.y (mgr.24422) 188 : cluster [DBG] pgmap v205: 354 pgs: 64 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 654 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: cluster 2026-03-10T10:18:03.357715+0000 mon.a (mon.0) 1569 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: cluster 2026-03-10T10:18:03.357715+0000 mon.a (mon.0) 1569 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: audit 2026-03-10T10:18:03.360725+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]': finished 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: audit 2026-03-10T10:18:03.360725+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]': finished 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: audit 2026-03-10T10:18:03.360788+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-17", "mode": "writeback"}]': finished 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: audit 2026-03-10T10:18:03.360788+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-17", "mode": "writeback"}]': finished 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: cluster 2026-03-10T10:18:03.363684+0000 mon.a (mon.0) 1572 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: cluster 2026-03-10T10:18:03.363684+0000 mon.a (mon.0) 1572 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: audit 2026-03-10T10:18:03.391700+0000 mon.c (mon.2) 250 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:03.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:03 vm04 bash[20742]: audit 2026-03-10T10:18:03.391700+0000 mon.c (mon.2) 250 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: audit 2026-03-10T10:18:02.384321+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: audit 2026-03-10T10:18:02.384321+0000 mon.c (mon.2) 249 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: cluster 2026-03-10T10:18:02.396771+0000 mgr.y (mgr.24422) 188 : cluster [DBG] pgmap v205: 354 pgs: 64 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 654 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: cluster 2026-03-10T10:18:02.396771+0000 mgr.y (mgr.24422) 188 : cluster [DBG] pgmap v205: 354 pgs: 64 unknown, 1 peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 269 active+clean; 460 KiB data, 654 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: cluster 2026-03-10T10:18:03.357715+0000 mon.a (mon.0) 1569 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: cluster 2026-03-10T10:18:03.357715+0000 mon.a (mon.0) 1569 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: audit 2026-03-10T10:18:03.360725+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]': finished 2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: audit 2026-03-10T10:18:03.360725+0000 mon.a (mon.0) 1570 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTrip2_vm04-59252-30", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTrip2_vm04-59252-30"}]': finished 2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: audit 2026-03-10T10:18:03.360788+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-17", "mode": "writeback"}]': finished 2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: audit 2026-03-10T10:18:03.360788+0000 mon.a (mon.0) 1571 : audit [INF] from='client.? 
2026-03-10T10:18:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: cluster 2026-03-10T10:18:03.363684+0000 mon.a (mon.0) 1572 : cluster [DBG] osdmap e167: 8 total, 8 up, 8 in
2026-03-10T10:18:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:03 vm07 bash[23367]: audit 2026-03-10T10:18:03.391700+0000 mon.c (mon.2) 250 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:04 vm04 bash[28289]: audit 2026-03-10T10:18:03.457615+0000 mon.a (mon.0) 1573 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:04 vm04 bash[28289]: audit 2026-03-10T10:18:04.364402+0000 mon.a (mon.0) 1574 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:04 vm04 bash[28289]: cluster 2026-03-10T10:18:04.369183+0000 mon.a (mon.0) 1575 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:04 vm04 bash[28289]: audit 2026-03-10T10:18:04.370213+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17"}]: dispatch
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:04 vm04 bash[28289]: audit 2026-03-10T10:18:04.371415+0000 mon.a (mon.0) 1577 : audit [INF] from='client.? 192.168.123.104:0/1996186191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm04-59259-32","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:04 vm04 bash[28289]: audit 2026-03-10T10:18:04.392384+0000 mon.c (mon.2) 251 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:04 vm04 bash[20742]: audit 2026-03-10T10:18:03.457615+0000 mon.a (mon.0) 1573 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:04 vm04 bash[20742]: audit 2026-03-10T10:18:04.364402+0000 mon.a (mon.0) 1574 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:04 vm04 bash[20742]: cluster 2026-03-10T10:18:04.369183+0000 mon.a (mon.0) 1575 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:04 vm04 bash[20742]: audit 2026-03-10T10:18:04.370213+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17"}]: dispatch
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:04 vm04 bash[20742]: audit 2026-03-10T10:18:04.371415+0000 mon.a (mon.0) 1577 : audit [INF] from='client.? 192.168.123.104:0/1996186191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm04-59259-32","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:04 vm04 bash[20742]: audit 2026-03-10T10:18:04.392384+0000 mon.c (mon.2) 251 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:04 vm07 bash[23367]: audit 2026-03-10T10:18:03.457615+0000 mon.a (mon.0) 1573 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:18:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:04 vm07 bash[23367]: audit 2026-03-10T10:18:04.364402+0000 mon.a (mon.0) 1574 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
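Entries 1573-1577 above are the mirror-image teardown of the cache tier set up earlier: the test drops the overlay and then detaches the cache pool from its base, and the "Health check cleared" message for CACHE_POOL_NO_HIT_SET that follows confirms the writeback tier is gone. As a CLI sketch, with the pool names from the log:

    ceph osd tier remove-overlay test-rados-api-vm04-59491-6
    ceph osd tier remove test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-17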
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:04 vm07 bash[23367]: cluster 2026-03-10T10:18:04.369183+0000 mon.a (mon.0) 1575 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-10T10:18:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:04 vm07 bash[23367]: cluster 2026-03-10T10:18:04.369183+0000 mon.a (mon.0) 1575 : cluster [DBG] osdmap e168: 8 total, 8 up, 8 in 2026-03-10T10:18:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:04 vm07 bash[23367]: audit 2026-03-10T10:18:04.370213+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17"}]: dispatch 2026-03-10T10:18:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:04 vm07 bash[23367]: audit 2026-03-10T10:18:04.370213+0000 mon.a (mon.0) 1576 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17"}]: dispatch 2026-03-10T10:18:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:04 vm07 bash[23367]: audit 2026-03-10T10:18:04.371415+0000 mon.a (mon.0) 1577 : audit [INF] from='client.? 192.168.123.104:0/1996186191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm04-59259-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:04 vm07 bash[23367]: audit 2026-03-10T10:18:04.371415+0000 mon.a (mon.0) 1577 : audit [INF] from='client.? 192.168.123.104:0/1996186191' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm04-59259-32","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:04 vm07 bash[23367]: audit 2026-03-10T10:18:04.392384+0000 mon.c (mon.2) 251 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:04 vm07 bash[23367]: audit 2026-03-10T10:18:04.392384+0000 mon.c (mon.2) 251 : audit [DBG] from='client.? 
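Note: the audit entries above trace the cache-tier teardown steps of the rados API test: an "osd tier remove-overlay" on the base pool, then an "osd tier remove" detaching the cache pool. A minimal CLI sketch of the same sequence, assuming client.admin access on a cluster node (pool names copied verbatim from the audit records):

    # drop the overlay first, then detach the cache tier from the base pool
    ceph osd tier remove-overlay test-rados-api-vm04-59491-6
    ceph osd tier remove test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-17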
2026-03-10T10:18:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:05 vm04 bash[28289]: cluster 2026-03-10T10:18:04.397181+0000 mgr.y (mgr.24422) 189 : cluster [DBG] pgmap v208: 362 pgs: 32 unknown, 8 creating+peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 302 active+clean; 459 KiB data, 655 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s
2026-03-10T10:18:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:05 vm04 bash[28289]: cluster 2026-03-10T10:18:05.364587+0000 mon.a (mon.0) 1578 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:18:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:05 vm04 bash[28289]: audit 2026-03-10T10:18:05.379960+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17"}]': finished
2026-03-10T10:18:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:05 vm04 bash[28289]: audit 2026-03-10T10:18:05.380518+0000 mon.a (mon.0) 1580 : audit [INF] from='client.? 192.168.123.104:0/1996186191' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm04-59259-32","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:05 vm04 bash[28289]: cluster 2026-03-10T10:18:05.391045+0000 mon.a (mon.0) 1581 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in
2026-03-10T10:18:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:05 vm04 bash[28289]: audit 2026-03-10T10:18:05.400317+0000 mon.c (mon.2) 252 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:05 vm04 bash[28289]: audit 2026-03-10T10:18:05.411675+0000 mon.c (mon.2) 253 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:05 vm04 bash[28289]: audit 2026-03-10T10:18:05.415864+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:05 vm04 bash[20742]: cluster 2026-03-10T10:18:04.397181+0000 mgr.y (mgr.24422) 189 : cluster [DBG] pgmap v208: 362 pgs: 32 unknown, 8 creating+peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 302 active+clean; 459 KiB data, 655 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s
2026-03-10T10:18:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:05 vm04 bash[20742]: cluster 2026-03-10T10:18:05.364587+0000 mon.a (mon.0) 1578 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:18:05.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:05 vm04 bash[20742]: audit 2026-03-10T10:18:05.379960+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17"}]': finished
2026-03-10T10:18:05.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:05 vm04 bash[20742]: audit 2026-03-10T10:18:05.380518+0000 mon.a (mon.0) 1580 : audit [INF] from='client.? 192.168.123.104:0/1996186191' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm04-59259-32","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:05.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:05 vm04 bash[20742]: cluster 2026-03-10T10:18:05.391045+0000 mon.a (mon.0) 1581 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in
2026-03-10T10:18:05.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:05 vm04 bash[20742]: audit 2026-03-10T10:18:05.400317+0000 mon.c (mon.2) 252 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:05.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:05 vm04 bash[20742]: audit 2026-03-10T10:18:05.411675+0000 mon.c (mon.2) 253 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:05.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:05 vm04 bash[20742]: audit 2026-03-10T10:18:05.415864+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
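Note: each "osd pool application enable" above is dispatched with "yes_i_really_mean_it": true, the JSON form of the CLI confirmation flag. A sketch of the equivalent command, assuming client.admin access:

    ceph osd pool application enable RoundTripCmpExtPP2_vm04-59259-32 rados --yes-i-really-mean-it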
2026-03-10T10:18:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:05 vm07 bash[23367]: cluster 2026-03-10T10:18:04.397181+0000 mgr.y (mgr.24422) 189 : cluster [DBG] pgmap v208: 362 pgs: 32 unknown, 8 creating+peering, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 302 active+clean; 459 KiB data, 655 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 3 op/s
2026-03-10T10:18:05.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:05 vm07 bash[23367]: cluster 2026-03-10T10:18:05.364587+0000 mon.a (mon.0) 1578 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:18:05.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:05 vm07 bash[23367]: audit 2026-03-10T10:18:05.379960+0000 mon.a (mon.0) 1579 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-17"}]': finished
2026-03-10T10:18:05.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:05 vm07 bash[23367]: audit 2026-03-10T10:18:05.380518+0000 mon.a (mon.0) 1580 : audit [INF] from='client.? 192.168.123.104:0/1996186191' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripCmpExtPP2_vm04-59259-32","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:05.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:05 vm07 bash[23367]: cluster 2026-03-10T10:18:05.391045+0000 mon.a (mon.0) 1581 : cluster [DBG] osdmap e169: 8 total, 8 up, 8 in
2026-03-10T10:18:05.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:05 vm07 bash[23367]: audit 2026-03-10T10:18:05.400317+0000 mon.c (mon.2) 252 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:05.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:05 vm07 bash[23367]: audit 2026-03-10T10:18:05.411675+0000 mon.c (mon.2) 253 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:05.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:05 vm07 bash[23367]: audit 2026-03-10T10:18:05.415864+0000 mon.a (mon.0) 1582 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:06 vm07 bash[23367]: audit 2026-03-10T10:18:06.408092+0000 mon.c (mon.2) 254 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:06 vm04 bash[28289]: audit 2026-03-10T10:18:06.408092+0000 mon.c (mon.2) 254 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:06.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:06 vm04 bash[20742]: audit 2026-03-10T10:18:06.408092+0000 mon.c (mon.2) 254 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
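Note: the mon.c entries with climbing sequence numbers (251-254) and cmd=[{"prefix":"status","format":"json"}] are a roughly once-per-second machine-readable status poll, presumably the test harness watching cluster state. The equivalent CLI query would be:

    ceph status --format json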
2026-03-10T10:18:07.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:07 vm07 bash[23367]: cluster 2026-03-10T10:18:06.397532+0000 mgr.y (mgr.24422) 190 : cluster [DBG] pgmap v210: 354 pgs: 32 unknown, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 302 active+clean; 459 KiB data, 655 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 760 B/s wr, 3 op/s
2026-03-10T10:18:07.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:07 vm07 bash[23367]: cluster 2026-03-10T10:18:06.438653+0000 mon.a (mon.0) 1583 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:18:07.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:07 vm07 bash[23367]: audit 2026-03-10T10:18:06.471204+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]': finished
2026-03-10T10:18:07.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:07 vm07 bash[23367]: audit 2026-03-10T10:18:06.479519+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:07.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:07 vm07 bash[23367]: cluster 2026-03-10T10:18:06.480665+0000 mon.a (mon.0) 1585 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in
2026-03-10T10:18:07.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:07 vm07 bash[23367]: audit 2026-03-10T10:18:06.481962+0000 mon.a (mon.0) 1586 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:07.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:07 vm07 bash[23367]: audit 2026-03-10T10:18:07.408972+0000 mon.c (mon.2) 256 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:07.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:07 vm07 bash[23367]: audit 2026-03-10T10:18:07.475721+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]': finished
2026-03-10T10:18:07.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:07 vm07 bash[23367]: audit 2026-03-10T10:18:07.486797+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.104:0/161708398' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm04-59259-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
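Note: audit entries 1584-1587 above are per-test cleanup: the erasure-code profile is removed, then the CRUSH rule created under the same name. A CLI sketch, names copied verbatim from the audit records:

    ceph osd erasure-code-profile rm testprofile-RoundTrip2_vm04-59252-30
    ceph osd crush rule rm RoundTrip2_vm04-59252-30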
2026-03-10T10:18:07.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:07 vm07 bash[23367]: cluster 2026-03-10T10:18:07.487020+0000 mon.a (mon.0) 1588 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in
2026-03-10T10:18:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:07 vm04 bash[28289]: cluster 2026-03-10T10:18:06.397532+0000 mgr.y (mgr.24422) 190 : cluster [DBG] pgmap v210: 354 pgs: 32 unknown, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 302 active+clean; 459 KiB data, 655 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 760 B/s wr, 3 op/s
2026-03-10T10:18:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:07 vm04 bash[28289]: cluster 2026-03-10T10:18:06.438653+0000 mon.a (mon.0) 1583 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:18:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:07 vm04 bash[28289]: audit 2026-03-10T10:18:06.471204+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]': finished
2026-03-10T10:18:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:07 vm04 bash[28289]: audit 2026-03-10T10:18:06.479519+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:07 vm04 bash[28289]: cluster 2026-03-10T10:18:06.480665+0000 mon.a (mon.0) 1585 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in
2026-03-10T10:18:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:07 vm04 bash[28289]: audit 2026-03-10T10:18:06.481962+0000 mon.a (mon.0) 1586 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:07 vm04 bash[28289]: audit 2026-03-10T10:18:07.408972+0000 mon.c (mon.2) 256 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:07 vm04 bash[28289]: audit 2026-03-10T10:18:07.475721+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]': finished
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:07 vm04 bash[28289]: audit 2026-03-10T10:18:07.486797+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.104:0/161708398' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm04-59259-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
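Note: the [WRN] POOL_APP_NOT_ENABLED update above is expected churn here: the tests create pools faster than they tag them with an application. To see which pools sit behind the warning at any moment, one would run:

    ceph health detail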
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:07 vm04 bash[28289]: cluster 2026-03-10T10:18:07.487020+0000 mon.a (mon.0) 1588 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:07 vm04 bash[20742]: cluster 2026-03-10T10:18:06.397532+0000 mgr.y (mgr.24422) 190 : cluster [DBG] pgmap v210: 354 pgs: 32 unknown, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 302 active+clean; 459 KiB data, 655 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 760 B/s wr, 3 op/s
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:07 vm04 bash[20742]: cluster 2026-03-10T10:18:06.438653+0000 mon.a (mon.0) 1583 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:07 vm04 bash[20742]: audit 2026-03-10T10:18:06.471204+0000 mon.a (mon.0) 1584 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTrip2_vm04-59252-30"}]': finished
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:07 vm04 bash[20742]: audit 2026-03-10T10:18:06.479519+0000 mon.c (mon.2) 255 : audit [INF] from='client.? 192.168.123.104:0/2153582505' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:07 vm04 bash[20742]: cluster 2026-03-10T10:18:06.480665+0000 mon.a (mon.0) 1585 : cluster [DBG] osdmap e170: 8 total, 8 up, 8 in
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:07 vm04 bash[20742]: audit 2026-03-10T10:18:06.481962+0000 mon.a (mon.0) 1586 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]: dispatch
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:07 vm04 bash[20742]: audit 2026-03-10T10:18:07.408972+0000 mon.c (mon.2) 256 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:07 vm04 bash[20742]: audit 2026-03-10T10:18:07.475721+0000 mon.a (mon.0) 1587 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTrip2_vm04-59252-30"}]': finished
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:07 vm04 bash[20742]: audit 2026-03-10T10:18:07.486797+0000 mon.b (mon.1) 151 : audit [INF] from='client.? 192.168.123.104:0/161708398' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm04-59259-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
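Note: every pool, tier, or CRUSH change above commits a new osdmap (e168 through e171 in roughly three seconds). One way to read the current epoch, assuming jq is available on the host:

    ceph osd dump --format json | jq .epoch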
2026-03-10T10:18:07.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:07 vm04 bash[20742]: cluster 2026-03-10T10:18:07.487020+0000 mon.a (mon.0) 1588 : cluster [DBG] osdmap e171: 8 total, 8 up, 8 in
2026-03-10T10:18:08.703 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:18:08 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:18:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:08 vm04 bash[28289]: audit 2026-03-10T10:18:07.495458+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm04-59259-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:08 vm04 bash[28289]: audit 2026-03-10T10:18:07.496140+0000 mon.a (mon.0) 1590 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:08 vm04 bash[28289]: audit 2026-03-10T10:18:07.531604+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:08 vm04 bash[28289]: audit 2026-03-10T10:18:07.532220+0000 mon.a (mon.0) 1591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:08 vm04 bash[28289]: audit 2026-03-10T10:18:07.532622+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:08 vm04 bash[28289]: audit 2026-03-10T10:18:07.532843+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:08 vm04 bash[28289]: audit 2026-03-10T10:18:07.533145+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm04-59252-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:08 vm04 bash[28289]: audit 2026-03-10T10:18:07.533323+0000 mon.a (mon.0) 1593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm04-59252-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
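Note: the "osd erasure-code-profile set" dispatches above recreate the test profile with k=2 data chunks, m=1 coding chunk, and an OSD-level failure domain. The CLI equivalent:

    ceph osd erasure-code-profile set testprofile-RoundTripAppend_vm04-59252-31 k=2 m=1 crush-failure-domain=osd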
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:08 vm04 bash[28289]: audit 2026-03-10T10:18:08.283280+0000 mgr.y (mgr.24422) 191 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:08 vm04 bash[28289]: audit 2026-03-10T10:18:08.409741+0000 mon.c (mon.2) 260 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:08 vm04 bash[20742]: audit 2026-03-10T10:18:07.495458+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm04-59259-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:08 vm04 bash[20742]: audit 2026-03-10T10:18:07.496140+0000 mon.a (mon.0) 1590 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:08 vm04 bash[20742]: audit 2026-03-10T10:18:07.531604+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:08 vm04 bash[20742]: audit 2026-03-10T10:18:07.532220+0000 mon.a (mon.0) 1591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:08 vm04 bash[20742]: audit 2026-03-10T10:18:07.532622+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:08 vm04 bash[20742]: audit 2026-03-10T10:18:07.532843+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:08 vm04 bash[20742]: audit 2026-03-10T10:18:07.533145+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm04-59252-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:08 vm04 bash[20742]: audit 2026-03-10T10:18:07.533323+0000 mon.a (mon.0) 1593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm04-59252-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
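Note: the mgr.y audit entry from entity client.iscsi.iscsi.a above shows the iSCSI gateway querying the mgr service map, presumably related to the earlier "no tcmu-runner data" message. What appears to be the CLI equivalent of that query:

    ceph service status --format json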
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:08 vm04 bash[20742]: audit 2026-03-10T10:18:08.283280+0000 mgr.y (mgr.24422) 191 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:08.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:08 vm04 bash[20742]: audit 2026-03-10T10:18:08.409741+0000 mon.c (mon.2) 260 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:08 vm07 bash[23367]: audit 2026-03-10T10:18:07.495458+0000 mon.a (mon.0) 1589 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm04-59259-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:09.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:08 vm07 bash[23367]: audit 2026-03-10T10:18:07.496140+0000 mon.a (mon.0) 1590 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-19","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:09.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:08 vm07 bash[23367]: audit 2026-03-10T10:18:07.531604+0000 mon.c (mon.2) 257 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:09.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:08 vm07 bash[23367]: audit 2026-03-10T10:18:07.532220+0000 mon.a (mon.0) 1591 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:09.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:08 vm07 bash[23367]: audit 2026-03-10T10:18:07.532622+0000 mon.c (mon.2) 258 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:09.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:08 vm07 bash[23367]: audit 2026-03-10T10:18:07.532843+0000 mon.a (mon.0) 1592 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:09.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:08 vm07 bash[23367]: audit 2026-03-10T10:18:07.533145+0000 mon.c (mon.2) 259 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm04-59252-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:09.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:08 vm07 bash[23367]: audit 2026-03-10T10:18:07.533323+0000 mon.a (mon.0) 1593 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm04-59252-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:09.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:08 vm07 bash[23367]: audit 2026-03-10T10:18:08.283280+0000 mgr.y (mgr.24422) 191 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:09.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:08 vm07 bash[23367]: audit 2026-03-10T10:18:08.409741+0000 mon.c (mon.2) 260 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: cluster 2026-03-10T10:18:08.398049+0000 mgr.y (mgr.24422) 192 : cluster [DBG] pgmap v213: 354 pgs: 28 creating+peering, 36 unknown, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 655 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:08.665174+0000 mon.a (mon.0) 1594 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm04-59259-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:08.665394+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:08.665515+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm04-59252-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: cluster 2026-03-10T10:18:08.675133+0000 mon.a (mon.0) 1597 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:08.687993+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm04-59252-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:08.692259+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm04-59252-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:08.693181+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.168320+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.104:0/161708398' entity='client.admin' cmd=[{
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.168374+0000 mon.b (mon.1) 153 : audit [INF] "prefix": "osd pool set",
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.168415+0000 mon.b (mon.1) 154 : audit [INF] "pool": "PoolEIOFlag_vm04-59259-33",
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.168465+0000 mon.b (mon.1) 155 : audit [INF] "var": "eio",
2026-03-10T10:18:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.168518+0000 mon.b (mon.1) 156 : audit [INF] "val": "true"
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.168560+0000 mon.b (mon.1) 157 : audit [INF] }]: dispatch
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.171254+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.171288+0000 mon.a (mon.0) 1601 : audit [INF] "prefix": "osd pool set",
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.171301+0000 mon.a (mon.0) 1602 : audit [INF] "pool": "PoolEIOFlag_vm04-59259-33",
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.171313+0000 mon.a (mon.0) 1603 : audit [INF] "var": "eio",
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.171322+0000 mon.a (mon.0) 1604 : audit [INF] "val": "true"
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.171334+0000 mon.a (mon.0) 1605 : audit [INF] }]: dispatch
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.410498+0000 mon.c (mon.2) 262 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.669508+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.669855+0000 mon.a (mon.0) 1607 : audit [INF] from='client.? ' entity='client.admin' cmd='[{
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.669904+0000 mon.a (mon.0) 1608 : audit [INF] "prefix": "osd pool set",
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.669951+0000 mon.a (mon.0) 1609 : audit [INF] "pool": "PoolEIOFlag_vm04-59259-33",
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.669998+0000 mon.a (mon.0) 1610 : audit [INF] "var": "eio",
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.670045+0000 mon.a (mon.0) 1611 : audit [INF] "val": "true"
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.670093+0000 mon.a (mon.0) 1612 : audit [INF] }]': finished
2026-03-10T10:18:10.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: cluster 2026-03-10T10:18:09.682124+0000 mon.a (mon.0) 1613 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in
2026-03-10T10:18:10.019 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:09 vm07 bash[23367]: audit 2026-03-10T10:18:09.683113+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-19"}]: dispatch
2026-03-10T10:18:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: cluster 2026-03-10T10:18:08.398049+0000 mgr.y (mgr.24422) 192 : cluster [DBG] pgmap v213: 354 pgs: 28 creating+peering, 36 unknown, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 655 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:18:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:08.665174+0000 mon.a (mon.0) 1594 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm04-59259-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:08.665394+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:08.665515+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm04-59252-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: cluster 2026-03-10T10:18:08.675133+0000 mon.a (mon.0) 1597 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in
2026-03-10T10:18:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:08.687993+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm04-59252-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:08.692259+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm04-59252-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:08.693181+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.168320+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.104:0/161708398' entity='client.admin' cmd=[{
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.168374+0000 mon.b (mon.1) 153 : audit [INF] "prefix": "osd pool set",
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.168415+0000 mon.b (mon.1) 154 : audit [INF] "pool": "PoolEIOFlag_vm04-59259-33",
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.168465+0000 mon.b (mon.1) 155 : audit [INF] "var": "eio",
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.168518+0000 mon.b (mon.1) 156 : audit [INF] "val": "true"
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.168560+0000 mon.b (mon.1) 157 : audit [INF] }]: dispatch
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.171254+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.171288+0000 mon.a (mon.0) 1601 : audit [INF] "prefix": "osd pool set",
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.171301+0000 mon.a (mon.0) 1602 : audit [INF] "pool": "PoolEIOFlag_vm04-59259-33",
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.171313+0000 mon.a (mon.0) 1603 : audit [INF] "var": "eio",
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.171322+0000 mon.a (mon.0) 1604 : audit [INF] "val": "true"
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.171334+0000 mon.a (mon.0) 1605 : audit [INF] }]: dispatch
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.410498+0000 mon.c (mon.2) 262 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.669508+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.669855+0000 mon.a (mon.0) 1607 : audit [INF] from='client.? ' entity='client.admin' cmd='[{
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.669904+0000 mon.a (mon.0) 1608 : audit [INF] "prefix": "osd pool set",
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.669951+0000 mon.a (mon.0) 1609 : audit [INF] "pool": "PoolEIOFlag_vm04-59259-33",
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.669998+0000 mon.a (mon.0) 1610 : audit [INF] "var": "eio",
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.670045+0000 mon.a (mon.0) 1611 : audit [INF] "val": "true"
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.670093+0000 mon.a (mon.0) 1612 : audit [INF] }]': finished
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: cluster 2026-03-10T10:18:09.682124+0000 mon.a (mon.0) 1613 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:09 vm04 bash[28289]: audit 2026-03-10T10:18:09.683113+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-19"}]: dispatch
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: cluster 2026-03-10T10:18:08.398049+0000 mgr.y (mgr.24422) 192 : cluster [DBG] pgmap v213: 354 pgs: 28 creating+peering, 36 unknown, 10 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 270 active+clean; 459 KiB data, 655 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:18:10.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:08.665174+0000 mon.a (mon.0) 1594 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "PoolEIOFlag_vm04-59259-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:08.665394+0000 mon.a (mon.0) 1595 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-19","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:08.665515+0000 mon.a (mon.0) 1596 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppend_vm04-59252-31", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: cluster 2026-03-10T10:18:08.675133+0000 mon.a (mon.0) 1597 : cluster [DBG] osdmap e172: 8 total, 8 up, 8 in
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:08.687993+0000 mon.c (mon.2) 261 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm04-59252-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:08.692259+0000 mon.a (mon.0) 1598 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm04-59252-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:08.693181+0000 mon.a (mon.0) 1599 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.168320+0000 mon.b (mon.1) 152 : audit [INF] from='client.? 192.168.123.104:0/161708398' entity='client.admin' cmd=[{
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.168374+0000 mon.b (mon.1) 153 : audit [INF] "prefix": "osd pool set",
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.168415+0000 mon.b (mon.1) 154 : audit [INF] "pool": "PoolEIOFlag_vm04-59259-33",
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.168465+0000 mon.b (mon.1) 155 : audit [INF] "var": "eio",
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.168518+0000 mon.b (mon.1) 156 : audit [INF] "val": "true"
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.168560+0000 mon.b (mon.1) 157 : audit [INF] }]: dispatch
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.171254+0000 mon.a (mon.0) 1600 : audit [INF] from='client.? ' entity='client.admin' cmd=[{
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.171288+0000 mon.a (mon.0) 1601 : audit [INF] "prefix": "osd pool set",
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.171301+0000 mon.a (mon.0) 1602 : audit [INF] "pool": "PoolEIOFlag_vm04-59259-33",
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.171313+0000 mon.a (mon.0) 1603 : audit [INF] "var": "eio",
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.171322+0000 mon.a (mon.0) 1604 : audit [INF] "val": "true"
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.171334+0000 mon.a (mon.0) 1605 : audit [INF] }]: dispatch
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.410498+0000 mon.c (mon.2) 262 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.669508+0000 mon.a (mon.0) 1606 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.669855+0000 mon.a (mon.0) 1607 : audit [INF] from='client.? ' entity='client.admin' cmd='[{
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.669904+0000 mon.a (mon.0) 1608 : audit [INF] "prefix": "osd pool set",
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.669951+0000 mon.a (mon.0) 1609 : audit [INF] "pool": "PoolEIOFlag_vm04-59259-33",
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.669998+0000 mon.a (mon.0) 1610 : audit [INF] "var": "eio",
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.670045+0000 mon.a (mon.0) 1611 : audit [INF] "val": "true"
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.670093+0000 mon.a (mon.0) 1612 : audit [INF] }]': finished
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: cluster 2026-03-10T10:18:09.682124+0000 mon.a (mon.0) 1613 : cluster [DBG] osdmap e173: 8 total, 8 up, 8 in
2026-03-10T10:18:10.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:09 vm04 bash[20742]: audit 2026-03-10T10:18:09.683113+0000 mon.a (mon.0) 1614 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-19"}]: dispatch
2026-03-10T10:18:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:10 vm07 bash[23367]: audit 2026-03-10T10:18:10.411401+0000 mon.c (mon.2) 263 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:10 vm07 bash[23367]: audit 2026-03-10T10:18:10.673286+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm04-59252-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm04-59252-31"}]': finished
2026-03-10T10:18:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:10 vm07 bash[23367]: audit 2026-03-10T10:18:10.673414+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-19"}]': finished
2026-03-10T10:18:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:10 vm07 bash[23367]: cluster 2026-03-10T10:18:10.680868+0000 mon.a (mon.0) 1617 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in
2026-03-10T10:18:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:10 vm07 bash[23367]: audit 2026-03-10T10:18:10.681952+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-19", "mode": "writeback"}]: dispatch
2026-03-10T10:18:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:10 vm04 bash[28289]: audit 2026-03-10T10:18:10.411401+0000 mon.c (mon.2) 263 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:10 vm04 bash[28289]: audit 2026-03-10T10:18:10.673286+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm04-59252-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm04-59252-31"}]': finished
2026-03-10T10:18:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:10 vm04 bash[28289]: audit 2026-03-10T10:18:10.673414+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-19"}]': finished
2026-03-10T10:18:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:10 vm04 bash[28289]: cluster 2026-03-10T10:18:10.680868+0000 mon.a (mon.0) 1617 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in
2026-03-10T10:18:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:10 vm04 bash[28289]: audit 2026-03-10T10:18:10.681952+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-19", "mode": "writeback"}]: dispatch
2026-03-10T10:18:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:10 vm04 bash[20742]: audit 2026-03-10T10:18:10.411401+0000 mon.c (mon.2) 263 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:11.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:10 vm04 bash[20742]: audit 2026-03-10T10:18:10.673286+0000 mon.a (mon.0) 1615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppend_vm04-59252-31", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppend_vm04-59252-31"}]': finished
2026-03-10T10:18:11.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:10 vm04 bash[20742]: audit 2026-03-10T10:18:10.673414+0000 mon.a (mon.0) 1616 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-19"}]': finished
2026-03-10T10:18:11.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:10 vm04 bash[20742]: cluster 2026-03-10T10:18:10.680868+0000 mon.a (mon.0) 1617 : cluster [DBG] osdmap e174: 8 total, 8 up, 8 in
2026-03-10T10:18:11.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:10 vm04 bash[20742]: audit 2026-03-10T10:18:10.681952+0000 mon.a (mon.0) 1618 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-19", "mode": "writeback"}]: dispatch
2026-03-10T10:18:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:11 vm04 bash[28289]: cluster 2026-03-10T10:18:10.398427+0000 mgr.y (mgr.24422) 193 : cluster [DBG] pgmap v216: 354 pgs: 6 creating+activating, 57 creating+peering, 10 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 272 active+clean; 459 KiB data, 656 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:18:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:11 vm04 bash[28289]: audit 2026-03-10T10:18:11.412380+0000 mon.c (mon.2) 264 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:11 vm04 bash[28289]: cluster 2026-03-10T10:18:11.673665+0000 mon.a (mon.0) 1619 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:18:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:11 vm04 bash[28289]: audit 2026-03-10T10:18:11.677747+0000 mon.a (mon.0) 1620 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-19", "mode": "writeback"}]': finished
2026-03-10T10:18:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:11 vm04 bash[28289]: cluster 2026-03-10T10:18:11.683057+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in
2026-03-10T10:18:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:11 vm04 bash[28289]: audit 2026-03-10T10:18:11.685013+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.104:0/2458084341' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:11 vm04 bash[28289]: audit 2026-03-10T10:18:11.687153+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:11 vm04 bash[20742]: cluster 2026-03-10T10:18:10.398427+0000 mgr.y (mgr.24422) 193 : cluster [DBG] pgmap v216: 354 pgs: 6 creating+activating, 57 creating+peering, 10 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 272 active+clean; 459 KiB data, 656 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:18:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:11 vm04 bash[20742]: audit 2026-03-10T10:18:11.412380+0000 mon.c (mon.2) 264 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:11 vm04 bash[20742]: cluster 2026-03-10T10:18:11.673665+0000 mon.a (mon.0) 1619 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:18:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:11 vm04 bash[20742]: audit 2026-03-10T10:18:11.677747+0000 mon.a (mon.0) 1620 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-19", "mode": "writeback"}]': finished
2026-03-10T10:18:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:11 vm04 bash[20742]: cluster 2026-03-10T10:18:11.683057+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in
2026-03-10T10:18:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:11 vm04 bash[20742]: audit 2026-03-10T10:18:11.685013+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.104:0/2458084341' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:12.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:11 vm04 bash[20742]: audit 2026-03-10T10:18:11.687153+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: cluster 2026-03-10T10:18:10.398427+0000 mgr.y (mgr.24422) 193 : cluster [DBG] pgmap v216: 354 pgs: 6 creating+activating, 57 creating+peering, 10 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 272 active+clean; 459 KiB data, 656 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:18:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: audit 2026-03-10T10:18:11.412380+0000 mon.c (mon.2) 264 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: cluster 2026-03-10T10:18:11.673665+0000 mon.a (mon.0) 1619 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:18:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: audit 2026-03-10T10:18:11.677747+0000 mon.a (mon.0) 1620 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-19", "mode": "writeback"}]': finished 2026-03-10T10:18:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: audit 2026-03-10T10:18:11.677747+0000 mon.a (mon.0) 1620 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-19", "mode": "writeback"}]': finished 2026-03-10T10:18:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: cluster 2026-03-10T10:18:11.683057+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T10:18:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: cluster 2026-03-10T10:18:11.683057+0000 mon.a (mon.0) 1621 : cluster [DBG] osdmap e175: 8 total, 8 up, 8 in 2026-03-10T10:18:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: audit 2026-03-10T10:18:11.685013+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.104:0/2458084341' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: audit 2026-03-10T10:18:11.685013+0000 mon.c (mon.2) 265 : audit [INF] from='client.? 192.168.123.104:0/2458084341' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: audit 2026-03-10T10:18:11.687153+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:11 vm07 bash[23367]: audit 2026-03-10T10:18:11.687153+0000 mon.a (mon.0) 1622 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:11.920214+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:11.920214+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.413291+0000 mon.c (mon.2) 266 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.413291+0000 mon.c (mon.2) 266 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.741774+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.741774+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.741935+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.741935+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: cluster 2026-03-10T10:18:12.749717+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: cluster 2026-03-10T10:18:12.749717+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.751038+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]: dispatch 2026-03-10T10:18:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.751038+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.771048+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.771048+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.771309+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.771309+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.777210+0000 mon.a (mon.0) 1629 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.777210+0000 mon.a (mon.0) 1629 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.778240+0000 mon.a (mon.0) 1630 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:12 vm04 bash[28289]: audit 2026-03-10T10:18:12.778240+0000 mon.a (mon.0) 1630 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:11.920214+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:11.920214+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.413291+0000 mon.c (mon.2) 266 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.413291+0000 mon.c (mon.2) 266 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.741774+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.741774+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.741935+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.741935+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: cluster 2026-03-10T10:18:12.749717+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: cluster 2026-03-10T10:18:12.749717+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.751038+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.751038+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.771048+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.771048+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.771309+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.771309+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.777210+0000 mon.a (mon.0) 1629 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.777210+0000 mon.a (mon.0) 1629 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.778240+0000 mon.a (mon.0) 1630 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:12 vm04 bash[20742]: audit 2026-03-10T10:18:12.778240+0000 mon.a (mon.0) 1630 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:13.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:18:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:18:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:11.920214+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:11.920214+0000 mon.a (mon.0) 1623 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.413291+0000 mon.c (mon.2) 266 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.413291+0000 mon.c (mon.2) 266 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.741774+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.741774+0000 mon.a (mon.0) 1624 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "MultiReads_vm04-59259-34","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.741935+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.741935+0000 mon.a (mon.0) 1625 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: cluster 2026-03-10T10:18:12.749717+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: cluster 2026-03-10T10:18:12.749717+0000 mon.a (mon.0) 1626 : cluster [DBG] osdmap e176: 8 total, 8 up, 8 in 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.751038+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.751038+0000 mon.a (mon.0) 1627 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.771048+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.771048+0000 mon.c (mon.2) 267 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.771309+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.771309+0000 mon.a (mon.0) 1628 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.777210+0000 mon.a (mon.0) 1629 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.777210+0000 mon.a (mon.0) 1629 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.778240+0000 mon.a (mon.0) 1630 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:12 vm07 bash[23367]: audit 2026-03-10T10:18:12.778240+0000 mon.a (mon.0) 1630 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: cluster 2026-03-10T10:18:12.398813+0000 mgr.y (mgr.24422) 194 : cluster [DBG] pgmap v219: 362 pgs: 40 unknown, 1 creating+activating, 30 creating+peering, 10 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 272 active+clean; 459 KiB data, 656 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: cluster 2026-03-10T10:18:12.398813+0000 mgr.y (mgr.24422) 194 : cluster [DBG] pgmap v219: 362 pgs: 40 unknown, 1 creating+activating, 30 creating+peering, 10 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 272 active+clean; 459 KiB data, 656 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: cluster 2026-03-10T10:18:12.800214+0000 mon.a (mon.0) 1631 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: cluster 2026-03-10T10:18:12.800214+0000 mon.a (mon.0) 1631 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: audit 2026-03-10T10:18:13.414376+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: audit 2026-03-10T10:18:13.414376+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: cluster 2026-03-10T10:18:13.741963+0000 mon.a (mon.0) 1632 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: cluster 2026-03-10T10:18:13.741963+0000 mon.a (mon.0) 1632 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: audit 2026-03-10T10:18:13.748357+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]': finished 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: audit 2026-03-10T10:18:13.748357+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]': finished 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: audit 2026-03-10T10:18:13.748455+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]': finished 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: audit 2026-03-10T10:18:13.748455+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]': finished 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: cluster 2026-03-10T10:18:13.757216+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: cluster 2026-03-10T10:18:13.757216+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: audit 2026-03-10T10:18:13.759168+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: audit 2026-03-10T10:18:13.759168+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: audit 2026-03-10T10:18:13.759641+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:13 vm04 bash[28289]: audit 2026-03-10T10:18:13.759641+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: cluster 2026-03-10T10:18:12.398813+0000 mgr.y (mgr.24422) 194 : cluster [DBG] pgmap v219: 362 pgs: 40 unknown, 1 creating+activating, 30 creating+peering, 10 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 272 active+clean; 459 KiB data, 656 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: cluster 2026-03-10T10:18:12.398813+0000 mgr.y (mgr.24422) 194 : cluster [DBG] pgmap v219: 362 pgs: 40 unknown, 1 creating+activating, 30 creating+peering, 10 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 272 active+clean; 459 KiB data, 656 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: cluster 2026-03-10T10:18:12.800214+0000 mon.a (mon.0) 1631 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: cluster 2026-03-10T10:18:12.800214+0000 mon.a (mon.0) 1631 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: audit 2026-03-10T10:18:13.414376+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: audit 2026-03-10T10:18:13.414376+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: cluster 2026-03-10T10:18:13.741963+0000 mon.a (mon.0) 1632 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: cluster 2026-03-10T10:18:13.741963+0000 mon.a (mon.0) 1632 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: audit 2026-03-10T10:18:13.748357+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]': finished 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: audit 2026-03-10T10:18:13.748357+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]': finished 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: audit 2026-03-10T10:18:13.748455+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]': finished 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: audit 2026-03-10T10:18:13.748455+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]': finished 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: cluster 2026-03-10T10:18:13.757216+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: cluster 2026-03-10T10:18:13.757216+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: audit 2026-03-10T10:18:13.759168+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: audit 2026-03-10T10:18:13.759168+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: audit 2026-03-10T10:18:13.759641+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:13 vm04 bash[20742]: audit 2026-03-10T10:18:13.759641+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: cluster 2026-03-10T10:18:12.398813+0000 mgr.y (mgr.24422) 194 : cluster [DBG] pgmap v219: 362 pgs: 40 unknown, 1 creating+activating, 30 creating+peering, 10 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 272 active+clean; 459 KiB data, 656 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: cluster 2026-03-10T10:18:12.398813+0000 mgr.y (mgr.24422) 194 : cluster [DBG] pgmap v219: 362 pgs: 40 unknown, 1 creating+activating, 30 creating+peering, 10 active+clean+snaptrim_wait, 9 active+clean+snaptrim, 272 active+clean; 459 KiB data, 656 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: cluster 2026-03-10T10:18:12.800214+0000 mon.a (mon.0) 1631 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: cluster 2026-03-10T10:18:12.800214+0000 mon.a (mon.0) 1631 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: audit 2026-03-10T10:18:13.414376+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: audit 2026-03-10T10:18:13.414376+0000 mon.c (mon.2) 268 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: cluster 2026-03-10T10:18:13.741963+0000 mon.a (mon.0) 1632 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: cluster 2026-03-10T10:18:13.741963+0000 mon.a (mon.0) 1632 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: audit 2026-03-10T10:18:13.748357+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]': finished 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: audit 2026-03-10T10:18:13.748357+0000 mon.a (mon.0) 1633 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-19"}]': finished 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: audit 2026-03-10T10:18:13.748455+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]': finished 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: audit 2026-03-10T10:18:13.748455+0000 mon.a (mon.0) 1634 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppend_vm04-59252-31"}]': finished 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: cluster 2026-03-10T10:18:13.757216+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: cluster 2026-03-10T10:18:13.757216+0000 mon.a (mon.0) 1635 : cluster [DBG] osdmap e177: 8 total, 8 up, 8 in 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: audit 2026-03-10T10:18:13.759168+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: audit 2026-03-10T10:18:13.759168+0000 mon.c (mon.2) 269 : audit [INF] from='client.? 192.168.123.104:0/3856061808' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: audit 2026-03-10T10:18:13.759641+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:13 vm07 bash[23367]: audit 2026-03-10T10:18:13.759641+0000 mon.a (mon.0) 1636 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]: dispatch 2026-03-10T10:18:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.400000+0000 mon.a (mon.0) 1637 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-10T10:18:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.400000+0000 mon.a (mon.0) 1637 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "29"}]: dispatch 2026-03-10T10:18:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.415379+0000 mon.c (mon.2) 270 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.415379+0000 mon.c (mon.2) 270 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.751732+0000 mon.a (mon.0) 1638 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]': finished 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.751732+0000 mon.a (mon.0) 1638 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]': finished 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.751883+0000 mon.a (mon.0) 1639 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.751883+0000 mon.a (mon.0) 1639 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "29"}]': finished 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.755159+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.104:0/1233305476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.755159+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.104:0/1233305476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: cluster 2026-03-10T10:18:14.755297+0000 mon.a (mon.0) 1640 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: cluster 2026-03-10T10:18:14.755297+0000 mon.a (mon.0) 1640 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.803424+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.803424+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.805399+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.805399+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.807606+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.807606+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.808358+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.808358+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.809152+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.809152+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.810188+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.810188+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.811075+0000 mon.a (mon.0) 1644 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:14 vm04 bash[20742]: audit 2026-03-10T10:18:14.811075+0000 mon.a (mon.0) 1644 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.400000+0000 mon.a (mon.0) 1637 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "29"}]: dispatch
2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.415379+0000 mon.c (mon.2) 270 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.751732+0000 mon.a (mon.0) 1638 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]': finished
2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.751883+0000 mon.a (mon.0) 1639 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "29"}]': finished
2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.755159+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.104:0/1233305476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: cluster 2026-03-10T10:18:14.755297+0000 mon.a (mon.0) 1640 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in
2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.803424+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.805399+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.807606+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:15.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.808358+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:15.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.809152+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:15.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.810188+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:15.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:14 vm04 bash[28289]: audit 2026-03-10T10:18:14.811075+0000 mon.a (mon.0) 1644 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.400000+0000 mon.a (mon.0) 1637 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "29"}]: dispatch
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.415379+0000 mon.c (mon.2) 270 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.751732+0000 mon.a (mon.0) 1638 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppend_vm04-59252-31"}]': finished
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.751883+0000 mon.a (mon.0) 1639 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "29"}]': finished
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.755159+0000 mon.b (mon.1) 158 : audit [INF] from='client.? 192.168.123.104:0/1233305476' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: cluster 2026-03-10T10:18:14.755297+0000 mon.a (mon.0) 1640 : cluster [DBG] osdmap e178: 8 total, 8 up, 8 in
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.803424+0000 mon.b (mon.1) 159 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.805399+0000 mon.a (mon.0) 1641 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.807606+0000 mon.b (mon.1) 160 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.808358+0000 mon.b (mon.1) 161 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.809152+0000 mon.a (mon.0) 1642 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.810188+0000 mon.a (mon.0) 1643 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:15.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:14 vm07 bash[23367]: audit 2026-03-10T10:18:14.811075+0000 mon.a (mon.0) 1644 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:15 vm04 bash[28289]: cluster 2026-03-10T10:18:14.399320+0000 mgr.y (mgr.24422) 195 : cluster [DBG] pgmap v222: 322 pgs: 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 304 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:18:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:15 vm04 bash[28289]: audit 2026-03-10T10:18:15.416238+0000 mon.c (mon.2) 271 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:15 vm04 bash[28289]: audit 2026-03-10T10:18:15.755640+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:15 vm04 bash[28289]: audit 2026-03-10T10:18:15.755704+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:15 vm04 bash[28289]: cluster 2026-03-10T10:18:15.762864+0000 mon.a (mon.0) 1647 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in
2026-03-10T10:18:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:15 vm04 bash[28289]: audit 2026-03-10T10:18:15.769952+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm04-59252-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:15 vm04 bash[28289]: audit 2026-03-10T10:18:15.772672+0000 mon.a (mon.0) 1648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm04-59252-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:15 vm04 bash[28289]: audit 2026-03-10T10:18:15.774948+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:15 vm04 bash[20742]: cluster 2026-03-10T10:18:14.399320+0000 mgr.y (mgr.24422) 195 : cluster [DBG] pgmap v222: 322 pgs: 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 304 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:18:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:15 vm04 bash[20742]: audit 2026-03-10T10:18:15.416238+0000 mon.c (mon.2) 271 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:15 vm04 bash[20742]: audit 2026-03-10T10:18:15.755640+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:15 vm04 bash[20742]: audit 2026-03-10T10:18:15.755704+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:15 vm04 bash[20742]: cluster 2026-03-10T10:18:15.762864+0000 mon.a (mon.0) 1647 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in
2026-03-10T10:18:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:15 vm04 bash[20742]: audit 2026-03-10T10:18:15.769952+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm04-59252-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:15 vm04 bash[20742]: audit 2026-03-10T10:18:15.772672+0000 mon.a (mon.0) 1648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm04-59252-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:15 vm04 bash[20742]: audit 2026-03-10T10:18:15.774948+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:15 vm07 bash[23367]: cluster 2026-03-10T10:18:14.399320+0000 mgr.y (mgr.24422) 195 : cluster [DBG] pgmap v222: 322 pgs: 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 304 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:18:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:15 vm07 bash[23367]: audit 2026-03-10T10:18:15.416238+0000 mon.c (mon.2) 271 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:15 vm07 bash[23367]: audit 2026-03-10T10:18:15.755640+0000 mon.a (mon.0) 1645 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "ReadIntoBufferlist_vm04-59259-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:15 vm07 bash[23367]: audit 2026-03-10T10:18:15.755704+0000 mon.a (mon.0) 1646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsComplete_vm04-59252-32", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:15 vm07 bash[23367]: cluster 2026-03-10T10:18:15.762864+0000 mon.a (mon.0) 1647 : cluster [DBG] osdmap e179: 8 total, 8 up, 8 in
2026-03-10T10:18:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:15 vm07 bash[23367]: audit 2026-03-10T10:18:15.769952+0000 mon.b (mon.1) 162 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm04-59252-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:15 vm07 bash[23367]: audit 2026-03-10T10:18:15.772672+0000 mon.a (mon.0) 1648 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsComplete_vm04-59252-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:15 vm07 bash[23367]: audit 2026-03-10T10:18:15.774948+0000 mon.a (mon.0) 1649 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-21","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:16 vm04 bash[28289]: audit 2026-03-10T10:18:16.416962+0000 mon.c (mon.2) 272 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:16 vm04 bash[28289]: audit 2026-03-10T10:18:16.759872+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:16 vm04 bash[28289]: cluster 2026-03-10T10:18:16.772634+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in
2026-03-10T10:18:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:16 vm04 bash[28289]: audit 2026-03-10T10:18:16.827072+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:16 vm04 bash[20742]: audit 2026-03-10T10:18:16.416962+0000 mon.c (mon.2) 272 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:16 vm04 bash[20742]: audit 2026-03-10T10:18:16.759872+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:16 vm04 bash[20742]: cluster 2026-03-10T10:18:16.772634+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in
2026-03-10T10:18:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:16 vm04 bash[20742]: audit 2026-03-10T10:18:16.827072+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:16 vm07 bash[23367]: audit 2026-03-10T10:18:16.416962+0000 mon.c (mon.2) 272 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:17.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:16 vm07 bash[23367]: audit 2026-03-10T10:18:16.759872+0000 mon.a (mon.0) 1650 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-21","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:17.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:16 vm07 bash[23367]: cluster 2026-03-10T10:18:16.772634+0000 mon.a (mon.0) 1651 : cluster [DBG] osdmap e180: 8 total, 8 up, 8 in
2026-03-10T10:18:17.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:16 vm07 bash[23367]: audit 2026-03-10T10:18:16.827072+0000 mon.a (mon.0) 1652 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:17 vm04 bash[28289]: cluster 2026-03-10T10:18:16.399739+0000 mgr.y (mgr.24422) 196 : cluster [DBG] pgmap v225: 354 pgs: 64 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 272 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:17 vm04 bash[28289]: audit 2026-03-10T10:18:17.417856+0000 mon.c (mon.2) 273 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:17 vm04 bash[28289]: audit 2026-03-10T10:18:17.845750+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm04-59252-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm04-59252-32"}]': finished
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:17 vm04 bash[28289]: audit 2026-03-10T10:18:17.845850+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:17 vm04 bash[28289]: cluster 2026-03-10T10:18:17.849814+0000 mon.a (mon.0) 1655 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:17 vm04 bash[28289]: audit 2026-03-10T10:18:17.851346+0000 mon.a (mon.0) 1656 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-21"}]: dispatch
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:17 vm04 bash[28289]: audit 2026-03-10T10:18:17.855565+0000 mon.c (mon.2) 274 : audit [INF] from='client.? 192.168.123.104:0/2321472316' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:17 vm04 bash[28289]: audit 2026-03-10T10:18:17.867489+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:17 vm04 bash[20742]: cluster 2026-03-10T10:18:16.399739+0000 mgr.y (mgr.24422) 196 : cluster [DBG] pgmap v225: 354 pgs: 64 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 272 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:17 vm04 bash[20742]: audit 2026-03-10T10:18:17.417856+0000 mon.c (mon.2) 273 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:17 vm04 bash[20742]: audit 2026-03-10T10:18:17.845750+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm04-59252-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm04-59252-32"}]': finished
2026-03-10T10:18:18.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:17 vm04 bash[20742]: audit 2026-03-10T10:18:17.845850+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:18.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:17 vm04 bash[20742]: cluster 2026-03-10T10:18:17.849814+0000 mon.a (mon.0) 1655 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in
2026-03-10T10:18:18.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:17 vm04 bash[20742]: audit 2026-03-10T10:18:17.851346+0000 mon.a (mon.0) 1656 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-21"}]: dispatch
2026-03-10T10:18:18.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:17 vm04 bash[20742]: audit 2026-03-10T10:18:17.855565+0000 mon.c (mon.2) 274 : audit [INF] from='client.? 192.168.123.104:0/2321472316' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:18.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:17 vm04 bash[20742]: audit 2026-03-10T10:18:17.867489+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:18.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:17 vm07 bash[23367]: cluster 2026-03-10T10:18:16.399739+0000 mgr.y (mgr.24422) 196 : cluster [DBG] pgmap v225: 354 pgs: 64 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 272 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:18:18.281 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:17 vm07 bash[23367]: audit 2026-03-10T10:18:17.417856+0000 mon.c (mon.2) 273 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:18.281 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:17 vm07 bash[23367]: audit 2026-03-10T10:18:17.845750+0000 mon.a (mon.0) 1653 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsComplete_vm04-59252-32", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsComplete_vm04-59252-32"}]': finished
2026-03-10T10:18:18.281 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:17 vm07 bash[23367]: audit 2026-03-10T10:18:17.845850+0000 mon.a (mon.0) 1654 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:18.281 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:17 vm07 bash[23367]: cluster 2026-03-10T10:18:17.849814+0000 mon.a (mon.0) 1655 : cluster [DBG] osdmap e181: 8 total, 8 up, 8 in
2026-03-10T10:18:18.281 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:17 vm07 bash[23367]: audit 2026-03-10T10:18:17.851346+0000 mon.a (mon.0) 1656 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-21"}]: dispatch
2026-03-10T10:18:18.281 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:17 vm07 bash[23367]: audit 2026-03-10T10:18:17.855565+0000 mon.c (mon.2) 274 : audit [INF] from='client.? 192.168.123.104:0/2321472316' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:18.281 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:17 vm07 bash[23367]: audit 2026-03-10T10:18:17.867489+0000 mon.a (mon.0) 1657 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:18.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:18:18 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:18:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:19 vm04 bash[28289]: audit 2026-03-10T10:18:18.294263+0000 mgr.y (mgr.24422) 197 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:19 vm04 bash[28289]: audit 2026-03-10T10:18:18.401500+0000 mon.a (mon.0) 1658 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "29"}]: dispatch
2026-03-10T10:18:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:19 vm04 bash[28289]: audit 2026-03-10T10:18:18.418821+0000 mon.c (mon.2) 275 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:19 vm04 bash[20742]: audit 2026-03-10T10:18:18.294263+0000 mgr.y (mgr.24422) 197 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:19 vm04 bash[20742]: audit 2026-03-10T10:18:18.401500+0000 mon.a (mon.0) 1658 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "29"}]: dispatch
2026-03-10T10:18:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:19 vm04 bash[20742]: audit 2026-03-10T10:18:18.418821+0000 mon.c (mon.2) 275 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:19.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:19 vm07 bash[23367]: audit 2026-03-10T10:18:18.294263+0000 mgr.y (mgr.24422) 197 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:19 vm07 bash[23367]: audit 2026-03-10T10:18:18.401500+0000 mon.a (mon.0) 1658 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "29"}]: dispatch
2026-03-10T10:18:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:19 vm07 bash[23367]: audit 2026-03-10T10:18:18.418821+0000 mon.c (mon.2) 275 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: cluster 2026-03-10T10:18:18.400376+0000 mgr.y (mgr.24422) 198 : cluster [DBG] pgmap v228: 362 pgs: 27 creating+peering, 30 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 31 B/s, 0 objects/s recovering
2026-03-10T10:18:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: cluster 2026-03-10T10:18:18.999984+0000 mon.a (mon.0) 1659 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:19.140432+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-21"}]': finished
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:19.141129+0000 mon.a (mon.0) 1661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:19.141165+0000 mon.a (mon.0) 1662 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "29"}]': finished
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: cluster 2026-03-10T10:18:19.227762+0000 mon.a (mon.0) 1663 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:19.278917+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-21", "mode": "writeback"}]: dispatch
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:19.419562+0000 mon.c (mon.2) 276 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: cluster 2026-03-10T10:18:20.140807+0000 mon.a (mon.0) 1665 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:20.146120+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-21", "mode": "writeback"}]': finished
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: cluster 2026-03-10T10:18:20.161546+0000 mon.a (mon.0) 1667 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:20.164182+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:20.166752+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:20.168239+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:20.226449+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:20 vm04 bash[28289]: audit 2026-03-10T10:18:20.226449+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: cluster 2026-03-10T10:18:18.400376+0000 mgr.y (mgr.24422) 198 : cluster [DBG] pgmap v228: 362 pgs: 27 creating+peering, 30 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: cluster 2026-03-10T10:18:18.400376+0000 mgr.y (mgr.24422) 198 : cluster [DBG] pgmap v228: 362 pgs: 27 creating+peering, 30 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: cluster 2026-03-10T10:18:18.999984+0000 mon.a (mon.0) 1659 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: cluster 2026-03-10T10:18:18.999984+0000 mon.a (mon.0) 1659 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:19.140432+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-21"}]': finished 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:19.140432+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-21"}]': finished 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:19.141129+0000 mon.a (mon.0) 1661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:19.141129+0000 mon.a (mon.0) 1661 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:19.141165+0000 mon.a (mon.0) 1662 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "29"}]': finished 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:19.141165+0000 mon.a (mon.0) 1662 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "29"}]': finished 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: cluster 2026-03-10T10:18:19.227762+0000 mon.a (mon.0) 1663 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: cluster 2026-03-10T10:18:19.227762+0000 mon.a (mon.0) 1663 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:19.278917+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-21", "mode": "writeback"}]: dispatch 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:19.278917+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-21", "mode": "writeback"}]: dispatch 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:19.419562+0000 mon.c (mon.2) 276 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:19.419562+0000 mon.c (mon.2) 276 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: cluster 2026-03-10T10:18:20.140807+0000 mon.a (mon.0) 1665 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: cluster 2026-03-10T10:18:20.140807+0000 mon.a (mon.0) 1665 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:20.146120+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-21", "mode": "writeback"}]': finished 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:20.146120+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-21", "mode": "writeback"}]': finished 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: cluster 2026-03-10T10:18:20.161546+0000 mon.a (mon.0) 1667 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: cluster 2026-03-10T10:18:20.161546+0000 mon.a (mon.0) 1667 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-10T10:18:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:20.164182+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:20.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:20.164182+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:20.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:20.166752+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:20.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:20.166752+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:20.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:20.168239+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:20.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:20.168239+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:20.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:20.226449+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:20.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:20 vm04 bash[20742]: audit 2026-03-10T10:18:20.226449+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: cluster 2026-03-10T10:18:18.400376+0000 mgr.y (mgr.24422) 198 : cluster [DBG] pgmap v228: 362 pgs: 27 creating+peering, 30 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: cluster 2026-03-10T10:18:18.400376+0000 mgr.y (mgr.24422) 198 : cluster [DBG] pgmap v228: 362 pgs: 27 creating+peering, 30 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: cluster 2026-03-10T10:18:18.999984+0000 mon.a (mon.0) 1659 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: cluster 2026-03-10T10:18:18.999984+0000 mon.a (mon.0) 1659 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:19.140432+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-21"}]': finished 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:19.140432+0000 mon.a (mon.0) 1660 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-21"}]': finished 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:19.141129+0000 mon.a (mon.0) 1661 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:19.141129+0000 mon.a (mon.0) 1661 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-36","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:19.141165+0000 mon.a (mon.0) 1662 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "29"}]': finished 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:19.141165+0000 mon.a (mon.0) 1662 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "29"}]': finished 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: cluster 2026-03-10T10:18:19.227762+0000 mon.a (mon.0) 1663 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: cluster 2026-03-10T10:18:19.227762+0000 mon.a (mon.0) 1663 : cluster [DBG] osdmap e182: 8 total, 8 up, 8 in 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:19.278917+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-21", "mode": "writeback"}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:19.278917+0000 mon.a (mon.0) 1664 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-21", "mode": "writeback"}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:19.419562+0000 mon.c (mon.2) 276 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:19.419562+0000 mon.c (mon.2) 276 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: cluster 2026-03-10T10:18:20.140807+0000 mon.a (mon.0) 1665 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: cluster 2026-03-10T10:18:20.140807+0000 mon.a (mon.0) 1665 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:20.146120+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-21", "mode": "writeback"}]': finished 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:20.146120+0000 mon.a (mon.0) 1666 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-21", "mode": "writeback"}]': finished 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: cluster 2026-03-10T10:18:20.161546+0000 mon.a (mon.0) 1667 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: cluster 2026-03-10T10:18:20.161546+0000 mon.a (mon.0) 1667 : cluster [DBG] osdmap e183: 8 total, 8 up, 8 in 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:20.164182+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:20.164182+0000 mon.b (mon.1) 163 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:20.166752+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:20.166752+0000 mon.a (mon.0) 1668 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:20.168239+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:20.168239+0000 mon.a (mon.0) 1669 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:20.226449+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:20 vm07 bash[23367]: audit 2026-03-10T10:18:20.226449+0000 mon.a (mon.0) 1670 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: cluster 2026-03-10T10:18:20.400997+0000 mgr.y (mgr.24422) 199 : cluster [DBG] pgmap v231: 386 pgs: 32 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 304 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: cluster 2026-03-10T10:18:20.400997+0000 mgr.y (mgr.24422) 199 : cluster [DBG] pgmap v231: 386 pgs: 32 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 304 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:20.420545+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:20.420545+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.334145+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.334145+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.334303+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.334303+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.334682+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.334682+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: cluster 2026-03-10T10:18:21.342797+0000 mon.a (mon.0) 1674 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: cluster 2026-03-10T10:18:21.342797+0000 mon.a (mon.0) 1674 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.343992+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.343992+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.361650+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.361650+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.361784+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]: dispatch 2026-03-10T10:18:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:21 vm07 bash[23367]: audit 2026-03-10T10:18:21.361784+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]: dispatch 2026-03-10T10:18:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: cluster 2026-03-10T10:18:20.400997+0000 mgr.y (mgr.24422) 199 : cluster [DBG] pgmap v231: 386 pgs: 32 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 304 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: cluster 2026-03-10T10:18:20.400997+0000 mgr.y (mgr.24422) 199 : cluster [DBG] pgmap v231: 386 pgs: 32 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 304 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:20.420545+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:20.420545+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.334145+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.334145+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.334303+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.334303+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.334682+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.334682+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: cluster 2026-03-10T10:18:21.342797+0000 mon.a (mon.0) 1674 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: cluster 2026-03-10T10:18:21.342797+0000 mon.a (mon.0) 1674 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.343992+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.343992+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 
192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.361650+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.361650+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.361784+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:21 vm04 bash[28289]: audit 2026-03-10T10:18:21.361784+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: cluster 2026-03-10T10:18:20.400997+0000 mgr.y (mgr.24422) 199 : cluster [DBG] pgmap v231: 386 pgs: 32 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 304 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: cluster 2026-03-10T10:18:20.400997+0000 mgr.y (mgr.24422) 199 : cluster [DBG] pgmap v231: 386 pgs: 32 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 304 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:20.420545+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:20.420545+0000 mon.c (mon.2) 277 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.334145+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.334145+0000 mon.a (mon.0) 1671 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.334303+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.334303+0000 mon.a (mon.0) 1672 : audit [INF] from='client.? 192.168.123.104:0/4201828935' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-37","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.334682+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.334682+0000 mon.a (mon.0) 1673 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: cluster 2026-03-10T10:18:21.342797+0000 mon.a (mon.0) 1674 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: cluster 2026-03-10T10:18:21.342797+0000 mon.a (mon.0) 1674 : cluster [DBG] osdmap e184: 8 total, 8 up, 8 in 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.343992+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.343992+0000 mon.b (mon.1) 164 : audit [INF] from='client.? 192.168.123.104:0/2248729316' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.361650+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.361650+0000 mon.a (mon.0) 1675 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]: dispatch 2026-03-10T10:18:21.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.361784+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]: dispatch 2026-03-10T10:18:21.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:21 vm04 bash[20742]: audit 2026-03-10T10:18:21.361784+0000 mon.a (mon.0) 1676 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]: dispatch 2026-03-10T10:18:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:21.442495+0000 mon.c (mon.2) 278 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:21.442495+0000 mon.c (mon.2) 278 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: cluster 2026-03-10T10:18:22.334358+0000 mon.a (mon.0) 1677 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: cluster 2026-03-10T10:18:22.334358+0000 mon.a (mon.0) 1677 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.339161+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.339161+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.339273+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]': finished 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.339273+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]': finished 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: cluster 2026-03-10T10:18:22.364106+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: cluster 2026-03-10T10:18:22.364106+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.370320+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? 
192.168.123.104:0/2485058037' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.370320+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? 192.168.123.104:0/2485058037' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.382983+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.382983+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.388278+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.388278+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.389734+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.389734+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.391198+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.391198+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.392788+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.392788+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.394186+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.394186+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.444441+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.444441+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.445671+0000 mon.a (mon.0) 1685 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:18:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:22 vm07 bash[23367]: audit 2026-03-10T10:18:22.445671+0000 mon.a (mon.0) 1685 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:21.442495+0000 mon.c (mon.2) 278 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:21.442495+0000 mon.c (mon.2) 278 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: cluster 2026-03-10T10:18:22.334358+0000 mon.a (mon.0) 1677 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: cluster 2026-03-10T10:18:22.334358+0000 mon.a (mon.0) 1677 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.339161+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.339161+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.339273+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]': finished 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.339273+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]': finished 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: cluster 2026-03-10T10:18:22.364106+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: cluster 2026-03-10T10:18:22.364106+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.370320+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? 192.168.123.104:0/2485058037' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.370320+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? 192.168.123.104:0/2485058037' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.382983+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.382983+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.388278+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.388278+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 
192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.389734+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.389734+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.391198+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.391198+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.392788+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.392788+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.394186+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.394186+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.444441+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.444441+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.445671+0000 mon.a (mon.0) 1685 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:22 vm04 bash[28289]: audit 2026-03-10T10:18:22.445671+0000 mon.a (mon.0) 1685 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:21.442495+0000 mon.c (mon.2) 278 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:21.442495+0000 mon.c (mon.2) 278 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: cluster 2026-03-10T10:18:22.334358+0000 mon.a (mon.0) 1677 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: cluster 2026-03-10T10:18:22.334358+0000 mon.a (mon.0) 1677 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.339161+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.339161+0000 mon.a (mon.0) 1678 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsComplete_vm04-59252-32"}]': finished 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.339273+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]': finished 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.339273+0000 mon.a (mon.0) 1679 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-21"}]': finished 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: cluster 2026-03-10T10:18:22.364106+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: cluster 2026-03-10T10:18:22.364106+0000 mon.a (mon.0) 1680 : cluster [DBG] osdmap e185: 8 total, 8 up, 8 in 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.370320+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? 192.168.123.104:0/2485058037' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.370320+0000 mon.a (mon.0) 1681 : audit [INF] from='client.? 192.168.123.104:0/2485058037' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.382983+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.382983+0000 mon.b (mon.1) 165 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.388278+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.388278+0000 mon.b (mon.1) 166 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.389734+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.389734+0000 mon.a (mon.0) 1682 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.391198+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 
192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.391198+0000 mon.b (mon.1) 167 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.392788+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.392788+0000 mon.a (mon.0) 1683 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:22.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.394186+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.394186+0000 mon.a (mon.0) 1684 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:22.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.444441+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.444441+0000 mon.c (mon.2) 279 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:22.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.445671+0000 mon.a (mon.0) 1685 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:18:22.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:22 vm04 bash[20742]: audit 2026-03-10T10:18:22.445671+0000 mon.a (mon.0) 1685 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:18:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:18:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:18:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:18:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: cluster 2026-03-10T10:18:22.401425+0000 mgr.y (mgr.24422) 200 : cluster [DBG] pgmap v234: 417 pgs: 64 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 303 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: cluster 2026-03-10T10:18:22.401425+0000 mgr.y (mgr.24422) 200 : cluster [DBG] pgmap v234: 417 pgs: 64 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 303 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:22.808685+0000 mon.a (mon.0) 1686 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:18:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:22.808685+0000 mon.a (mon.0) 1686 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:18:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:22.809263+0000 mon.a (mon.0) 1687 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:18:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:22.809263+0000 mon.a (mon.0) 1687 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:18:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:22.971220+0000 mon.a (mon.0) 1688 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:22.971220+0000 mon.a (mon.0) 1688 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:23.409577+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 
192.168.123.104:0/2485058037' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:23.409577+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 192.168.123.104:0/2485058037' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:23.409789+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:23.409789+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:23.414568+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:23.414568+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: cluster 2026-03-10T10:18:23.416120+0000 mon.a (mon.0) 1691 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: cluster 2026-03-10T10:18:23.416120+0000 mon.a (mon.0) 1691 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:23.417928+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:23.417928+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:23.445217+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:23 vm04 bash[28289]: audit 2026-03-10T10:18:23.445217+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: cluster 2026-03-10T10:18:22.401425+0000 mgr.y (mgr.24422) 200 : cluster [DBG] pgmap v234: 417 pgs: 64 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 303 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: cluster 2026-03-10T10:18:22.401425+0000 mgr.y (mgr.24422) 200 : cluster [DBG] pgmap v234: 417 pgs: 64 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 303 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:22.808685+0000 mon.a (mon.0) 1686 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:22.808685+0000 mon.a (mon.0) 1686 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:22.809263+0000 mon.a (mon.0) 1687 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:22.809263+0000 mon.a (mon.0) 1687 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:22.971220+0000 mon.a (mon.0) 1688 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:22.971220+0000 mon.a (mon.0) 1688 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:23.409577+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 192.168.123.104:0/2485058037' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:23.409577+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 
192.168.123.104:0/2485058037' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:23.409789+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:23.409789+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:23.414568+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:23.414568+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: cluster 2026-03-10T10:18:23.416120+0000 mon.a (mon.0) 1691 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: cluster 2026-03-10T10:18:23.416120+0000 mon.a (mon.0) 1691 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:23.417928+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:23.417928+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:23.445217+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:23.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:23 vm04 bash[20742]: audit 2026-03-10T10:18:23.445217+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: cluster 2026-03-10T10:18:22.401425+0000 mgr.y (mgr.24422) 200 : cluster [DBG] pgmap v234: 417 pgs: 64 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 303 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: cluster 2026-03-10T10:18:22.401425+0000 mgr.y (mgr.24422) 200 : cluster [DBG] pgmap v234: 417 pgs: 64 unknown, 32 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 303 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:22.808685+0000 mon.a (mon.0) 1686 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:22.808685+0000 mon.a (mon.0) 1686 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:22.809263+0000 mon.a (mon.0) 1687 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:22.809263+0000 mon.a (mon.0) 1687 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:22.971220+0000 mon.a (mon.0) 1688 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:22.971220+0000 mon.a (mon.0) 1688 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:23.409577+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 192.168.123.104:0/2485058037' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:23.409577+0000 mon.a (mon.0) 1689 : audit [INF] from='client.? 192.168.123.104:0/2485058037' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrsRoundTripPP_vm04-59259-38","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:23.409789+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:23.409789+0000 mon.a (mon.0) 1690 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafe_vm04-59252-33", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:23.414568+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:23.414568+0000 mon.b (mon.1) 168 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: cluster 2026-03-10T10:18:23.416120+0000 mon.a (mon.0) 1691 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: cluster 2026-03-10T10:18:23.416120+0000 mon.a (mon.0) 1691 : cluster [DBG] osdmap e186: 8 total, 8 up, 8 in 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:23.417928+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:23.417928+0000 mon.a (mon.0) 1692 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:23.445217+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:24.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:23 vm07 bash[23367]: audit 2026-03-10T10:18:23.445217+0000 mon.c (mon.2) 280 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:25 vm04 bash[28289]: cluster 2026-03-10T10:18:24.402324+0000 mgr.y (mgr.24422) 201 : cluster [DBG] pgmap v236: 385 pgs: 13 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 354 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 481 B/s wr, 2 op/s 2026-03-10T10:18:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:25 vm04 bash[28289]: cluster 2026-03-10T10:18:24.402324+0000 mgr.y (mgr.24422) 201 : cluster [DBG] pgmap v236: 385 pgs: 13 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 354 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 481 B/s wr, 2 op/s 2026-03-10T10:18:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:25 vm04 bash[28289]: audit 2026-03-10T10:18:24.446133+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:25 vm04 bash[28289]: audit 2026-03-10T10:18:24.446133+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:25 vm04 bash[28289]: cluster 2026-03-10T10:18:24.447539+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T10:18:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:25 vm04 bash[28289]: cluster 2026-03-10T10:18:24.447539+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T10:18:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:25 vm04 bash[28289]: audit 2026-03-10T10:18:24.450505+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:25 vm04 bash[28289]: audit 2026-03-10T10:18:24.450505+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:25 vm04 bash[28289]: cluster 2026-03-10T10:18:24.516586+0000 mon.a (mon.0) 1695 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:25 vm04 bash[28289]: cluster 2026-03-10T10:18:24.516586+0000 mon.a (mon.0) 1695 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:25 vm04 bash[20742]: cluster 2026-03-10T10:18:24.402324+0000 mgr.y (mgr.24422) 201 : cluster [DBG] pgmap v236: 385 pgs: 13 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 354 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 481 B/s wr, 2 op/s 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:25 vm04 bash[20742]: cluster 2026-03-10T10:18:24.402324+0000 mgr.y (mgr.24422) 201 : cluster [DBG] pgmap v236: 385 pgs: 13 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 354 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 481 B/s wr, 2 op/s 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:25 vm04 bash[20742]: audit 2026-03-10T10:18:24.446133+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:25 vm04 bash[20742]: audit 2026-03-10T10:18:24.446133+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:25 vm04 bash[20742]: cluster 2026-03-10T10:18:24.447539+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:25 vm04 bash[20742]: cluster 2026-03-10T10:18:24.447539+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:25 vm04 bash[20742]: audit 2026-03-10T10:18:24.450505+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:25 vm04 bash[20742]: audit 2026-03-10T10:18:24.450505+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:25 vm04 bash[20742]: cluster 2026-03-10T10:18:24.516586+0000 mon.a (mon.0) 1695 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:25.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:25 vm04 bash[20742]: cluster 2026-03-10T10:18:24.516586+0000 mon.a (mon.0) 1695 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:25 vm07 bash[23367]: cluster 2026-03-10T10:18:24.402324+0000 mgr.y (mgr.24422) 201 : cluster [DBG] pgmap v236: 385 pgs: 13 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 354 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 481 B/s wr, 2 op/s 2026-03-10T10:18:26.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:25 vm07 bash[23367]: cluster 2026-03-10T10:18:24.402324+0000 mgr.y (mgr.24422) 201 : cluster [DBG] pgmap v236: 385 pgs: 13 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 354 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 481 B/s wr, 2 op/s 2026-03-10T10:18:26.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:25 vm07 bash[23367]: audit 2026-03-10T10:18:24.446133+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:26.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:25 vm07 bash[23367]: audit 2026-03-10T10:18:24.446133+0000 mon.c (mon.2) 281 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:26.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:25 vm07 bash[23367]: cluster 2026-03-10T10:18:24.447539+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T10:18:26.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:25 vm07 bash[23367]: cluster 2026-03-10T10:18:24.447539+0000 mon.a (mon.0) 1693 : cluster [DBG] osdmap e187: 8 total, 8 up, 8 in 2026-03-10T10:18:26.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:25 vm07 bash[23367]: audit 2026-03-10T10:18:24.450505+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:26.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:25 vm07 bash[23367]: audit 2026-03-10T10:18:24.450505+0000 mon.a (mon.0) 1694 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:26.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:25 vm07 bash[23367]: cluster 2026-03-10T10:18:24.516586+0000 mon.a (mon.0) 1695 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:26.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:25 vm07 bash[23367]: cluster 2026-03-10T10:18:24.516586+0000 mon.a (mon.0) 1695 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: audit 2026-03-10T10:18:25.447315+0000 mon.c (mon.2) 282 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: audit 2026-03-10T10:18:25.447315+0000 mon.c (mon.2) 282 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: audit 2026-03-10T10:18:25.546992+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: audit 2026-03-10T10:18:25.546992+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: audit 2026-03-10T10:18:25.547093+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: audit 2026-03-10T10:18:25.547093+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: cluster 2026-03-10T10:18:25.552456+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T10:18:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: cluster 2026-03-10T10:18:25.552456+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: cluster 2026-03-10T10:18:26.441790+0000 mon.a (mon.0) 1699 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: cluster 2026-03-10T10:18:26.441790+0000 mon.a (mon.0) 1699 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: audit 2026-03-10T10:18:26.449715+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: audit 2026-03-10T10:18:26.449715+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: audit 2026-03-10T10:18:26.501926+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:26 vm04 bash[28289]: audit 2026-03-10T10:18:26.501926+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: audit 2026-03-10T10:18:25.447315+0000 mon.c (mon.2) 282 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: audit 2026-03-10T10:18:25.447315+0000 mon.c (mon.2) 282 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: audit 2026-03-10T10:18:25.546992+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: audit 2026-03-10T10:18:25.546992+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: audit 2026-03-10T10:18:25.547093+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: audit 2026-03-10T10:18:25.547093+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: cluster 2026-03-10T10:18:25.552456+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: cluster 2026-03-10T10:18:25.552456+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: cluster 2026-03-10T10:18:26.441790+0000 mon.a (mon.0) 1699 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: cluster 2026-03-10T10:18:26.441790+0000 mon.a (mon.0) 1699 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: audit 2026-03-10T10:18:26.449715+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: audit 2026-03-10T10:18:26.449715+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: audit 2026-03-10T10:18:26.501926+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:26 vm04 bash[20742]: audit 2026-03-10T10:18:26.501926+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: audit 2026-03-10T10:18:25.447315+0000 mon.c (mon.2) 282 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: audit 2026-03-10T10:18:25.447315+0000 mon.c (mon.2) 282 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: audit 2026-03-10T10:18:25.546992+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: audit 2026-03-10T10:18:25.546992+0000 mon.a (mon.0) 1696 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafe_vm04-59252-33", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: audit 2026-03-10T10:18:25.547093+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: audit 2026-03-10T10:18:25.547093+0000 mon.a (mon.0) 1697 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-23","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: cluster 2026-03-10T10:18:25.552456+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: cluster 2026-03-10T10:18:25.552456+0000 mon.a (mon.0) 1698 : cluster [DBG] osdmap e188: 8 total, 8 up, 8 in 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: cluster 2026-03-10T10:18:26.441790+0000 mon.a (mon.0) 1699 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: cluster 2026-03-10T10:18:26.441790+0000 mon.a (mon.0) 1699 : cluster [DBG] osdmap e189: 8 total, 8 up, 8 in 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: audit 2026-03-10T10:18:26.449715+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: audit 2026-03-10T10:18:26.449715+0000 mon.c (mon.2) 283 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: audit 2026-03-10T10:18:26.501926+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:26 vm07 bash[23367]: audit 2026-03-10T10:18:26.501926+0000 mon.a (mon.0) 1700 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: cluster 2026-03-10T10:18:26.402694+0000 mgr.y (mgr.24422) 202 : cluster [DBG] pgmap v239: 361 pgs: 40 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 303 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: cluster 2026-03-10T10:18:26.402694+0000 mgr.y (mgr.24422) 202 : cluster [DBG] pgmap v239: 361 pgs: 40 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 303 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.421919+0000 mon.a (mon.0) 1701 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.421919+0000 mon.a (mon.0) 1701 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: cluster 2026-03-10T10:18:27.426049+0000 mon.a (mon.0) 1702 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: cluster 2026-03-10T10:18:27.426049+0000 mon.a (mon.0) 1702 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.426433+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.426433+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.427524+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 
192.168.123.104:0/3104931346' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.427524+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.104:0/3104931346' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.430171+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-23"}]: dispatch 2026-03-10T10:18:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.430171+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-23"}]: dispatch 2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.430247+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.430247+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.434185+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.434185+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.451019+0000 mon.c (mon.2) 285 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:27 vm04 bash[28289]: audit 2026-03-10T10:18:27.451019+0000 mon.c (mon.2) 285 : audit [DBG] from='client.? 
2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:27 vm04 bash[20742]: cluster 2026-03-10T10:18:26.402694+0000 mgr.y (mgr.24422) 202 : cluster [DBG] pgmap v239: 361 pgs: 40 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 303 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s; 31 B/s, 0 objects/s recovering
2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:27 vm04 bash[20742]: audit 2026-03-10T10:18:27.421919+0000 mon.a (mon.0) 1701 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:27 vm04 bash[20742]: cluster 2026-03-10T10:18:27.426049+0000 mon.a (mon.0) 1702 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in
2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:27 vm04 bash[20742]: audit 2026-03-10T10:18:27.426433+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch
2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:27 vm04 bash[20742]: audit 2026-03-10T10:18:27.427524+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.104:0/3104931346' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:27 vm04 bash[20742]: audit 2026-03-10T10:18:27.430171+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-23"}]: dispatch
2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:27 vm04 bash[20742]: audit 2026-03-10T10:18:27.430247+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:27 vm04 bash[20742]: audit 2026-03-10T10:18:27.434185+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch
2026-03-10T10:18:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:27 vm04 bash[20742]: audit 2026-03-10T10:18:27.451019+0000 mon.c (mon.2) 285 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:27 vm07 bash[23367]: cluster 2026-03-10T10:18:26.402694+0000 mgr.y (mgr.24422) 202 : cluster [DBG] pgmap v239: 361 pgs: 40 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 303 active+clean; 459 KiB data, 661 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s; 31 B/s, 0 objects/s recovering
2026-03-10T10:18:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:27 vm07 bash[23367]: audit 2026-03-10T10:18:27.421919+0000 mon.a (mon.0) 1701 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:27 vm07 bash[23367]: cluster 2026-03-10T10:18:27.426049+0000 mon.a (mon.0) 1702 : cluster [DBG] osdmap e190: 8 total, 8 up, 8 in
2026-03-10T10:18:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:27 vm07 bash[23367]: audit 2026-03-10T10:18:27.426433+0000 mon.b (mon.1) 169 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch
2026-03-10T10:18:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:27 vm07 bash[23367]: audit 2026-03-10T10:18:27.427524+0000 mon.c (mon.2) 284 : audit [INF] from='client.? 192.168.123.104:0/3104931346' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:27 vm07 bash[23367]: audit 2026-03-10T10:18:27.430171+0000 mon.a (mon.0) 1703 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-23"}]: dispatch
2026-03-10T10:18:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:27 vm07 bash[23367]: audit 2026-03-10T10:18:27.430247+0000 mon.a (mon.0) 1704 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:27 vm07 bash[23367]: audit 2026-03-10T10:18:27.434185+0000 mon.a (mon.0) 1705 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]: dispatch
2026-03-10T10:18:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:27 vm07 bash[23367]: audit 2026-03-10T10:18:27.451019+0000 mon.c (mon.2) 285 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:28.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:18:28 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:18:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:28 vm04 bash[28289]: audit 2026-03-10T10:18:27.791186+0000 mon.a (mon.0) 1706 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:18:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:28 vm04 bash[28289]: audit 2026-03-10T10:18:27.792029+0000 mon.a (mon.0) 1707 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:18:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:28 vm04 bash[28289]: audit 2026-03-10T10:18:28.451729+0000 mon.c (mon.2) 286 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:28 vm04 bash[28289]: audit 2026-03-10T10:18:28.475762+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-23"}]': finished
2026-03-10T10:18:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:28 vm04 bash[28289]: audit 2026-03-10T10:18:28.476237+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:28 vm04 bash[28289]: audit 2026-03-10T10:18:28.476344+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]': finished
2026-03-10T10:18:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:28 vm04 bash[28289]: audit 2026-03-10T10:18:28.480304+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch
2026-03-10T10:18:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:28 vm04 bash[28289]: cluster 2026-03-10T10:18:28.490440+0000 mon.a (mon.0) 1711 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in
2026-03-10T10:18:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:28 vm04 bash[28289]: audit 2026-03-10T10:18:28.514241+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]: dispatch
2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:28 vm04 bash[28289]: audit 2026-03-10T10:18:28.514317+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:27.791186+0000 mon.a (mon.0) 1706 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:27.791186+0000 mon.a (mon.0) 1706 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:27.792029+0000 mon.a (mon.0) 1707 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:27.792029+0000 mon.a (mon.0) 1707 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.451729+0000 mon.c (mon.2) 286 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.451729+0000 mon.c (mon.2) 286 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.475762+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-23"}]': finished 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.475762+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-23"}]': finished 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.476237+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.476237+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.476344+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.476344+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.480304+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.480304+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: cluster 2026-03-10T10:18:28.490440+0000 mon.a (mon.0) 1711 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: cluster 2026-03-10T10:18:28.490440+0000 mon.a (mon.0) 1711 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.514241+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]: dispatch 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.514241+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]: dispatch 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.514317+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:29.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:28 vm04 bash[20742]: audit 2026-03-10T10:18:28.514317+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:27.791186+0000 mon.a (mon.0) 1706 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:27.791186+0000 mon.a (mon.0) 1706 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:27.792029+0000 mon.a (mon.0) 1707 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:27.792029+0000 mon.a (mon.0) 1707 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.451729+0000 mon.c (mon.2) 286 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.451729+0000 mon.c (mon.2) 286 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.475762+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-23"}]': finished 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.475762+0000 mon.a (mon.0) 1708 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-23"}]': finished 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.476237+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.476237+0000 mon.a (mon.0) 1709 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.476344+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.476344+0000 mon.a (mon.0) 1710 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.480304+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.480304+0000 mon.b (mon.1) 170 : audit [INF] from='client.? 192.168.123.104:0/1388574959' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: cluster 2026-03-10T10:18:28.490440+0000 mon.a (mon.0) 1711 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: cluster 2026-03-10T10:18:28.490440+0000 mon.a (mon.0) 1711 : cluster [DBG] osdmap e191: 8 total, 8 up, 8 in 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.514241+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]: dispatch 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.514241+0000 mon.a (mon.0) 1712 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]: dispatch 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.514317+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:28 vm07 bash[23367]: audit 2026-03-10T10:18:28.514317+0000 mon.a (mon.0) 1713 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]: dispatch 2026-03-10T10:18:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:28.305213+0000 mgr.y (mgr.24422) 203 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:28.305213+0000 mgr.y (mgr.24422) 203 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: cluster 2026-03-10T10:18:28.403308+0000 mgr.y (mgr.24422) 204 : cluster [DBG] pgmap v242: 353 pgs: 17 creating+peering, 31 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:18:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: cluster 2026-03-10T10:18:28.403308+0000 mgr.y (mgr.24422) 204 : cluster [DBG] pgmap v242: 353 pgs: 17 creating+peering, 31 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.452442+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.452442+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: cluster 2026-03-10T10:18:29.476407+0000 mon.a (mon.0) 1714 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: cluster 2026-03-10T10:18:29.476407+0000 mon.a (mon.0) 1714 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.480282+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]': finished 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.480282+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]': finished 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.480387+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.480387+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: cluster 2026-03-10T10:18:29.485533+0000 mon.a (mon.0) 1717 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: cluster 2026-03-10T10:18:29.485533+0000 mon.a (mon.0) 1717 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.488143+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.104:0/1804647699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.488143+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.104:0/1804647699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.494436+0000 mon.a (mon.0) 1718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.494436+0000 mon.a (mon.0) 1718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.515039+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.515039+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.516801+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.516801+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 
192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.517853+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.517853+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.518500+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.518500+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.519501+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.519501+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.520425+0000 mon.a (mon.0) 1721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:29 vm04 bash[28289]: audit 2026-03-10T10:18:29.520425+0000 mon.a (mon.0) 1721 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:28.305213+0000 mgr.y (mgr.24422) 203 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:28.305213+0000 mgr.y (mgr.24422) 203 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: cluster 2026-03-10T10:18:28.403308+0000 mgr.y (mgr.24422) 204 : cluster [DBG] pgmap v242: 353 pgs: 17 creating+peering, 31 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: cluster 2026-03-10T10:18:28.403308+0000 mgr.y (mgr.24422) 204 : cluster [DBG] pgmap v242: 353 pgs: 17 creating+peering, 31 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.452442+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.452442+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: cluster 2026-03-10T10:18:29.476407+0000 mon.a (mon.0) 1714 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: cluster 2026-03-10T10:18:29.476407+0000 mon.a (mon.0) 1714 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.480282+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]': finished 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.480282+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]': finished 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.480387+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.480387+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: cluster 2026-03-10T10:18:29.485533+0000 mon.a (mon.0) 1717 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: cluster 2026-03-10T10:18:29.485533+0000 mon.a (mon.0) 1717 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.488143+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.104:0/1804647699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.488143+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.104:0/1804647699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.494436+0000 mon.a (mon.0) 1718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.494436+0000 mon.a (mon.0) 1718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.515039+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.515039+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.516801+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.516801+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 
192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.517853+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.517853+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.518500+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.518500+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.519501+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.519501+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.520425+0000 mon.a (mon.0) 1721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:29 vm04 bash[20742]: audit 2026-03-10T10:18:29.520425+0000 mon.a (mon.0) 1721 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:28.305213+0000 mgr.y (mgr.24422) 203 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:28.305213+0000 mgr.y (mgr.24422) 203 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: cluster 2026-03-10T10:18:28.403308+0000 mgr.y (mgr.24422) 204 : cluster [DBG] pgmap v242: 353 pgs: 17 creating+peering, 31 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: cluster 2026-03-10T10:18:28.403308+0000 mgr.y (mgr.24422) 204 : cluster [DBG] pgmap v242: 353 pgs: 17 creating+peering, 31 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 287 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.452442+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.452442+0000 mon.c (mon.2) 287 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: cluster 2026-03-10T10:18:29.476407+0000 mon.a (mon.0) 1714 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: cluster 2026-03-10T10:18:29.476407+0000 mon.a (mon.0) 1714 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.480282+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]': finished 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.480282+0000 mon.a (mon.0) 1715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-23", "mode": "writeback"}]': finished 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.480387+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.480387+0000 mon.a (mon.0) 1716 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafe_vm04-59252-33"}]': finished 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: cluster 2026-03-10T10:18:29.485533+0000 mon.a (mon.0) 1717 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T10:18:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: cluster 2026-03-10T10:18:29.485533+0000 mon.a (mon.0) 1717 : cluster [DBG] osdmap e192: 8 total, 8 up, 8 in 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.488143+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.104:0/1804647699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.488143+0000 mon.c (mon.2) 288 : audit [INF] from='client.? 192.168.123.104:0/1804647699' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.494436+0000 mon.a (mon.0) 1718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.494436+0000 mon.a (mon.0) 1718 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.515039+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.515039+0000 mon.b (mon.1) 171 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.516801+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.516801+0000 mon.b (mon.1) 172 : audit [INF] from='client.? 
192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.517853+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.517853+0000 mon.b (mon.1) 173 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.518500+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.518500+0000 mon.a (mon.0) 1719 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.519501+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.519501+0000 mon.a (mon.0) 1720 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.520425+0000 mon.a (mon.0) 1721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:30.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:29 vm07 bash[23367]: audit 2026-03-10T10:18:29.520425+0000 mon.a (mon.0) 1721 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: audit 2026-03-10T10:18:30.453300+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: audit 2026-03-10T10:18:30.453300+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: audit 2026-03-10T10:18:30.484764+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: audit 2026-03-10T10:18:30.484764+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: audit 2026-03-10T10:18:30.484930+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: audit 2026-03-10T10:18:30.484930+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: cluster 2026-03-10T10:18:30.635891+0000 mon.a (mon.0) 1724 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: cluster 2026-03-10T10:18:30.635891+0000 mon.a (mon.0) 1724 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: audit 2026-03-10T10:18:30.677674+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: audit 2026-03-10T10:18:30.677674+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: audit 2026-03-10T10:18:30.680375+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:31 vm07 bash[23367]: audit 2026-03-10T10:18:30.680375+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: audit 2026-03-10T10:18:30.453300+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: audit 2026-03-10T10:18:30.453300+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: audit 2026-03-10T10:18:30.484764+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: audit 2026-03-10T10:18:30.484764+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: audit 2026-03-10T10:18:30.484930+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: audit 2026-03-10T10:18:30.484930+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: cluster 2026-03-10T10:18:30.635891+0000 mon.a (mon.0) 1724 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: cluster 2026-03-10T10:18:30.635891+0000 mon.a (mon.0) 1724 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: audit 2026-03-10T10:18:30.677674+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: audit 2026-03-10T10:18:30.677674+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 
192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: audit 2026-03-10T10:18:30.680375+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:31 vm04 bash[28289]: audit 2026-03-10T10:18:30.680375+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: audit 2026-03-10T10:18:30.453300+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: audit 2026-03-10T10:18:30.453300+0000 mon.c (mon.2) 289 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: audit 2026-03-10T10:18:30.484764+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: audit 2026-03-10T10:18:30.484764+0000 mon.a (mon.0) 1722 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-40","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: audit 2026-03-10T10:18:30.484930+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: audit 2026-03-10T10:18:30.484930+0000 mon.a (mon.0) 1723 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValue_vm04-59252-34", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: cluster 2026-03-10T10:18:30.635891+0000 mon.a (mon.0) 1724 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: cluster 2026-03-10T10:18:30.635891+0000 mon.a (mon.0) 1724 : cluster [DBG] osdmap e193: 8 total, 8 up, 8 in 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: audit 2026-03-10T10:18:30.677674+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: audit 2026-03-10T10:18:30.677674+0000 mon.b (mon.1) 174 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: audit 2026-03-10T10:18:30.680375+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:31.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:31 vm04 bash[20742]: audit 2026-03-10T10:18:30.680375+0000 mon.a (mon.0) 1725 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: cluster 2026-03-10T10:18:30.403692+0000 mgr.y (mgr.24422) 205 : cluster [DBG] pgmap v245: 385 pgs: 32 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: cluster 2026-03-10T10:18:30.403692+0000 mgr.y (mgr.24422) 205 : cluster [DBG] pgmap v245: 385 pgs: 32 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.086361+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.086361+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: cluster 2026-03-10T10:18:31.413848+0000 mon.a (mon.0) 1727 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: cluster 2026-03-10T10:18:31.413848+0000 mon.a (mon.0) 1727 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.454207+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.454207+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.488856+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.488856+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.526601+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.104:0/1005334568' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.526601+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.104:0/1005334568' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: cluster 2026-03-10T10:18:31.527736+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: cluster 2026-03-10T10:18:31.527736+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.528548+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]: dispatch 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.528548+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]: dispatch 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.529423+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:32 vm07 bash[23367]: audit 2026-03-10T10:18:31.529423+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: cluster 2026-03-10T10:18:30.403692+0000 mgr.y (mgr.24422) 205 : cluster [DBG] pgmap v245: 385 pgs: 32 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: cluster 2026-03-10T10:18:30.403692+0000 mgr.y (mgr.24422) 205 : cluster [DBG] pgmap v245: 385 pgs: 32 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.086361+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.086361+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: cluster 2026-03-10T10:18:31.413848+0000 mon.a (mon.0) 1727 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: cluster 2026-03-10T10:18:31.413848+0000 mon.a (mon.0) 1727 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.454207+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.454207+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.488856+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.488856+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.526601+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.104:0/1005334568' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.526601+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.104:0/1005334568' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: cluster 2026-03-10T10:18:31.527736+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: cluster 2026-03-10T10:18:31.527736+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.528548+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.528548+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.529423+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:32 vm04 bash[28289]: audit 2026-03-10T10:18:31.529423+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: cluster 2026-03-10T10:18:30.403692+0000 mgr.y (mgr.24422) 205 : cluster [DBG] pgmap v245: 385 pgs: 32 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: cluster 2026-03-10T10:18:30.403692+0000 mgr.y (mgr.24422) 205 : cluster [DBG] pgmap v245: 385 pgs: 32 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.086361+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.086361+0000 mon.a (mon.0) 1726 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: cluster 2026-03-10T10:18:31.413848+0000 mon.a (mon.0) 1727 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: cluster 2026-03-10T10:18:31.413848+0000 mon.a (mon.0) 1727 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.454207+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.454207+0000 mon.c (mon.2) 290 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.488856+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.488856+0000 mon.a (mon.0) 1728 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.526601+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 
192.168.123.104:0/1005334568' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.526601+0000 mon.b (mon.1) 175 : audit [INF] from='client.? 192.168.123.104:0/1005334568' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: cluster 2026-03-10T10:18:31.527736+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: cluster 2026-03-10T10:18:31.527736+0000 mon.a (mon.0) 1729 : cluster [DBG] osdmap e194: 8 total, 8 up, 8 in 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.528548+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]: dispatch 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.528548+0000 mon.a (mon.0) 1730 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]: dispatch 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.529423+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:32 vm04 bash[20742]: audit 2026-03-10T10:18:31.529423+0000 mon.a (mon.0) 1731 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:18:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:18:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:18:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: audit 2026-03-10T10:18:32.454964+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: audit 2026-03-10T10:18:32.454964+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: cluster 2026-03-10T10:18:32.489050+0000 mon.a (mon.0) 1732 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: cluster 2026-03-10T10:18:32.489050+0000 mon.a (mon.0) 1732 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: audit 2026-03-10T10:18:32.492138+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: audit 2026-03-10T10:18:32.492138+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: audit 2026-03-10T10:18:32.492518+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]': finished 2026-03-10T10:18:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: audit 2026-03-10T10:18:32.492518+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]': finished 2026-03-10T10:18:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: audit 2026-03-10T10:18:32.492638+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: audit 2026-03-10T10:18:32.492638+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: cluster 2026-03-10T10:18:32.500106+0000 mon.a (mon.0) 1736 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:33 vm04 bash[28289]: cluster 2026-03-10T10:18:32.500106+0000 mon.a (mon.0) 1736 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: audit 2026-03-10T10:18:32.454964+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: audit 2026-03-10T10:18:32.454964+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: cluster 2026-03-10T10:18:32.489050+0000 mon.a (mon.0) 1732 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: cluster 2026-03-10T10:18:32.489050+0000 mon.a (mon.0) 1732 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: audit 2026-03-10T10:18:32.492138+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: audit 2026-03-10T10:18:32.492138+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: audit 2026-03-10T10:18:32.492518+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]': finished 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: audit 2026-03-10T10:18:32.492518+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]': finished 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: audit 2026-03-10T10:18:32.492638+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: audit 2026-03-10T10:18:32.492638+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: cluster 2026-03-10T10:18:32.500106+0000 mon.a (mon.0) 1736 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T10:18:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:33 vm04 bash[20742]: cluster 2026-03-10T10:18:32.500106+0000 mon.a (mon.0) 1736 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: audit 2026-03-10T10:18:32.454964+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: audit 2026-03-10T10:18:32.454964+0000 mon.c (mon.2) 291 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: cluster 2026-03-10T10:18:32.489050+0000 mon.a (mon.0) 1732 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: cluster 2026-03-10T10:18:32.489050+0000 mon.a (mon.0) 1732 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: audit 2026-03-10T10:18:32.492138+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: audit 2026-03-10T10:18:32.492138+0000 mon.a (mon.0) 1733 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValue_vm04-59252-34", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: audit 2026-03-10T10:18:32.492518+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]': finished 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: audit 2026-03-10T10:18:32.492518+0000 mon.a (mon.0) 1734 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-23"}]': finished 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: audit 2026-03-10T10:18:32.492638+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: audit 2026-03-10T10:18:32.492638+0000 mon.a (mon.0) 1735 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: cluster 2026-03-10T10:18:32.500106+0000 mon.a (mon.0) 1736 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T10:18:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:33 vm07 bash[23367]: cluster 2026-03-10T10:18:32.500106+0000 mon.a (mon.0) 1736 : cluster [DBG] osdmap e195: 8 total, 8 up, 8 in 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: cluster 2026-03-10T10:18:32.404152+0000 mgr.y (mgr.24422) 206 : cluster [DBG] pgmap v248: 417 pgs: 64 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: cluster 2026-03-10T10:18:32.404152+0000 mgr.y (mgr.24422) 206 : cluster [DBG] pgmap v248: 417 pgs: 64 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: audit 2026-03-10T10:18:33.455891+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: audit 2026-03-10T10:18:33.455891+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: cluster 2026-03-10T10:18:33.660863+0000 mon.a (mon.0) 1737 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: cluster 2026-03-10T10:18:33.660863+0000 mon.a (mon.0) 1737 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: audit 2026-03-10T10:18:33.661419+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.104:0/3233088854' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: audit 2026-03-10T10:18:33.661419+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 
192.168.123.104:0/3233088854' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: audit 2026-03-10T10:18:33.662888+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: audit 2026-03-10T10:18:33.662888+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: audit 2026-03-10T10:18:34.456797+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:34 vm04 bash[28289]: audit 2026-03-10T10:18:34.456797+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: cluster 2026-03-10T10:18:32.404152+0000 mgr.y (mgr.24422) 206 : cluster [DBG] pgmap v248: 417 pgs: 64 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: cluster 2026-03-10T10:18:32.404152+0000 mgr.y (mgr.24422) 206 : cluster [DBG] pgmap v248: 417 pgs: 64 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: audit 2026-03-10T10:18:33.455891+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: audit 2026-03-10T10:18:33.455891+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: cluster 2026-03-10T10:18:33.660863+0000 mon.a (mon.0) 1737 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: cluster 2026-03-10T10:18:33.660863+0000 mon.a (mon.0) 1737 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: audit 2026-03-10T10:18:33.661419+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 
192.168.123.104:0/3233088854' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: audit 2026-03-10T10:18:33.661419+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.104:0/3233088854' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: audit 2026-03-10T10:18:33.662888+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: audit 2026-03-10T10:18:33.662888+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: audit 2026-03-10T10:18:34.456797+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:34.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:34 vm04 bash[20742]: audit 2026-03-10T10:18:34.456797+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: cluster 2026-03-10T10:18:32.404152+0000 mgr.y (mgr.24422) 206 : cluster [DBG] pgmap v248: 417 pgs: 64 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: cluster 2026-03-10T10:18:32.404152+0000 mgr.y (mgr.24422) 206 : cluster [DBG] pgmap v248: 417 pgs: 64 unknown, 17 creating+peering, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 318 active+clean; 459 KiB data, 670 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:18:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: audit 2026-03-10T10:18:33.455891+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: audit 2026-03-10T10:18:33.455891+0000 mon.c (mon.2) 292 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: cluster 2026-03-10T10:18:33.660863+0000 mon.a (mon.0) 1737 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T10:18:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: cluster 2026-03-10T10:18:33.660863+0000 mon.a (mon.0) 1737 : cluster [DBG] osdmap e196: 8 total, 8 up, 8 in 2026-03-10T10:18:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: audit 2026-03-10T10:18:33.661419+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.104:0/3233088854' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: audit 2026-03-10T10:18:33.661419+0000 mon.c (mon.2) 293 : audit [INF] from='client.? 192.168.123.104:0/3233088854' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:35.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: audit 2026-03-10T10:18:33.662888+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:35.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: audit 2026-03-10T10:18:33.662888+0000 mon.a (mon.0) 1738 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:35.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: audit 2026-03-10T10:18:34.456797+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:35.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:34 vm07 bash[23367]: audit 2026-03-10T10:18:34.456797+0000 mon.c (mon.2) 294 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: cluster 2026-03-10T10:18:34.404942+0000 mgr.y (mgr.24422) 207 : cluster [DBG] pgmap v251: 425 pgs: 3 creating+activating, 16 creating+peering, 21 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: cluster 2026-03-10T10:18:34.404942+0000 mgr.y (mgr.24422) 207 : cluster [DBG] pgmap v251: 425 pgs: 3 creating+activating, 16 creating+peering, 21 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: audit 2026-03-10T10:18:34.642779+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: audit 2026-03-10T10:18:34.642779+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: cluster 2026-03-10T10:18:34.652934+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: cluster 2026-03-10T10:18:34.652934+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: audit 2026-03-10T10:18:34.653992+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: audit 2026-03-10T10:18:34.653992+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: audit 2026-03-10T10:18:34.678045+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: audit 2026-03-10T10:18:34.678045+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: audit 2026-03-10T10:18:34.681329+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: audit 2026-03-10T10:18:34.681329+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: audit 2026-03-10T10:18:35.457518+0000 mon.c (mon.2) 295 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:35 vm04 bash[28289]: audit 2026-03-10T10:18:35.457518+0000 mon.c (mon.2) 295 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: cluster 2026-03-10T10:18:34.404942+0000 mgr.y (mgr.24422) 207 : cluster [DBG] pgmap v251: 425 pgs: 3 creating+activating, 16 creating+peering, 21 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: cluster 2026-03-10T10:18:34.404942+0000 mgr.y (mgr.24422) 207 : cluster [DBG] pgmap v251: 425 pgs: 3 creating+activating, 16 creating+peering, 21 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: audit 2026-03-10T10:18:34.642779+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: audit 2026-03-10T10:18:34.642779+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: cluster 2026-03-10T10:18:34.652934+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T10:18:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: cluster 2026-03-10T10:18:34.652934+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T10:18:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: audit 2026-03-10T10:18:34.653992+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: audit 2026-03-10T10:18:34.653992+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: audit 2026-03-10T10:18:34.678045+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: audit 2026-03-10T10:18:34.678045+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 
192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: audit 2026-03-10T10:18:34.681329+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: audit 2026-03-10T10:18:34.681329+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: audit 2026-03-10T10:18:35.457518+0000 mon.c (mon.2) 295 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:35 vm04 bash[20742]: audit 2026-03-10T10:18:35.457518+0000 mon.c (mon.2) 295 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: cluster 2026-03-10T10:18:34.404942+0000 mgr.y (mgr.24422) 207 : cluster [DBG] pgmap v251: 425 pgs: 3 creating+activating, 16 creating+peering, 21 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: cluster 2026-03-10T10:18:34.404942+0000 mgr.y (mgr.24422) 207 : cluster [DBG] pgmap v251: 425 pgs: 3 creating+activating, 16 creating+peering, 21 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: audit 2026-03-10T10:18:34.642779+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: audit 2026-03-10T10:18:34.642779+0000 mon.a (mon.0) 1739 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-42","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: cluster 2026-03-10T10:18:34.652934+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: cluster 2026-03-10T10:18:34.652934+0000 mon.a (mon.0) 1740 : cluster [DBG] osdmap e197: 8 total, 8 up, 8 in 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: audit 2026-03-10T10:18:34.653992+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: audit 2026-03-10T10:18:34.653992+0000 mon.a (mon.0) 1741 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: audit 2026-03-10T10:18:34.678045+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: audit 2026-03-10T10:18:34.678045+0000 mon.b (mon.1) 176 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: audit 2026-03-10T10:18:34.681329+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: audit 2026-03-10T10:18:34.681329+0000 mon.a (mon.0) 1742 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: audit 2026-03-10T10:18:35.457518+0000 mon.c (mon.2) 295 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:35 vm07 bash[23367]: audit 2026-03-10T10:18:35.457518+0000 mon.c (mon.2) 295 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:35.738898+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:35.738898+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:35.738934+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:35.738934+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: cluster 2026-03-10T10:18:35.746086+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: cluster 2026-03-10T10:18:35.746086+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:35.748015+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:35.748015+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:35.763432+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:35.763432+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:35.764442+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:35.764442+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: cluster 2026-03-10T10:18:36.414730+0000 mon.a (mon.0) 1748 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: cluster 2026-03-10T10:18:36.414730+0000 mon.a (mon.0) 1748 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:36.458252+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:36 vm04 bash[28289]: audit 2026-03-10T10:18:36.458252+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:35.738898+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:35.738898+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:35.738934+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:35.738934+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: cluster 2026-03-10T10:18:35.746086+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: cluster 2026-03-10T10:18:35.746086+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:35.748015+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:35.748015+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:35.763432+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:35.763432+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:35.764442+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:35.764442+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: cluster 2026-03-10T10:18:36.414730+0000 mon.a (mon.0) 1748 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: cluster 2026-03-10T10:18:36.414730+0000 mon.a (mon.0) 1748 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:36.458252+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:36 vm04 bash[20742]: audit 2026-03-10T10:18:36.458252+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:35.738898+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:35.738898+0000 mon.a (mon.0) 1743 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-25","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:35.738934+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:35.738934+0000 mon.a (mon.0) 1744 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: cluster 2026-03-10T10:18:35.746086+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: cluster 2026-03-10T10:18:35.746086+0000 mon.a (mon.0) 1745 : cluster [DBG] osdmap e198: 8 total, 8 up, 8 in 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:35.748015+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:35.748015+0000 mon.b (mon.1) 177 : audit [INF] from='client.? 192.168.123.104:0/1119534468' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:35.763432+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:35.763432+0000 mon.a (mon.0) 1746 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]: dispatch 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:35.764442+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:35.764442+0000 mon.a (mon.0) 1747 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: cluster 2026-03-10T10:18:36.414730+0000 mon.a (mon.0) 1748 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: cluster 2026-03-10T10:18:36.414730+0000 mon.a (mon.0) 1748 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:36.458252+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:36 vm07 bash[23367]: audit 2026-03-10T10:18:36.458252+0000 mon.c (mon.2) 296 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: cluster 2026-03-10T10:18:36.405347+0000 mgr.y (mgr.24422) 208 : cluster [DBG] pgmap v254: 481 pgs: 11 creating+peering, 85 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: cluster 2026-03-10T10:18:36.405347+0000 mgr.y (mgr.24422) 208 : cluster [DBG] pgmap v254: 481 pgs: 11 creating+peering, 85 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.742583+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.742583+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.742667+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.742667+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: cluster 2026-03-10T10:18:36.764242+0000 mon.a (mon.0) 1751 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: cluster 2026-03-10T10:18:36.764242+0000 mon.a (mon.0) 1751 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.799843+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.799843+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.801280+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.801280+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.807171+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.807171+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.810829+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:36.810829+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:37.459081+0000 mon.c (mon.2) 297 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:37.459081+0000 mon.c (mon.2) 297 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:37.746328+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:37.746328+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:37.746382+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:37.746382+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: cluster 2026-03-10T10:18:37.756752+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: cluster 2026-03-10T10:18:37.756752+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:37.759020+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:37.759020+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:37.759184+0000 mon.a (mon.0) 1760 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:37 vm04 bash[28289]: audit 2026-03-10T10:18:37.759184+0000 mon.a (mon.0) 1760 : audit [INF] from='client.? 
192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: cluster 2026-03-10T10:18:36.405347+0000 mgr.y (mgr.24422) 208 : cluster [DBG] pgmap v254: 481 pgs: 11 creating+peering, 85 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: cluster 2026-03-10T10:18:36.405347+0000 mgr.y (mgr.24422) 208 : cluster [DBG] pgmap v254: 481 pgs: 11 creating+peering, 85 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.742583+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.742583+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.742667+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.742667+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: cluster 2026-03-10T10:18:36.764242+0000 mon.a (mon.0) 1751 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: cluster 2026-03-10T10:18:36.764242+0000 mon.a (mon.0) 1751 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.799843+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.799843+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.801280+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.801280+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.807171+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.807171+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.810829+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:36.810829+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:37.459081+0000 mon.c (mon.2) 297 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:37.459081+0000 mon.c (mon.2) 297 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:37.746328+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:37.746328+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:37.746382+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:37.746382+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: cluster 2026-03-10T10:18:37.756752+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: cluster 2026-03-10T10:18:37.756752+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:37.759020+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:37.759020+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:37.759184+0000 mon.a (mon.0) 1760 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:37 vm04 bash[20742]: audit 2026-03-10T10:18:37.759184+0000 mon.a (mon.0) 1760 : audit [INF] from='client.? 
192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: cluster 2026-03-10T10:18:36.405347+0000 mgr.y (mgr.24422) 208 : cluster [DBG] pgmap v254: 481 pgs: 11 creating+peering, 85 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: cluster 2026-03-10T10:18:36.405347+0000 mgr.y (mgr.24422) 208 : cluster [DBG] pgmap v254: 481 pgs: 11 creating+peering, 85 unknown, 10 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 367 active+clean; 459 KiB data, 674 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.742583+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.742583+0000 mon.a (mon.0) 1749 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValue_vm04-59252-34"}]': finished 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.742667+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.742667+0000 mon.a (mon.0) 1750 : audit [INF] from='client.? 192.168.123.104:0/2097493993' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RmXattrPP_vm04-59259-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: cluster 2026-03-10T10:18:36.764242+0000 mon.a (mon.0) 1751 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: cluster 2026-03-10T10:18:36.764242+0000 mon.a (mon.0) 1751 : cluster [DBG] osdmap e199: 8 total, 8 up, 8 in 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.799843+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.799843+0000 mon.a (mon.0) 1752 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.801280+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.801280+0000 mon.a (mon.0) 1753 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.807171+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.807171+0000 mon.a (mon.0) 1754 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.810829+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:36.810829+0000 mon.a (mon.0) 1755 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:37.459081+0000 mon.c (mon.2) 297 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:37.459081+0000 mon.c (mon.2) 297 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:37.746328+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:37.746328+0000 mon.a (mon.0) 1756 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:37.746382+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:37.746382+0000 mon.a (mon.0) 1757 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-Flush_vm04-59252-35", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: cluster 2026-03-10T10:18:37.756752+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: cluster 2026-03-10T10:18:37.756752+0000 mon.a (mon.0) 1758 : cluster [DBG] osdmap e200: 8 total, 8 up, 8 in 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:37.759020+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:37.759020+0000 mon.a (mon.0) 1759 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:37.759184+0000 mon.a (mon.0) 1760 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:37 vm07 bash[23367]: audit 2026-03-10T10:18:37.759184+0000 mon.a (mon.0) 1760 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:38.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:18:38 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:18:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:38 vm04 bash[20742]: audit 2026-03-10T10:18:38.459809+0000 mon.c (mon.2) 298 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:38 vm04 bash[20742]: audit 2026-03-10T10:18:38.459809+0000 mon.c (mon.2) 298 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:38 vm04 bash[28289]: audit 2026-03-10T10:18:38.459809+0000 mon.c (mon.2) 298 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:38 vm04 bash[28289]: audit 2026-03-10T10:18:38.459809+0000 mon.c (mon.2) 298 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:38 vm07 bash[23367]: audit 2026-03-10T10:18:38.459809+0000 mon.c (mon.2) 298 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:38 vm07 bash[23367]: audit 2026-03-10T10:18:38.459809+0000 mon.c (mon.2) 298 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:38.315503+0000 mgr.y (mgr.24422) 209 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:38.315503+0000 mgr.y (mgr.24422) 209 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: cluster 2026-03-10T10:18:38.406099+0000 mgr.y (mgr.24422) 210 : cluster [DBG] pgmap v257: 449 pgs: 11 creating+peering, 23 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 399 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: cluster 2026-03-10T10:18:38.406099+0000 mgr.y (mgr.24422) 210 : cluster [DBG] pgmap v257: 449 pgs: 11 creating+peering, 23 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 399 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:38.883916+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:38.883916+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: cluster 2026-03-10T10:18:38.895666+0000 mon.a (mon.0) 1762 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: cluster 2026-03-10T10:18:38.895666+0000 mon.a (mon.0) 1762 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:38.898246+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]: dispatch 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:38.898246+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]: dispatch 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:39.460659+0000 mon.c (mon.2) 299 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:39.460659+0000 mon.c (mon.2) 299 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: cluster 2026-03-10T10:18:39.884005+0000 mon.a (mon.0) 1764 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: cluster 2026-03-10T10:18:39.884005+0000 mon.a (mon.0) 1764 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:39.888457+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:39.888457+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:39.888604+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]': finished 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:39 vm04 bash[20742]: audit 2026-03-10T10:18:39.888604+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]': finished 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:38.315503+0000 mgr.y (mgr.24422) 209 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:38.315503+0000 mgr.y (mgr.24422) 209 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: cluster 2026-03-10T10:18:38.406099+0000 mgr.y (mgr.24422) 210 : cluster [DBG] pgmap v257: 449 pgs: 11 creating+peering, 23 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 399 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: cluster 2026-03-10T10:18:38.406099+0000 mgr.y (mgr.24422) 210 : cluster [DBG] pgmap v257: 449 pgs: 11 creating+peering, 23 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 399 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:38.883916+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:38.883916+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: cluster 2026-03-10T10:18:38.895666+0000 mon.a (mon.0) 1762 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: cluster 2026-03-10T10:18:38.895666+0000 mon.a (mon.0) 1762 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:38.898246+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]: dispatch 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:38.898246+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]: dispatch 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:39.460659+0000 mon.c (mon.2) 299 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:39.460659+0000 mon.c (mon.2) 299 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: cluster 2026-03-10T10:18:39.884005+0000 mon.a (mon.0) 1764 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: cluster 2026-03-10T10:18:39.884005+0000 mon.a (mon.0) 1764 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:39.888457+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:39.888457+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:39.888604+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]': finished 2026-03-10T10:18:40.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:39 vm04 bash[28289]: audit 2026-03-10T10:18:39.888604+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]': finished 2026-03-10T10:18:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:38.315503+0000 mgr.y (mgr.24422) 209 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:38.315503+0000 mgr.y (mgr.24422) 209 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:18:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: cluster 2026-03-10T10:18:38.406099+0000 mgr.y (mgr.24422) 210 : cluster [DBG] pgmap v257: 449 pgs: 11 creating+peering, 23 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 399 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:18:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: cluster 2026-03-10T10:18:38.406099+0000 mgr.y (mgr.24422) 210 : cluster [DBG] pgmap v257: 449 pgs: 11 creating+peering, 23 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 399 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:38.883916+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:38.883916+0000 mon.a (mon.0) 1761 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: cluster 2026-03-10T10:18:38.895666+0000 mon.a (mon.0) 1762 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: cluster 2026-03-10T10:18:38.895666+0000 mon.a (mon.0) 1762 : cluster [DBG] osdmap e201: 8 total, 8 up, 8 in 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:38.898246+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]: dispatch 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:38.898246+0000 mon.a (mon.0) 1763 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]: dispatch 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:39.460659+0000 mon.c (mon.2) 299 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:39.460659+0000 mon.c (mon.2) 299 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: cluster 2026-03-10T10:18:39.884005+0000 mon.a (mon.0) 1764 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: cluster 2026-03-10T10:18:39.884005+0000 mon.a (mon.0) 1764 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:39.888457+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:39.888457+0000 mon.a (mon.0) 1765 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "Flush_vm04-59252-35", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:39.888604+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]': finished 2026-03-10T10:18:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:39 vm07 bash[23367]: audit 2026-03-10T10:18:39.888604+0000 mon.a (mon.0) 1766 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-25", "mode": "writeback"}]': finished 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: cluster 2026-03-10T10:18:39.905623+0000 mon.a (mon.0) 1767 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: cluster 2026-03-10T10:18:39.905623+0000 mon.a (mon.0) 1767 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: audit 2026-03-10T10:18:39.959272+0000 mon.a (mon.0) 1768 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: audit 2026-03-10T10:18:39.959272+0000 mon.a (mon.0) 1768 : audit [INF] from='client.? 
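The audit trail above is the monitor's view of the rados_api_tests cache-tier setup: pool test-rados-api-vm04-59491-25 is attached as a writeback cache in front of test-rados-api-vm04-59491-6, and CACHE_POOL_NO_HIT_SET fires because the test does not configure a hit_set on the cache pool. A sketch of the CLI equivalents of these audited mon commands, assuming the osd tier add step was dispatched earlier in the run, before this excerpt:

    # attach the cache pool to the base pool (assumed earlier step, not shown here)
    ceph osd tier add test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-25
    # route client I/O for the base pool through the cache pool
    ceph osd tier set-overlay test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-25
    # absorb writes in the cache tier, flushing to the base tier in the background
    ceph osd tier cache-mode test-rados-api-vm04-59491-25 writeback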
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: audit 2026-03-10T10:18:40.461358+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: audit 2026-03-10T10:18:40.461358+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: audit 2026-03-10T10:18:40.891305+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: audit 2026-03-10T10:18:40.891305+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: cluster 2026-03-10T10:18:40.896611+0000 mon.a (mon.0) 1770 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: cluster 2026-03-10T10:18:40.896611+0000 mon.a (mon.0) 1770 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: audit 2026-03-10T10:18:40.897068+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:40 vm04 bash[20742]: audit 2026-03-10T10:18:40.897068+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: cluster 2026-03-10T10:18:39.905623+0000 mon.a (mon.0) 1767 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: cluster 2026-03-10T10:18:39.905623+0000 mon.a (mon.0) 1767 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T10:18:41.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: audit 2026-03-10T10:18:39.959272+0000 mon.a (mon.0) 1768 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: audit 2026-03-10T10:18:39.959272+0000 mon.a (mon.0) 1768 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: audit 2026-03-10T10:18:40.461358+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: audit 2026-03-10T10:18:40.461358+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: audit 2026-03-10T10:18:40.891305+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: audit 2026-03-10T10:18:40.891305+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: cluster 2026-03-10T10:18:40.896611+0000 mon.a (mon.0) 1770 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T10:18:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: cluster 2026-03-10T10:18:40.896611+0000 mon.a (mon.0) 1770 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T10:18:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: audit 2026-03-10T10:18:40.897068+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:41.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:40 vm04 bash[28289]: audit 2026-03-10T10:18:40.897068+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: cluster 2026-03-10T10:18:39.905623+0000 mon.a (mon.0) 1767 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: cluster 2026-03-10T10:18:39.905623+0000 mon.a (mon.0) 1767 : cluster [DBG] osdmap e202: 8 total, 8 up, 8 in 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: audit 2026-03-10T10:18:39.959272+0000 mon.a (mon.0) 1768 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: audit 2026-03-10T10:18:39.959272+0000 mon.a (mon.0) 1768 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: audit 2026-03-10T10:18:40.461358+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: audit 2026-03-10T10:18:40.461358+0000 mon.c (mon.2) 300 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: audit 2026-03-10T10:18:40.891305+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: audit 2026-03-10T10:18:40.891305+0000 mon.a (mon.0) 1769 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: cluster 2026-03-10T10:18:40.896611+0000 mon.a (mon.0) 1770 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: cluster 2026-03-10T10:18:40.896611+0000 mon.a (mon.0) 1770 : cluster [DBG] osdmap e203: 8 total, 8 up, 8 in 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: audit 2026-03-10T10:18:40.897068+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:40 vm07 bash[23367]: audit 2026-03-10T10:18:40.897068+0000 mon.a (mon.0) 1771 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]: dispatch 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: cluster 2026-03-10T10:18:40.406482+0000 mgr.y (mgr.24422) 211 : cluster [DBG] pgmap v260: 393 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 369 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: cluster 2026-03-10T10:18:40.406482+0000 mgr.y (mgr.24422) 211 : cluster [DBG] pgmap v260: 393 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 369 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: cluster 2026-03-10T10:18:41.415385+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: cluster 2026-03-10T10:18:41.415385+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: audit 2026-03-10T10:18:41.462008+0000 mon.c (mon.2) 301 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: audit 2026-03-10T10:18:41.462008+0000 mon.c (mon.2) 301 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: cluster 2026-03-10T10:18:41.891440+0000 mon.a (mon.0) 1773 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: cluster 2026-03-10T10:18:41.891440+0000 mon.a (mon.0) 1773 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: audit 2026-03-10T10:18:41.894766+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: audit 2026-03-10T10:18:41.894766+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: cluster 2026-03-10T10:18:41.897578+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: cluster 2026-03-10T10:18:41.897578+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: audit 2026-03-10T10:18:41.899243+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:41 vm04 bash[20742]: audit 2026-03-10T10:18:41.899243+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: cluster 2026-03-10T10:18:40.406482+0000 mgr.y (mgr.24422) 211 : cluster [DBG] pgmap v260: 393 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 369 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: cluster 2026-03-10T10:18:40.406482+0000 mgr.y (mgr.24422) 211 : cluster [DBG] pgmap v260: 393 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 369 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: cluster 2026-03-10T10:18:41.415385+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: cluster 2026-03-10T10:18:41.415385+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: audit 2026-03-10T10:18:41.462008+0000 mon.c (mon.2) 301 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: audit 2026-03-10T10:18:41.462008+0000 mon.c (mon.2) 301 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: cluster 2026-03-10T10:18:41.891440+0000 mon.a (mon.0) 1773 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: cluster 2026-03-10T10:18:41.891440+0000 mon.a (mon.0) 1773 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: audit 2026-03-10T10:18:41.894766+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: audit 2026-03-10T10:18:41.894766+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: cluster 2026-03-10T10:18:41.897578+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T10:18:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: cluster 2026-03-10T10:18:41.897578+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T10:18:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: audit 2026-03-10T10:18:41.899243+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:41 vm04 bash[28289]: audit 2026-03-10T10:18:41.899243+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 
192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: cluster 2026-03-10T10:18:40.406482+0000 mgr.y (mgr.24422) 211 : cluster [DBG] pgmap v260: 393 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 369 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: cluster 2026-03-10T10:18:40.406482+0000 mgr.y (mgr.24422) 211 : cluster [DBG] pgmap v260: 393 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 369 active+clean; 459 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: cluster 2026-03-10T10:18:41.415385+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: cluster 2026-03-10T10:18:41.415385+0000 mon.a (mon.0) 1772 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: audit 2026-03-10T10:18:41.462008+0000 mon.c (mon.2) 301 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: audit 2026-03-10T10:18:41.462008+0000 mon.c (mon.2) 301 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: cluster 2026-03-10T10:18:41.891440+0000 mon.a (mon.0) 1773 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: cluster 2026-03-10T10:18:41.891440+0000 mon.a (mon.0) 1773 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: audit 2026-03-10T10:18:41.894766+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: audit 2026-03-10T10:18:41.894766+0000 mon.a (mon.0) 1774 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-25"}]': finished 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: cluster 2026-03-10T10:18:41.897578+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: cluster 2026-03-10T10:18:41.897578+0000 mon.a (mon.0) 1775 : cluster [DBG] osdmap e204: 8 total, 8 up, 8 in 2026-03-10T10:18:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: audit 2026-03-10T10:18:41.899243+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:41 vm07 bash[23367]: audit 2026-03-10T10:18:41.899243+0000 mon.a (mon.0) 1776 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]: dispatch 2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311515+0000 mon.a (mon.0) 1777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311515+0000 mon.a (mon.0) 1777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch 2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311844+0000 mon.a (mon.0) 1778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]: dispatch 2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311844+0000 mon.a (mon.0) 1778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]: dispatch 2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311910+0000 mon.a (mon.0) 1779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch 2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311910+0000 mon.a (mon.0) 1779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch 2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311952+0000 mon.a (mon.0) 1780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch 
2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311515+0000 mon.a (mon.0) 1777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch
2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311844+0000 mon.a (mon.0) 1778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]: dispatch
2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311910+0000 mon.a (mon.0) 1779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch
2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311952+0000 mon.a (mon.0) 1780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch
2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.311991+0000 mon.a (mon.0) 1781 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.1d", "id": [6, 7]}]: dispatch
2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.407343+0000 mon.a (mon.0) 1782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "28"}]: dispatch
2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.462758+0000 mon.c (mon.2) 302 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.798181+0000 mon.a (mon.0) 1783 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.899176+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]': finished
2026-03-10T10:18:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.899269+0000 mon.a (mon.0) 1785 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished
2026-03-10T10:18:43.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.899445+0000 mon.a (mon.0) 1786 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]': finished
2026-03-10T10:18:43.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.899515+0000 mon.a (mon.0) 1787 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]': finished
2026-03-10T10:18:43.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.899577+0000 mon.a (mon.0) 1788 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]': finished
2026-03-10T10:18:43.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.899632+0000 mon.a (mon.0) 1789 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.1d", "id": [6, 7]}]': finished
2026-03-10T10:18:43.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.899678+0000 mon.a (mon.0) 1790 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "28"}]': finished
2026-03-10T10:18:43.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: cluster 2026-03-10T10:18:42.903188+0000 mon.a (mon.0) 1791 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in
2026-03-10T10:18:43.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:42 vm07 bash[23367]: audit 2026-03-10T10:18:42.904606+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]: dispatch
2026-03-10T10:18:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.311515+0000 mon.a (mon.0) 1777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch
2026-03-10T10:18:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.311844+0000 mon.a (mon.0) 1778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]: dispatch
2026-03-10T10:18:43.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.311910+0000 mon.a (mon.0) 1779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch
2026-03-10T10:18:43.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.311952+0000 mon.a (mon.0) 1780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch
2026-03-10T10:18:43.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.311991+0000 mon.a (mon.0) 1781 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.1d", "id": [6, 7]}]: dispatch
2026-03-10T10:18:43.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.407343+0000 mon.a (mon.0) 1782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "28"}]: dispatch
2026-03-10T10:18:43.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.462758+0000 mon.c (mon.2) 302 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:43.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.798181+0000 mon.a (mon.0) 1783 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:18:43.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.899176+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]': finished
2026-03-10T10:18:43.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.899269+0000 mon.a (mon.0) 1785 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished
2026-03-10T10:18:43.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.899445+0000 mon.a (mon.0) 1786 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]': finished
2026-03-10T10:18:43.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.899515+0000 mon.a (mon.0) 1787 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]': finished
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.899577+0000 mon.a (mon.0) 1788 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]': finished
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.899632+0000 mon.a (mon.0) 1789 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.1d", "id": [6, 7]}]': finished
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.899678+0000 mon.a (mon.0) 1790 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "28"}]': finished
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: cluster 2026-03-10T10:18:42.903188+0000 mon.a (mon.0) 1791 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:42 vm04 bash[20742]: audit 2026-03-10T10:18:42.904606+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]: dispatch
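The burst of osd pg-upmap-items commands from mgr.y is most likely the balancer module in upmap mode remapping a handful of PGs, and the pgp_num_actual update is the mgr stepping placement-group splitting toward the pool's target pg_num; both arrive as mon commands from the active mgr rather than from the test client. A hand-run equivalent of one remapping, for illustration only (normally the balancer issues these itself):

    # remap PG 39.1 from osd.3 to osd.2; arguments are the pgid followed by from/to OSD pairs
    ceph osd pg-upmap-items 39.1 3 2
    # operators adjust pgp_num; pgp_num_actual is the mgr-managed stepping of the same value
    ceph osd pool set LibRadosList_vm04-59366-1 pgp_num 28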
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.311515+0000 mon.a (mon.0) 1777 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]: dispatch
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.311844+0000 mon.a (mon.0) 1778 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]: dispatch
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.311910+0000 mon.a (mon.0) 1779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]: dispatch
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.311952+0000 mon.a (mon.0) 1780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]: dispatch
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.311991+0000 mon.a (mon.0) 1781 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.1d", "id": [6, 7]}]: dispatch
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.407343+0000 mon.a (mon.0) 1782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "28"}]: dispatch
2026-03-10T10:18:43.458 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.462758+0000 mon.c (mon.2) 302 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.798181+0000 mon.a (mon.0) 1783 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.899176+0000 mon.a (mon.0) 1784 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-Flush_vm04-59252-35"}]': finished
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.899269+0000 mon.a (mon.0) 1785 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.1", "id": [3, 2]}]': finished
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.899445+0000 mon.a (mon.0) 1786 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.d", "id": [3, 1]}]': finished
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.899515+0000 mon.a (mon.0) 1787 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.e", "id": [3, 1]}]': finished
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.899577+0000 mon.a (mon.0) 1788 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "39.13", "id": [3, 1]}]': finished
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.899632+0000 mon.a (mon.0) 1789 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "102.1d", "id": [6, 7]}]': finished
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.899678+0000 mon.a (mon.0) 1790 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "28"}]': finished
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: cluster 2026-03-10T10:18:42.903188+0000 mon.a (mon.0) 1791 : cluster [DBG] osdmap e205: 8 total, 8 up, 8 in
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:42 vm04 bash[28289]: audit 2026-03-10T10:18:42.904606+0000 mon.a (mon.0) 1792 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]: dispatch
2026-03-10T10:18:43.459 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:18:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:18:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:18:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: cluster 2026-03-10T10:18:42.406779+0000 mgr.y (mgr.24422) 212 : cluster [DBG] pgmap v263: 321 pgs: 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 305 active+clean; 458 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:42.937893+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.104:0/1549585806' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:42.941749+0000 mon.a (mon.0) 1793 : audit [INF] from='client.?
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:42.941749+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:43.463458+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:43.463458+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:43.925329+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:43.925329+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:43.925427+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:43.925427+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: cluster 2026-03-10T10:18:43.933939+0000 mon.a (mon.0) 1796 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: cluster 2026-03-10T10:18:43.933939+0000 mon.a (mon.0) 1796 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:43.946085+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:43 vm07 bash[23367]: audit 2026-03-10T10:18:43.946085+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: cluster 2026-03-10T10:18:42.406779+0000 mgr.y (mgr.24422) 212 : cluster [DBG] pgmap v263: 321 pgs: 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 305 active+clean; 458 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: cluster 2026-03-10T10:18:42.406779+0000 mgr.y (mgr.24422) 212 : cluster [DBG] pgmap v263: 321 pgs: 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 305 active+clean; 458 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:42.937893+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.104:0/1549585806' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:42.937893+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.104:0/1549585806' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:42.941749+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:42.941749+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:43.463458+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:43.463458+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:43.925329+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:43.925329+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 
192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:43.925427+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:43.925427+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: cluster 2026-03-10T10:18:43.933939+0000 mon.a (mon.0) 1796 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: cluster 2026-03-10T10:18:43.933939+0000 mon.a (mon.0) 1796 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:43.946085+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:43 vm04 bash[28289]: audit 2026-03-10T10:18:43.946085+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: cluster 2026-03-10T10:18:42.406779+0000 mgr.y (mgr.24422) 212 : cluster [DBG] pgmap v263: 321 pgs: 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 305 active+clean; 458 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: cluster 2026-03-10T10:18:42.406779+0000 mgr.y (mgr.24422) 212 : cluster [DBG] pgmap v263: 321 pgs: 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 305 active+clean; 458 KiB data, 675 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:42.937893+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 192.168.123.104:0/1549585806' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:42.937893+0000 mon.b (mon.1) 178 : audit [INF] from='client.? 
192.168.123.104:0/1549585806' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:42.941749+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:42.941749+0000 mon.a (mon.0) 1793 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:43.463458+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:43.463458+0000 mon.c (mon.2) 303 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:43.925329+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:43.925329+0000 mon.a (mon.0) 1794 : audit [INF] from='client.? 192.168.123.104:0/4273362836' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"Flush_vm04-59252-35"}]': finished 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:43.925427+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:43.925427+0000 mon.a (mon.0) 1795 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RemoveTestPP_vm04-59259-44","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: cluster 2026-03-10T10:18:43.933939+0000 mon.a (mon.0) 1796 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: cluster 2026-03-10T10:18:43.933939+0000 mon.a (mon.0) 1796 : cluster [DBG] osdmap e206: 8 total, 8 up, 8 in 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:43.946085+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:44.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:43 vm04 bash[20742]: audit 2026-03-10T10:18:43.946085+0000 mon.a (mon.0) 1797 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:43.960260+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:43.960260+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:43.962459+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:43.962459+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:43.962998+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:43.962998+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:44.464169+0000 mon.c (mon.2) 304 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:44.464169+0000 mon.c (mon.2) 304 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:44.929021+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:44.929021+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:44.929113+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:44.929113+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: cluster 2026-03-10T10:18:44.933432+0000 mon.a (mon.0) 1803 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: cluster 2026-03-10T10:18:44.933432+0000 mon.a (mon.0) 1803 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:44.935025+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm04-59252-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:44 vm07 bash[23367]: audit 2026-03-10T10:18:44.935025+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm04-59252-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:43.960260+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:43.960260+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:43.962459+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? 
192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:43.962459+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:43.962998+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:43.962998+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:44.464169+0000 mon.c (mon.2) 304 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:44.464169+0000 mon.c (mon.2) 304 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:44.929021+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:44.929021+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:44.929113+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: audit 2026-03-10T10:18:44.929113+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 
192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: cluster 2026-03-10T10:18:44.933432+0000 mon.a (mon.0) 1803 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:44 vm04 bash[20742]: cluster 2026-03-10T10:18:44.933432+0000 mon.a (mon.0) 1803 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:45 vm04 bash[20742]: audit 2026-03-10T10:18:44.935025+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm04-59252-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:45 vm04 bash[20742]: audit 2026-03-10T10:18:44.935025+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm04-59252-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:43.960260+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:43.960260+0000 mon.a (mon.0) 1798 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:43.962459+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:43.962459+0000 mon.a (mon.0) 1799 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:43.962998+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:43.962998+0000 mon.a (mon.0) 1800 : audit [INF] from='client.? 
192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:44.464169+0000 mon.c (mon.2) 304 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:44.464169+0000 mon.c (mon.2) 304 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:44.929021+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:44.929021+0000 mon.a (mon.0) 1801 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-27","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:44.929113+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:44.929113+0000 mon.a (mon.0) 1802 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsync_vm04-59252-36", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: cluster 2026-03-10T10:18:44.933432+0000 mon.a (mon.0) 1803 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: cluster 2026-03-10T10:18:44.933432+0000 mon.a (mon.0) 1803 : cluster [DBG] osdmap e207: 8 total, 8 up, 8 in 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:44.935025+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm04-59252-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:45.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:44 vm04 bash[28289]: audit 2026-03-10T10:18:44.935025+0000 mon.a (mon.0) 1804 : audit [INF] from='client.? 
192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsync_vm04-59252-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm04-59252-36"}]: dispatch 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:45 vm07 bash[23367]: cluster 2026-03-10T10:18:44.407168+0000 mgr.y (mgr.24422) 213 : cluster [DBG] pgmap v266: 353 pgs: 32 unknown, 32 creating+peering, 6 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 267 active+clean; 458 KiB data, 659 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:45 vm07 bash[23367]: cluster 2026-03-10T10:18:44.407168+0000 mgr.y (mgr.24422) 213 : cluster [DBG] pgmap v266: 353 pgs: 32 unknown, 32 creating+peering, 6 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 267 active+clean; 458 KiB data, 659 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:45 vm07 bash[23367]: cluster 2026-03-10T10:18:44.964803+0000 mon.a (mon.0) 1805 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:45 vm07 bash[23367]: cluster 2026-03-10T10:18:44.964803+0000 mon.a (mon.0) 1805 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:45 vm07 bash[23367]: audit 2026-03-10T10:18:45.464928+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:45 vm07 bash[23367]: audit 2026-03-10T10:18:45.464928+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:45 vm07 bash[23367]: cluster 2026-03-10T10:18:45.991430+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:45 vm07 bash[23367]: cluster 2026-03-10T10:18:45.991430+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:45 vm07 bash[23367]: audit 2026-03-10T10:18:45.992393+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 192.168.123.104:0/2495138609' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:46 vm07 bash[23367]: audit 2026-03-10T10:18:45.992393+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 192.168.123.104:0/2495138609' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:46 vm07 bash[23367]: audit 2026-03-10T10:18:45.997445+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:46 vm07 bash[23367]: audit 2026-03-10T10:18:45.997445+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: cluster 2026-03-10T10:18:44.407168+0000 mgr.y (mgr.24422) 213 : cluster [DBG] pgmap v266: 353 pgs: 32 unknown, 32 creating+peering, 6 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 267 active+clean; 458 KiB data, 659 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: cluster 2026-03-10T10:18:44.407168+0000 mgr.y (mgr.24422) 213 : cluster [DBG] pgmap v266: 353 pgs: 32 unknown, 32 creating+peering, 6 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 267 active+clean; 458 KiB data, 659 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: cluster 2026-03-10T10:18:44.964803+0000 mon.a (mon.0) 1805 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: cluster 2026-03-10T10:18:44.964803+0000 mon.a (mon.0) 1805 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: audit 2026-03-10T10:18:45.464928+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: audit 2026-03-10T10:18:45.464928+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: cluster 2026-03-10T10:18:45.991430+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: cluster 2026-03-10T10:18:45.991430+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: audit 2026-03-10T10:18:45.992393+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 192.168.123.104:0/2495138609' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: audit 2026-03-10T10:18:45.992393+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 
192.168.123.104:0/2495138609' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: audit 2026-03-10T10:18:45.997445+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:46 vm04 bash[20742]: audit 2026-03-10T10:18:45.997445+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: cluster 2026-03-10T10:18:44.407168+0000 mgr.y (mgr.24422) 213 : cluster [DBG] pgmap v266: 353 pgs: 32 unknown, 32 creating+peering, 6 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 267 active+clean; 458 KiB data, 659 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:46.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: cluster 2026-03-10T10:18:44.407168+0000 mgr.y (mgr.24422) 213 : cluster [DBG] pgmap v266: 353 pgs: 32 unknown, 32 creating+peering, 6 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 267 active+clean; 458 KiB data, 659 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:18:46.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: cluster 2026-03-10T10:18:44.964803+0000 mon.a (mon.0) 1805 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T10:18:46.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: cluster 2026-03-10T10:18:44.964803+0000 mon.a (mon.0) 1805 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY) 2026-03-10T10:18:46.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: audit 2026-03-10T10:18:45.464928+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:46.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: audit 2026-03-10T10:18:45.464928+0000 mon.c (mon.2) 305 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:46.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: cluster 2026-03-10T10:18:45.991430+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T10:18:46.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: cluster 2026-03-10T10:18:45.991430+0000 mon.a (mon.0) 1806 : cluster [DBG] osdmap e208: 8 total, 8 up, 8 in 2026-03-10T10:18:46.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: audit 2026-03-10T10:18:45.992393+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 
192.168.123.104:0/2495138609' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: audit 2026-03-10T10:18:45.992393+0000 mon.c (mon.2) 306 : audit [INF] from='client.? 192.168.123.104:0/2495138609' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: audit 2026-03-10T10:18:45.997445+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:46.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:46 vm04 bash[28289]: audit 2026-03-10T10:18:45.997445+0000 mon.a (mon.0) 1807 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:18:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:47 vm07 bash[23367]: audit 2026-03-10T10:18:46.013489+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:47 vm07 bash[23367]: audit 2026-03-10T10:18:46.013489+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:47 vm07 bash[23367]: audit 2026-03-10T10:18:46.465631+0000 mon.c (mon.2) 307 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:47 vm07 bash[23367]: audit 2026-03-10T10:18:46.465631+0000 mon.c (mon.2) 307 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:47 vm04 bash[28289]: audit 2026-03-10T10:18:46.013489+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:18:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:47 vm04 bash[28289]: audit 2026-03-10T10:18:46.013489+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:47 vm04 bash[28289]: audit 2026-03-10T10:18:46.465631+0000 mon.c (mon.2) 307 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:47 vm04 bash[20742]: audit 2026-03-10T10:18:46.013489+0000 mon.a (mon.0) 1808 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:47 vm04 bash[20742]: audit 2026-03-10T10:18:46.465631+0000 mon.c (mon.2) 307 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:48.739 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:18:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: cluster 2026-03-10T10:18:46.407620+0000 mgr.y (mgr.24422) 214 : cluster [DBG] pgmap v269: 353 pgs: 64 unknown, 6 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 267 active+clean; 458 KiB data, 659 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:47.466484+0000 mon.c (mon.2) 308 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:47.488860+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm04-59252-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm04-59252-36"}]': finished
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:47.489170+0000 mon.a (mon.0) 1810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:47.489377+0000 mon.a (mon.0) 1811 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: cluster 2026-03-10T10:18:47.672805+0000 mon.a (mon.0) 1812 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:47.674371+0000 mon.a (mon.0) 1813 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-27"}]: dispatch
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:48.467208+0000 mon.c (mon.2) 309 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:48.492423+0000 mon.a (mon.0) 1814 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-27"}]': finished
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: cluster 2026-03-10T10:18:48.495732+0000 mon.a (mon.0) 1815 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:48.495853+0000 mon.a (mon.0) 1816 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-27", "mode": "writeback"}]: dispatch
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:48.516168+0000 mon.c (mon.2) 310 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:48.517002+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:48.517974+0000 mon.c (mon.2) 311 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:48.518512+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:48.519262+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:49.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:48 vm07 bash[23367]: audit 2026-03-10T10:18:48.519835+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
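The SimpleWritePP entries above recreate an erasure-code profile and, a moment later in this stretch, an erasure-coded pool built from it. Outside the test harness, the same mon commands can be issued with the ceph CLI roughly as follows (a sketch for orientation only; the profile and pool names are the test's generated ones):

    # recreate the profile from scratch: drop the old profile and its CRUSH rule
    ceph osd erasure-code-profile rm testprofile-SimpleWritePP_vm04-59259-46
    ceph osd crush rule rm SimpleWritePP_vm04-59259-46
    # k=2 data chunks, m=1 coding chunk, failure domain at the OSD level
    ceph osd erasure-code-profile set testprofile-SimpleWritePP_vm04-59259-46 k=2 m=1 crush-failure-domain=osd
    # create the 8-PG erasure-coded pool from the profile (pg_num and pgp_num both 8)
    ceph osd pool create SimpleWritePP_vm04-59259-46 8 8 erasure testprofile-SimpleWritePP_vm04-59259-46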
2026-03-10T10:18:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: cluster 2026-03-10T10:18:46.407620+0000 mgr.y (mgr.24422) 214 : cluster [DBG] pgmap v269: 353 pgs: 64 unknown, 6 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 267 active+clean; 458 KiB data, 659 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:47.466484+0000 mon.c (mon.2) 308 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:47.488860+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm04-59252-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm04-59252-36"}]': finished
2026-03-10T10:18:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:47.489170+0000 mon.a (mon.0) 1810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:47.489377+0000 mon.a (mon.0) 1811 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: cluster 2026-03-10T10:18:47.672805+0000 mon.a (mon.0) 1812 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:47.674371+0000 mon.a (mon.0) 1813 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-27"}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: cluster 2026-03-10T10:18:46.407620+0000 mgr.y (mgr.24422) 214 : cluster [DBG] pgmap v269: 353 pgs: 64 unknown, 6 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 267 active+clean; 458 KiB data, 659 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:47.466484+0000 mon.c (mon.2) 308 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:47.488860+0000 mon.a (mon.0) 1809 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsync_vm04-59252-36", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsync_vm04-59252-36"}]': finished
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:47.489170+0000 mon.a (mon.0) 1810 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "XattrListPP_vm04-59259-45","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:47.489377+0000 mon.a (mon.0) 1811 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: cluster 2026-03-10T10:18:47.672805+0000 mon.a (mon.0) 1812 : cluster [DBG] osdmap e209: 8 total, 8 up, 8 in
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:47.674371+0000 mon.a (mon.0) 1813 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-27"}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:48.467208+0000 mon.c (mon.2) 309 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:48.492423+0000 mon.a (mon.0) 1814 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-27"}]': finished
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: cluster 2026-03-10T10:18:48.495732+0000 mon.a (mon.0) 1815 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:48.495853+0000 mon.a (mon.0) 1816 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-27", "mode": "writeback"}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:48.516168+0000 mon.c (mon.2) 310 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:48.517002+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:48.517974+0000 mon.c (mon.2) 311 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:48.518512+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:48.519262+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:48 vm04 bash[28289]: audit 2026-03-10T10:18:48.519835+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:48.467208+0000 mon.c (mon.2) 309 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:48.492423+0000 mon.a (mon.0) 1814 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-27"}]': finished
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: cluster 2026-03-10T10:18:48.495732+0000 mon.a (mon.0) 1815 : cluster [DBG] osdmap e210: 8 total, 8 up, 8 in
2026-03-10T10:18:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:48.495853+0000 mon.a (mon.0) 1816 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-27", "mode": "writeback"}]: dispatch
2026-03-10T10:18:49.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:48.516168+0000 mon.c (mon.2) 310 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:48.517002+0000 mon.a (mon.0) 1817 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:48.517974+0000 mon.c (mon.2) 311 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:48.518512+0000 mon.a (mon.0) 1818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:49.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:48.519262+0000 mon.c (mon.2) 312 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:49.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:48 vm04 bash[20742]: audit 2026-03-10T10:18:48.519835+0000 mon.a (mon.0) 1819 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:49 vm07 bash[23367]: audit 2026-03-10T10:18:48.324385+0000 mgr.y (mgr.24422) 215 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:49 vm07 bash[23367]: cluster 2026-03-10T10:18:48.408242+0000 mgr.y (mgr.24422) 216 : cluster [DBG] pgmap v271: 361 pgs: 5 creating+peering, 34 unknown, 4 peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 303 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 228 B/s rd, 1.1 KiB/s wr, 1 op/s
2026-03-10T10:18:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:49 vm07 bash[23367]: audit 2026-03-10T10:18:49.468010+0000 mon.c (mon.2) 313 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:49 vm07 bash[23367]: cluster 2026-03-10T10:18:49.492661+0000 mon.a (mon.0) 1820 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:18:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:49 vm07 bash[23367]: audit 2026-03-10T10:18:49.495359+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-27", "mode": "writeback"}]': finished
2026-03-10T10:18:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:49 vm07 bash[23367]: audit 2026-03-10T10:18:49.495463+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
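The test-rados-api entries above trace a complete cache-tier attach: tier add with --force-nonempty, set-overlay, then cache-mode writeback, each dispatched and then reported finished by mon.a. As a plain CLI sequence this corresponds roughly to (a sketch, not part of the captured run; the pool names are the test's generated ones):

    # attach the cache pool to the base pool even though the cache pool is not empty
    ceph osd tier add test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-27 --force-nonempty
    # route client I/O for the base pool through the cache pool
    ceph osd tier set-overlay test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-27
    # serve reads and writes from the cache tier
    ceph osd tier cache-mode test-rados-api-vm04-59491-27 writeback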
2026-03-10T10:18:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:49 vm07 bash[23367]: audit 2026-03-10T10:18:49.500183+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:49 vm07 bash[23367]: cluster 2026-03-10T10:18:49.500952+0000 mon.a (mon.0) 1823 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in
2026-03-10T10:18:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:49 vm07 bash[23367]: audit 2026-03-10T10:18:49.501787+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:49 vm07 bash[23367]: audit 2026-03-10T10:18:49.502181+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]: dispatch
2026-03-10T10:18:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:49 vm04 bash[28289]: audit 2026-03-10T10:18:48.324385+0000 mgr.y (mgr.24422) 215 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:49 vm04 bash[28289]: cluster 2026-03-10T10:18:48.408242+0000 mgr.y (mgr.24422) 216 : cluster [DBG] pgmap v271: 361 pgs: 5 creating+peering, 34 unknown, 4 peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 303 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 228 B/s rd, 1.1 KiB/s wr, 1 op/s
2026-03-10T10:18:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:49 vm04 bash[28289]: audit 2026-03-10T10:18:49.468010+0000 mon.c (mon.2) 313 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:49 vm04 bash[28289]: cluster 2026-03-10T10:18:49.492661+0000 mon.a (mon.0) 1820 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:18:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:49 vm04 bash[28289]: audit 2026-03-10T10:18:49.495359+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-27", "mode": "writeback"}]': finished
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:49 vm04 bash[28289]: audit 2026-03-10T10:18:49.495463+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:49 vm04 bash[28289]: audit 2026-03-10T10:18:49.500183+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:49 vm04 bash[28289]: cluster 2026-03-10T10:18:49.500952+0000 mon.a (mon.0) 1823 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:49 vm04 bash[28289]: audit 2026-03-10T10:18:49.501787+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:49 vm04 bash[28289]: audit 2026-03-10T10:18:49.502181+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]: dispatch
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:49 vm04 bash[20742]: audit 2026-03-10T10:18:48.324385+0000 mgr.y (mgr.24422) 215 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:49 vm04 bash[20742]: cluster 2026-03-10T10:18:48.408242+0000 mgr.y (mgr.24422) 216 : cluster [DBG] pgmap v271: 361 pgs: 5 creating+peering, 34 unknown, 4 peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 303 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 228 B/s rd, 1.1 KiB/s wr, 1 op/s
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:49 vm04 bash[20742]: audit 2026-03-10T10:18:49.468010+0000 mon.c (mon.2) 313 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:49 vm04 bash[20742]: cluster 2026-03-10T10:18:49.492661+0000 mon.a (mon.0) 1820 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:49 vm04 bash[20742]: audit 2026-03-10T10:18:49.495359+0000 mon.a (mon.0) 1821 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-27", "mode": "writeback"}]': finished
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:49 vm04 bash[20742]: audit 2026-03-10T10:18:49.495463+0000 mon.a (mon.0) 1822 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-46", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:49 vm04 bash[20742]: audit 2026-03-10T10:18:49.500183+0000 mon.c (mon.2) 314 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:49 vm04 bash[20742]: cluster 2026-03-10T10:18:49.500952+0000 mon.a (mon.0) 1823 : cluster [DBG] osdmap e211: 8 total, 8 up, 8 in
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:49 vm04 bash[20742]: audit 2026-03-10T10:18:49.501787+0000 mon.a (mon.0) 1824 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:50.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:49 vm04 bash[20742]: audit 2026-03-10T10:18:49.502181+0000 mon.a (mon.0) 1825 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]: dispatch
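The CACHE_POOL_NO_HIT_SET warning relayed by all three monitor journals above is a direct consequence of enabling writeback mode without configuring hit sets on the cache pool. A cache pool would normally clear it with hit-set parameters along these lines (a sketch; the values below are illustrative only, not anything this run sets):

    # track object access with bloom-filter hit sets so the tiering agent has
    # data for promotion and eviction decisions; values are illustrative
    ceph osd pool set test-rados-api-vm04-59491-27 hit_set_type bloom
    ceph osd pool set test-rados-api-vm04-59491-27 hit_set_count 8
    ceph osd pool set test-rados-api-vm04-59491-27 hit_set_period 60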
2026-03-10T10:18:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:50 vm07 bash[23367]: audit 2026-03-10T10:18:50.409564+0000 mon.a (mon.0) 1826 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "28"}]: dispatch
2026-03-10T10:18:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:50 vm07 bash[23367]: audit 2026-03-10T10:18:50.468796+0000 mon.c (mon.2) 315 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:50 vm07 bash[23367]: audit 2026-03-10T10:18:50.499536+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]': finished
2026-03-10T10:18:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:50 vm07 bash[23367]: audit 2026-03-10T10:18:50.499682+0000 mon.a (mon.0) 1828 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "28"}]': finished
2026-03-10T10:18:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:50 vm07 bash[23367]: cluster 2026-03-10T10:18:50.527539+0000 mon.a (mon.0) 1829 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in
2026-03-10T10:18:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:50 vm07 bash[23367]: audit 2026-03-10T10:18:50.528158+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]: dispatch
2026-03-10T10:18:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:50 vm07 bash[23367]: audit 2026-03-10T10:18:50.550706+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:50 vm04 bash[28289]: audit 2026-03-10T10:18:50.409564+0000 mon.a (mon.0) 1826 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "28"}]: dispatch
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:50 vm04 bash[28289]: audit 2026-03-10T10:18:50.468796+0000 mon.c (mon.2) 315 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:50 vm04 bash[28289]: audit 2026-03-10T10:18:50.499536+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]': finished
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:50 vm04 bash[28289]: audit 2026-03-10T10:18:50.499682+0000 mon.a (mon.0) 1828 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "28"}]': finished
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:50 vm04 bash[28289]: cluster 2026-03-10T10:18:50.527539+0000 mon.a (mon.0) 1829 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:50 vm04 bash[28289]: audit 2026-03-10T10:18:50.528158+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]: dispatch
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:50 vm04 bash[28289]: audit 2026-03-10T10:18:50.550706+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:50 vm04 bash[20742]: audit 2026-03-10T10:18:50.409564+0000 mon.a (mon.0) 1826 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "28"}]: dispatch
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:50 vm04 bash[20742]: audit 2026-03-10T10:18:50.468796+0000 mon.c (mon.2) 315 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:50 vm04 bash[20742]: audit 2026-03-10T10:18:50.499536+0000 mon.a (mon.0) 1827 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsync_vm04-59252-36"}]': finished
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:50 vm04 bash[20742]: audit 2026-03-10T10:18:50.499682+0000 mon.a (mon.0) 1828 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "28"}]': finished
2026-03-10T10:18:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:50 vm04 bash[20742]: cluster 2026-03-10T10:18:50.527539+0000 mon.a (mon.0) 1829 : cluster [DBG] osdmap e212: 8 total, 8 up, 8 in
2026-03-10T10:18:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:50 vm04 bash[20742]: audit 2026-03-10T10:18:50.528158+0000 mon.a (mon.0) 1830 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]: dispatch
2026-03-10T10:18:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:50 vm04 bash[20742]: audit 2026-03-10T10:18:50.550706+0000 mon.a (mon.0) 1831 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: cluster 2026-03-10T10:18:50.408648+0000 mgr.y (mgr.24422) 217 : cluster [DBG] pgmap v274: 321 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.6 KiB/s wr, 2 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: cluster 2026-03-10T10:18:50.408648+0000 mgr.y (mgr.24422) 217 : cluster [DBG] pgmap v274: 321 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.6 KiB/s wr, 2 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: cluster 2026-03-10T10:18:50.743461+0000 mon.a (mon.0) 1832 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: cluster 2026-03-10T10:18:50.743461+0000 mon.a (mon.0) 1832 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: cluster 2026-03-10T10:18:50.743490+0000 mon.a (mon.0) 1833 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: cluster 2026-03-10T10:18:50.743490+0000 mon.a (mon.0) 1833 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.421688+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]': finished 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.421688+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]': finished 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.421746+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]': finished 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.421746+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? 
192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]': finished 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.422094+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.422094+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: cluster 2026-03-10T10:18:51.427198+0000 mon.a (mon.0) 1837 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-10T10:18:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: cluster 2026-03-10T10:18:51.427198+0000 mon.a (mon.0) 1837 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.442362+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.442362+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: cluster 2026-03-10T10:18:50.408648+0000 mgr.y (mgr.24422) 217 : cluster [DBG] pgmap v274: 321 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.6 KiB/s wr, 2 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: cluster 2026-03-10T10:18:50.408648+0000 mgr.y (mgr.24422) 217 : cluster [DBG] pgmap v274: 321 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.6 KiB/s wr, 2 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: cluster 2026-03-10T10:18:50.743461+0000 mon.a (mon.0) 1832 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: cluster 2026-03-10T10:18:50.743461+0000 mon.a (mon.0) 1832 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: cluster 2026-03-10T10:18:50.743490+0000 mon.a (mon.0) 1833 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T10:18:52.204 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: cluster 2026-03-10T10:18:50.743490+0000 mon.a (mon.0) 1833 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.421688+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]': finished 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.421688+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]': finished 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.421746+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]': finished 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.421746+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]': finished 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.422094+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.422094+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: cluster 2026-03-10T10:18:51.427198+0000 mon.a (mon.0) 1837 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: cluster 2026-03-10T10:18:51.427198+0000 mon.a (mon.0) 1837 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.442362+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.442362+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.453269+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.453269+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.454451+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.454451+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.455998+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.455998+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.457382+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.457382+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.458734+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.458734+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.460067+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.460067+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.469603+0000 mon.c (mon.2) 316 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:51 vm04 bash[28289]: audit 2026-03-10T10:18:51.469603+0000 mon.c (mon.2) 316 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.453269+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.453269+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.454451+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.454451+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.455998+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.455998+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.457382+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.457382+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.458734+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.458734+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.460067+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.460067+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.469603+0000 mon.c (mon.2) 316 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:51 vm04 bash[20742]: audit 2026-03-10T10:18:51.469603+0000 mon.c (mon.2) 316 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: cluster 2026-03-10T10:18:50.408648+0000 mgr.y (mgr.24422) 217 : cluster [DBG] pgmap v274: 321 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.6 KiB/s wr, 2 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: cluster 2026-03-10T10:18:50.408648+0000 mgr.y (mgr.24422) 217 : cluster [DBG] pgmap v274: 321 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.6 KiB/s wr, 2 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: cluster 2026-03-10T10:18:50.743461+0000 mon.a (mon.0) 1832 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: cluster 2026-03-10T10:18:50.743461+0000 mon.a (mon.0) 1832 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: cluster 2026-03-10T10:18:50.743490+0000 mon.a (mon.0) 1833 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: cluster 2026-03-10T10:18:50.743490+0000 mon.a (mon.0) 1833 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 1 pg peering) 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.421688+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]': finished 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.421688+0000 mon.a (mon.0) 1834 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-46", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-46"}]': finished 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.421746+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? 192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]': finished 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.421746+0000 mon.a (mon.0) 1835 : audit [INF] from='client.? 
192.168.123.104:0/1244303263' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsync_vm04-59252-36"}]': finished 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.422094+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.422094+0000 mon.a (mon.0) 1836 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: cluster 2026-03-10T10:18:51.427198+0000 mon.a (mon.0) 1837 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: cluster 2026-03-10T10:18:51.427198+0000 mon.a (mon.0) 1837 : cluster [DBG] osdmap e213: 8 total, 8 up, 8 in 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.442362+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.442362+0000 mon.a (mon.0) 1838 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.453269+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.453269+0000 mon.b (mon.1) 179 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.454451+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.454451+0000 mon.b (mon.1) 180 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.455998+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.455998+0000 mon.a (mon.0) 1839 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.457382+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.457382+0000 mon.b (mon.1) 181 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.458734+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.458734+0000 mon.a (mon.0) 1840 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.460067+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.460067+0000 mon.a (mon.0) 1841 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.469603+0000 mon.c (mon.2) 316 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:51 vm07 bash[23367]: audit 2026-03-10T10:18:51.469603+0000 mon.c (mon.2) 316 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:53.432 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:18:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:18:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:18:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:53 vm04 bash[28289]: cluster 2026-03-10T10:18:52.421875+0000 mon.a (mon.0) 1842 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:53 vm04 bash[28289]: cluster 2026-03-10T10:18:52.421875+0000 mon.a (mon.0) 1842 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:53 vm04 bash[28289]: audit 2026-03-10T10:18:52.470585+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:53 vm04 bash[28289]: audit 2026-03-10T10:18:52.470585+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:53 vm04 bash[20742]: cluster 2026-03-10T10:18:52.421875+0000 mon.a (mon.0) 1842 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:53 vm04 bash[20742]: cluster 2026-03-10T10:18:52.421875+0000 mon.a (mon.0) 1842 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:53 vm04 bash[20742]: audit 2026-03-10T10:18:52.470585+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:53 vm04 bash[20742]: audit 2026-03-10T10:18:52.470585+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:53 vm07 bash[23367]: cluster 2026-03-10T10:18:52.421875+0000 mon.a (mon.0) 1842 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:53 vm07 bash[23367]: cluster 2026-03-10T10:18:52.421875+0000 mon.a (mon.0) 1842 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:18:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:53 vm07 bash[23367]: audit 2026-03-10T10:18:52.470585+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:53.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:53 vm07 bash[23367]: audit 2026-03-10T10:18:52.470585+0000 mon.c (mon.2) 317 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: cluster 2026-03-10T10:18:52.409045+0000 mgr.y (mgr.24422) 218 : cluster [DBG] pgmap v277: 329 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: cluster 2026-03-10T10:18:52.409045+0000 mgr.y (mgr.24422) 218 : cluster [DBG] pgmap v277: 329 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:53.143310+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]': finished 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:53.143310+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]': finished 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:53.143520+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:53.143520+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: cluster 2026-03-10T10:18:53.266579+0000 mon.a (mon.0) 1845 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: cluster 2026-03-10T10:18:53.266579+0000 mon.a (mon.0) 1845 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:53.332210+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:53.332210+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 
192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:53.345716+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:53.345716+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:53.471539+0000 mon.c (mon.2) 318 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:53.471539+0000 mon.c (mon.2) 318 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: cluster 2026-03-10T10:18:54.157658+0000 mon.a (mon.0) 1847 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: cluster 2026-03-10T10:18:54.157658+0000 mon.a (mon.0) 1847 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:54.159014+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:54.159014+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:54.160161+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch 2026-03-10T10:18:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:54 vm04 bash[28289]: audit 2026-03-10T10:18:54.160161+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: cluster 2026-03-10T10:18:52.409045+0000 mgr.y (mgr.24422) 218 : cluster [DBG] pgmap v277: 329 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: cluster 2026-03-10T10:18:52.409045+0000 mgr.y (mgr.24422) 218 : cluster [DBG] pgmap v277: 329 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:53.143310+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]': finished 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:53.143310+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]': finished 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:53.143520+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:53.143520+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: cluster 2026-03-10T10:18:53.266579+0000 mon.a (mon.0) 1845 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: cluster 2026-03-10T10:18:53.266579+0000 mon.a (mon.0) 1845 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:53.332210+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:53.332210+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 
192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:53.345716+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:53.345716+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:53.471539+0000 mon.c (mon.2) 318 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:53.471539+0000 mon.c (mon.2) 318 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: cluster 2026-03-10T10:18:54.157658+0000 mon.a (mon.0) 1847 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: cluster 2026-03-10T10:18:54.157658+0000 mon.a (mon.0) 1847 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:54.159014+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:54.159014+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:54.160161+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch 2026-03-10T10:18:54.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:54 vm04 bash[20742]: audit 2026-03-10T10:18:54.160161+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch 2026-03-10T10:18:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: cluster 2026-03-10T10:18:52.409045+0000 mgr.y (mgr.24422) 218 : cluster [DBG] pgmap v277: 329 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: cluster 2026-03-10T10:18:52.409045+0000 mgr.y (mgr.24422) 218 : cluster [DBG] pgmap v277: 329 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 306 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s; 2 B/s, 1 objects/s recovering 2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: audit 2026-03-10T10:18:53.143310+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]': finished 2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: audit 2026-03-10T10:18:53.143310+0000 mon.a (mon.0) 1843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-27"}]': finished 2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: audit 2026-03-10T10:18:53.143520+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: audit 2026-03-10T10:18:53.143520+0000 mon.a (mon.0) 1844 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFull_vm04-59252-37", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: cluster 2026-03-10T10:18:53.266579+0000 mon.a (mon.0) 1845 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: cluster 2026-03-10T10:18:53.266579+0000 mon.a (mon.0) 1845 : cluster [DBG] osdmap e214: 8 total, 8 up, 8 in 2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: audit 2026-03-10T10:18:53.332210+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch 2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: audit 2026-03-10T10:18:53.332210+0000 mon.b (mon.1) 182 : audit [INF] from='client.? 
192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: audit 2026-03-10T10:18:53.345716+0000 mon.a (mon.0) 1846 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: audit 2026-03-10T10:18:53.471539+0000 mon.c (mon.2) 318 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: cluster 2026-03-10T10:18:54.157658+0000 mon.a (mon.0) 1847 : cluster [DBG] osdmap e215: 8 total, 8 up, 8 in
2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: audit 2026-03-10T10:18:54.159014+0000 mon.c (mon.2) 319 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:54 vm07 bash[23367]: audit 2026-03-10T10:18:54.160161+0000 mon.a (mon.0) 1848 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:55 vm04 bash[28289]: cluster 2026-03-10T10:18:54.409380+0000 mgr.y (mgr.24422) 219 : cluster [DBG] pgmap v280: 288 pgs: 1 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 271 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:55 vm04 bash[28289]: audit 2026-03-10T10:18:54.472304+0000 mon.c (mon.2) 320 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:55 vm04 bash[28289]: audit 2026-03-10T10:18:55.157115+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]': finished
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:55 vm04 bash[28289]: audit 2026-03-10T10:18:55.157218+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]': finished
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:55 vm04 bash[28289]: cluster 2026-03-10T10:18:55.167030+0000 mon.a (mon.0) 1851 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:55 vm04 bash[28289]: audit 2026-03-10T10:18:55.171129+0000 mon.a (mon.0) 1852 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:55 vm04 bash[28289]: audit 2026-03-10T10:18:55.189086+0000 mon.c (mon.2) 321 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:55 vm04 bash[28289]: audit 2026-03-10T10:18:55.189312+0000 mon.a (mon.0) 1853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:55 vm04 bash[20742]: cluster 2026-03-10T10:18:54.409380+0000 mgr.y (mgr.24422) 219 : cluster [DBG] pgmap v280: 288 pgs: 1 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 271 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:55 vm04 bash[20742]: audit 2026-03-10T10:18:54.472304+0000 mon.c (mon.2) 320 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:55 vm04 bash[20742]: audit 2026-03-10T10:18:55.157115+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]': finished
2026-03-10T10:18:55.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:55 vm04 bash[20742]: audit 2026-03-10T10:18:55.157218+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]': finished
2026-03-10T10:18:55.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:55 vm04 bash[20742]: cluster 2026-03-10T10:18:55.167030+0000 mon.a (mon.0) 1851 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in
2026-03-10T10:18:55.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:55 vm04 bash[20742]: audit 2026-03-10T10:18:55.171129+0000 mon.a (mon.0) 1852 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:55.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:55 vm04 bash[20742]: audit 2026-03-10T10:18:55.189086+0000 mon.c (mon.2) 321 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:55.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:55 vm04 bash[20742]: audit 2026-03-10T10:18:55.189312+0000 mon.a (mon.0) 1853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:55 vm07 bash[23367]: cluster 2026-03-10T10:18:54.409380+0000 mgr.y (mgr.24422) 219 : cluster [DBG] pgmap v280: 288 pgs: 1 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 271 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:55 vm07 bash[23367]: audit 2026-03-10T10:18:54.472304+0000 mon.c (mon.2) 320 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:55 vm07 bash[23367]: audit 2026-03-10T10:18:55.157115+0000 mon.a (mon.0) 1849 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFull_vm04-59252-37", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFull_vm04-59252-37"}]': finished
2026-03-10T10:18:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:55 vm07 bash[23367]: audit 2026-03-10T10:18:55.157218+0000 mon.a (mon.0) 1850 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-46"}]': finished
2026-03-10T10:18:56.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:55 vm07 bash[23367]: cluster 2026-03-10T10:18:55.167030+0000 mon.a (mon.0) 1851 : cluster [DBG] osdmap e216: 8 total, 8 up, 8 in
2026-03-10T10:18:56.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:55 vm07 bash[23367]: audit 2026-03-10T10:18:55.171129+0000 mon.a (mon.0) 1852 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-29","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:18:56.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:55 vm07 bash[23367]: audit 2026-03-10T10:18:55.189086+0000 mon.c (mon.2) 321 : audit [INF] from='client.? 192.168.123.104:0/619952875' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:56.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:55 vm07 bash[23367]: audit 2026-03-10T10:18:55.189312+0000 mon.a (mon.0) 1853 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]: dispatch
2026-03-10T10:18:56.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: audit 2026-03-10T10:18:55.472988+0000 mon.c (mon.2) 322 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:56.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: audit 2026-03-10T10:18:56.161263+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-29","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:56.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: audit 2026-03-10T10:18:56.161313+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]': finished
2026-03-10T10:18:56.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: cluster 2026-03-10T10:18:56.166934+0000 mon.a (mon.0) 1856 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in
2026-03-10T10:18:56.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: audit 2026-03-10T10:18:56.198898+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:56.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: audit 2026-03-10T10:18:56.210258+0000 mon.a (mon.0) 1857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:56.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: audit 2026-03-10T10:18:56.210585+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:56.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: audit 2026-03-10T10:18:56.218197+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:56.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: audit 2026-03-10T10:18:56.218899+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: audit 2026-03-10T10:18:56.224001+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:56 vm04 bash[28289]: audit 2026-03-10T10:18:56.473715+0000 mon.c (mon.2) 323 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: audit 2026-03-10T10:18:55.472988+0000 mon.c (mon.2) 322 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: audit 2026-03-10T10:18:56.161263+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-29","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: audit 2026-03-10T10:18:56.161313+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]': finished
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: cluster 2026-03-10T10:18:56.166934+0000 mon.a (mon.0) 1856 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: audit 2026-03-10T10:18:56.198898+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: audit 2026-03-10T10:18:56.210258+0000 mon.a (mon.0) 1857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: audit 2026-03-10T10:18:56.210585+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: audit 2026-03-10T10:18:56.218197+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: audit 2026-03-10T10:18:56.218899+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: audit 2026-03-10T10:18:56.224001+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:56.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:56 vm04 bash[20742]: audit 2026-03-10T10:18:56.473715+0000 mon.c (mon.2) 323 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:57.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: audit 2026-03-10T10:18:55.472988+0000 mon.c (mon.2) 322 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:57.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: audit 2026-03-10T10:18:56.161263+0000 mon.a (mon.0) 1854 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-29","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:18:57.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: audit 2026-03-10T10:18:56.161313+0000 mon.a (mon.0) 1855 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-46"}]': finished
2026-03-10T10:18:57.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: cluster 2026-03-10T10:18:56.166934+0000 mon.a (mon.0) 1856 : cluster [DBG] osdmap e217: 8 total, 8 up, 8 in
2026-03-10T10:18:57.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: audit 2026-03-10T10:18:56.198898+0000 mon.b (mon.1) 183 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:57.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: audit 2026-03-10T10:18:56.210258+0000 mon.a (mon.0) 1857 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:57.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: audit 2026-03-10T10:18:56.210585+0000 mon.b (mon.1) 184 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:57.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: audit 2026-03-10T10:18:56.218197+0000 mon.b (mon.1) 185 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:57.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: audit 2026-03-10T10:18:56.218899+0000 mon.a (mon.0) 1858 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:57.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: audit 2026-03-10T10:18:56.224001+0000 mon.a (mon.0) 1859 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:57.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:56 vm07 bash[23367]: audit 2026-03-10T10:18:56.473715+0000 mon.c (mon.2) 323 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:57 vm04 bash[28289]: cluster 2026-03-10T10:18:56.409798+0000 mgr.y (mgr.24422) 220 : cluster [DBG] pgmap v283: 328 pgs: 40 unknown, 1 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 271 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:57 vm04 bash[28289]: cluster 2026-03-10T10:18:57.189709+0000 mon.a (mon.0) 1860 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:18:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:57 vm04 bash[28289]: audit 2026-03-10T10:18:57.192972+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:57 vm04 bash[28289]: audit 2026-03-10T10:18:57.196629+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:57 vm04 bash[28289]: audit 2026-03-10T10:18:57.197189+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:57 vm04 bash[28289]: cluster 2026-03-10T10:18:57.198697+0000 mon.a (mon.0) 1862 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in
2026-03-10T10:18:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:57 vm04 bash[28289]: audit 2026-03-10T10:18:57.200489+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:57 vm04 bash[28289]: audit 2026-03-10T10:18:57.205696+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:57.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:57 vm04 bash[20742]: cluster 2026-03-10T10:18:56.409798+0000 mgr.y (mgr.24422) 220 : cluster [DBG] pgmap v283: 328 pgs: 40 unknown, 1 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 271 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:57.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:57 vm04 bash[20742]: cluster 2026-03-10T10:18:57.189709+0000 mon.a (mon.0) 1860 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:18:57.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:57 vm04 bash[20742]: audit 2026-03-10T10:18:57.192972+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:57.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:57 vm04 bash[20742]: audit 2026-03-10T10:18:57.196629+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:57.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:57 vm04 bash[20742]: audit 2026-03-10T10:18:57.197189+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:57.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:57 vm04 bash[20742]: cluster 2026-03-10T10:18:57.198697+0000 mon.a (mon.0) 1862 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in
2026-03-10T10:18:57.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:57 vm04 bash[20742]: audit 2026-03-10T10:18:57.200489+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:57.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:57 vm04 bash[20742]: audit 2026-03-10T10:18:57.205696+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:58.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:57 vm07 bash[23367]: cluster 2026-03-10T10:18:56.409798+0000 mgr.y (mgr.24422) 220 : cluster [DBG] pgmap v283: 328 pgs: 40 unknown, 1 peering, 9 active+clean+snaptrim_wait, 7 active+clean+snaptrim, 271 active+clean; 458 KiB data, 660 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:18:58.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:57 vm07 bash[23367]: cluster 2026-03-10T10:18:57.189709+0000 mon.a (mon.0) 1860 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:18:58.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:57 vm07 bash[23367]: audit 2026-03-10T10:18:57.192972+0000 mon.a (mon.0) 1861 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleWritePP_vm04-59259-47", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:18:58.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:57 vm07 bash[23367]: audit 2026-03-10T10:18:57.196629+0000 mon.b (mon.1) 186 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:58.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:57 vm07 bash[23367]: audit 2026-03-10T10:18:57.197189+0000 mon.b (mon.1) 187 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:58.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:57 vm07 bash[23367]: cluster 2026-03-10T10:18:57.198697+0000 mon.a (mon.0) 1862 : cluster [DBG] osdmap e218: 8 total, 8 up, 8 in
2026-03-10T10:18:58.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:57 vm07 bash[23367]: audit 2026-03-10T10:18:57.200489+0000 mon.a (mon.0) 1863 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch
2026-03-10T10:18:58.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:57 vm07 bash[23367]: audit 2026-03-10T10:18:57.205696+0000 mon.a (mon.0) 1864 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:58.631 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:18:58 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:58 vm04 bash[28289]: audit 2026-03-10T10:18:57.612073+0000 mon.c (mon.2) 324 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:58 vm04 bash[28289]: audit 2026-03-10T10:18:57.804900+0000 mon.a (mon.0) 1865 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:58 vm04 bash[28289]: audit 2026-03-10T10:18:58.196059+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]': finished
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:58 vm04 bash[28289]: audit 2026-03-10T10:18:58.201060+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:58 vm04 bash[28289]: cluster 2026-03-10T10:18:58.203150+0000 mon.a (mon.0) 1867 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:58 vm04 bash[28289]: audit 2026-03-10T10:18:58.203893+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:58 vm04 bash[28289]: audit 2026-03-10T10:18:58.214976+0000 mon.a (mon.0) 1869 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:58 vm04 bash[20742]: audit 2026-03-10T10:18:57.612073+0000 mon.c (mon.2) 324 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:58 vm04 bash[20742]: audit 2026-03-10T10:18:57.804900+0000 mon.a (mon.0) 1865 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:58 vm04 bash[20742]: audit 2026-03-10T10:18:58.196059+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]': finished
2026-03-10T10:18:58.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:58 vm04 bash[20742]: audit 2026-03-10T10:18:58.201060+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:58 vm04 bash[20742]: cluster 2026-03-10T10:18:58.203150+0000 mon.a (mon.0) 1867 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in
2026-03-10T10:18:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:58 vm04 bash[20742]: audit 2026-03-10T10:18:58.203893+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:58.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:58 vm04 bash[20742]: audit 2026-03-10T10:18:58.214976+0000 mon.a (mon.0) 1869 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:59.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:58 vm07 bash[23367]: audit 2026-03-10T10:18:57.612073+0000 mon.c (mon.2) 324 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:59.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:58 vm07 bash[23367]: audit 2026-03-10T10:18:57.804900+0000 mon.a (mon.0) 1865 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:18:59.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:58 vm07 bash[23367]: audit 2026-03-10T10:18:58.196059+0000 mon.a (mon.0) 1866 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFull_vm04-59252-37"}]': finished
2026-03-10T10:18:59.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:58 vm07 bash[23367]: audit 2026-03-10T10:18:58.201060+0000 mon.b (mon.1) 188 : audit [INF] from='client.? 192.168.123.104:0/2795374799' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:59.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:58 vm07 bash[23367]: cluster 2026-03-10T10:18:58.203150+0000 mon.a (mon.0) 1867 : cluster [DBG] osdmap e219: 8 total, 8 up, 8 in
2026-03-10T10:18:59.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:58 vm07 bash[23367]: audit 2026-03-10T10:18:58.203893+0000 mon.a (mon.0) 1868 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]: dispatch
2026-03-10T10:18:59.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:58 vm07 bash[23367]: audit 2026-03-10T10:18:58.214976+0000 mon.a (mon.0) 1869 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:18:59.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:58 vm04 bash[28289]: audit 2026-03-10T10:18:58.335028+0000 mgr.y (mgr.24422) 221 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:59.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: cluster 2026-03-10T10:18:58.410365+0000 mgr.y (mgr.24422) 222 : cluster [DBG] pgmap v286: 320 pgs: 16 unknown, 1 peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 288 active+clean; 458 KiB data, 665 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:18:59.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:58.613074+0000 mon.c (mon.2) 325 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:59.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.311642+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-47"}]': finished
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.311797+0000 mon.a (mon.0) 1871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]': finished
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.311797+0000 mon.a (mon.0) 1871 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]': finished 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.311912+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.311912+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: cluster 2026-03-10T10:18:59.316825+0000 mon.a (mon.0) 1873 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: cluster 2026-03-10T10:18:59.316825+0000 mon.a (mon.0) 1873 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.317457+0000 mon.a (mon.0) 1874 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-29"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.317457+0000 mon.a (mon.0) 1874 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-29"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.349233+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.349233+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.352056+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.352056+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.352889+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.352889+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.353294+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.353294+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.353877+0000 mon.c (mon.2) 328 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.353877+0000 mon.c (mon.2) 328 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.354234+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.354234+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.613851+0000 mon.c (mon.2) 329 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:18:59 vm04 bash[28289]: audit 2026-03-10T10:18:59.613851+0000 mon.c (mon.2) 329 : audit [DBG] from='client.? 
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:58.335028+0000 mgr.y (mgr.24422) 221 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: cluster 2026-03-10T10:18:58.410365+0000 mgr.y (mgr.24422) 222 : cluster [DBG] pgmap v286: 320 pgs: 16 unknown, 1 peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 288 active+clean; 458 KiB data, 665 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:58.613074+0000 mon.c (mon.2) 325 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.311642+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-47"}]': finished
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.311797+0000 mon.a (mon.0) 1871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]': finished
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.311912+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: cluster 2026-03-10T10:18:59.316825+0000 mon.a (mon.0) 1873 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.317457+0000 mon.a (mon.0) 1874 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-29"}]: dispatch
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.349233+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.352056+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.352889+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.353294+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:18:59.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.353877+0000 mon.c (mon.2) 328 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:59.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.354234+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:18:59.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:18:59 vm04 bash[20742]: audit 2026-03-10T10:18:59.613851+0000 mon.c (mon.2) 329 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:00.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:58.335028+0000 mgr.y (mgr.24422) 221 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: cluster 2026-03-10T10:18:58.410365+0000 mgr.y (mgr.24422) 222 : cluster [DBG] pgmap v286: 320 pgs: 16 unknown, 1 peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 288 active+clean; 458 KiB data, 665 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:58.613074+0000 mon.c (mon.2) 325 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.311642+0000 mon.a (mon.0) 1870 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleWritePP_vm04-59259-47", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleWritePP_vm04-59259-47"}]': finished
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.311797+0000 mon.a (mon.0) 1871 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFull_vm04-59252-37"}]': finished
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.311912+0000 mon.a (mon.0) 1872 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: cluster 2026-03-10T10:18:59.316825+0000 mon.a (mon.0) 1873 : cluster [DBG] osdmap e220: 8 total, 8 up, 8 in
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.317457+0000 mon.a (mon.0) 1874 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-29"}]: dispatch
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.349233+0000 mon.c (mon.2) 326 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.352056+0000 mon.a (mon.0) 1875 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.352889+0000 mon.c (mon.2) 327 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.353294+0000 mon.a (mon.0) 1876 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.353877+0000 mon.c (mon.2) 328 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.354234+0000 mon.a (mon.0) 1877 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:18:59 vm07 bash[23367]: audit 2026-03-10T10:18:59.613851+0000 mon.c (mon.2) 329 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:01 vm04 bash[28289]: audit 2026-03-10T10:19:00.386459+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-29"}]': finished
2026-03-10T10:19:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:01 vm04 bash[28289]: audit 2026-03-10T10:19:00.386507+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:01 vm04 bash[28289]: cluster 2026-03-10T10:19:00.392795+0000 mon.a (mon.0) 1880 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in
2026-03-10T10:19:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:01 vm04 bash[28289]: audit 2026-03-10T10:19:00.393662+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-29", "mode": "writeback"}]: dispatch
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:01 vm04 bash[28289]: audit 2026-03-10T10:19:00.394982+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:01 vm04 bash[28289]: audit 2026-03-10T10:19:00.395724+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:01 vm04 bash[28289]: cluster 2026-03-10T10:19:00.410815+0000 mgr.y (mgr.24422) 223 : cluster [DBG] pgmap v289: 328 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 4.4 MiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:01 vm04 bash[28289]: audit 2026-03-10T10:19:00.614755+0000 mon.c (mon.2) 331 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:01 vm04 bash[20742]: audit 2026-03-10T10:19:00.386459+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-29"}]': finished
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:01 vm04 bash[20742]: audit 2026-03-10T10:19:00.386507+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:01 vm04 bash[20742]: cluster 2026-03-10T10:19:00.392795+0000 mon.a (mon.0) 1880 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:01 vm04 bash[20742]: audit 2026-03-10T10:19:00.393662+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-29", "mode": "writeback"}]: dispatch
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:01 vm04 bash[20742]: audit 2026-03-10T10:19:00.394982+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:01 vm04 bash[20742]: audit 2026-03-10T10:19:00.395724+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:01 vm04 bash[20742]: cluster 2026-03-10T10:19:00.410815+0000 mgr.y (mgr.24422) 223 : cluster [DBG] pgmap v289: 328 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 4.4 MiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s
2026-03-10T10:19:01.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:01 vm04 bash[20742]: audit 2026-03-10T10:19:00.614755+0000 mon.c (mon.2) 331 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:01 vm07 bash[23367]: audit 2026-03-10T10:19:00.386459+0000 mon.a (mon.0) 1878 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-29"}]': finished
2026-03-10T10:19:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:01 vm07 bash[23367]: audit 2026-03-10T10:19:00.386507+0000 mon.a (mon.0) 1879 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStat_vm04-59252-38", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:01 vm07 bash[23367]: cluster 2026-03-10T10:19:00.392795+0000 mon.a (mon.0) 1880 : cluster [DBG] osdmap e221: 8 total, 8 up, 8 in
2026-03-10T10:19:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:01 vm07 bash[23367]: audit 2026-03-10T10:19:00.393662+0000 mon.a (mon.0) 1881 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-29", "mode": "writeback"}]: dispatch
2026-03-10T10:19:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:01 vm07 bash[23367]: audit 2026-03-10T10:19:00.394982+0000 mon.c (mon.2) 330 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]: dispatch
2026-03-10T10:19:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:01 vm07 bash[23367]: audit 2026-03-10T10:19:00.395724+0000 mon.a (mon.0) 1882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:01 vm07 bash[23367]: cluster 2026-03-10T10:19:00.410815+0000 mgr.y (mgr.24422) 223 : cluster [DBG] pgmap v289: 328 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 4.4 MiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-10T10:19:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:01 vm07 bash[23367]: cluster 2026-03-10T10:19:00.410815+0000 mgr.y (mgr.24422) 223 : cluster [DBG] pgmap v289: 328 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 4.4 MiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.0 MiB/s wr, 1 op/s 2026-03-10T10:19:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:01 vm07 bash[23367]: audit 2026-03-10T10:19:00.614755+0000 mon.c (mon.2) 331 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:01.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:01 vm07 bash[23367]: audit 2026-03-10T10:19:00.614755+0000 mon.c (mon.2) 331 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: cluster 2026-03-10T10:19:01.386916+0000 mon.a (mon.0) 1883 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: cluster 2026-03-10T10:19:01.386916+0000 mon.a (mon.0) 1883 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: audit 2026-03-10T10:19:01.389632+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-29", "mode": "writeback"}]': finished 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: audit 2026-03-10T10:19:01.389632+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-29", "mode": "writeback"}]': finished 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: audit 2026-03-10T10:19:01.395471+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: audit 2026-03-10T10:19:01.395471+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 
192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: cluster 2026-03-10T10:19:01.396400+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: cluster 2026-03-10T10:19:01.396400+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: audit 2026-03-10T10:19:01.398370+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: audit 2026-03-10T10:19:01.398370+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: audit 2026-03-10T10:19:01.615595+0000 mon.c (mon.2) 332 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:02 vm04 bash[28289]: audit 2026-03-10T10:19:01.615595+0000 mon.c (mon.2) 332 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: cluster 2026-03-10T10:19:01.386916+0000 mon.a (mon.0) 1883 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: cluster 2026-03-10T10:19:01.386916+0000 mon.a (mon.0) 1883 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: audit 2026-03-10T10:19:01.389632+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-29", "mode": "writeback"}]': finished 2026-03-10T10:19:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: audit 2026-03-10T10:19:01.389632+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-29", "mode": "writeback"}]': finished 2026-03-10T10:19:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: audit 2026-03-10T10:19:01.395471+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: audit 2026-03-10T10:19:01.395471+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 
192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: cluster 2026-03-10T10:19:01.396400+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T10:19:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: cluster 2026-03-10T10:19:01.396400+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T10:19:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: audit 2026-03-10T10:19:01.398370+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: audit 2026-03-10T10:19:01.398370+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: audit 2026-03-10T10:19:01.615595+0000 mon.c (mon.2) 332 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:02 vm04 bash[20742]: audit 2026-03-10T10:19:01.615595+0000 mon.c (mon.2) 332 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: cluster 2026-03-10T10:19:01.386916+0000 mon.a (mon.0) 1883 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: cluster 2026-03-10T10:19:01.386916+0000 mon.a (mon.0) 1883 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: audit 2026-03-10T10:19:01.389632+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-29", "mode": "writeback"}]': finished 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: audit 2026-03-10T10:19:01.389632+0000 mon.a (mon.0) 1884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-29", "mode": "writeback"}]': finished 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: audit 2026-03-10T10:19:01.395471+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: audit 2026-03-10T10:19:01.395471+0000 mon.b (mon.1) 189 : audit [INF] from='client.? 
192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: cluster 2026-03-10T10:19:01.396400+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: cluster 2026-03-10T10:19:01.396400+0000 mon.a (mon.0) 1885 : cluster [DBG] osdmap e222: 8 total, 8 up, 8 in 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: audit 2026-03-10T10:19:01.398370+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: audit 2026-03-10T10:19:01.398370+0000 mon.a (mon.0) 1886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: audit 2026-03-10T10:19:01.615595+0000 mon.c (mon.2) 332 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:02 vm07 bash[23367]: audit 2026-03-10T10:19:01.615595+0000 mon.c (mon.2) 332 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:19:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:19:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:02.407688+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:02.407688+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:02.407775+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:02.407775+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:02.408386+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:02.408386+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: cluster 2026-03-10T10:19:02.411824+0000 mgr.y (mgr.24422) 224 : cluster [DBG] pgmap v292: 328 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 4.4 MiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: cluster 2026-03-10T10:19:02.411824+0000 mgr.y (mgr.24422) 224 : cluster [DBG] pgmap v292: 328 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 4.4 MiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: cluster 2026-03-10T10:19:02.419545+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: cluster 2026-03-10T10:19:02.419545+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:02.423083+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:02.423083+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:02.616502+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:02.616502+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:03.424024+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: audit 2026-03-10T10:19:03.424024+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: cluster 2026-03-10T10:19:03.457631+0000 mon.a (mon.0) 1892 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T10:19:03.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:03 vm07 bash[23367]: cluster 2026-03-10T10:19:03.457631+0000 mon.a (mon.0) 1892 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:02.407688+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:02.407688+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:02.407775+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:02.407775+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:02.408386+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:02.408386+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 
192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: cluster 2026-03-10T10:19:02.411824+0000 mgr.y (mgr.24422) 224 : cluster [DBG] pgmap v292: 328 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 4.4 MiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: cluster 2026-03-10T10:19:02.411824+0000 mgr.y (mgr.24422) 224 : cluster [DBG] pgmap v292: 328 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 4.4 MiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: cluster 2026-03-10T10:19:02.419545+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: cluster 2026-03-10T10:19:02.419545+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:02.423083+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:02.423083+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:02.616502+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:02.616502+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:03.424024+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: audit 2026-03-10T10:19:03.424024+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: cluster 2026-03-10T10:19:03.457631+0000 mon.a (mon.0) 1892 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:03 vm04 bash[28289]: cluster 2026-03-10T10:19:03.457631+0000 mon.a (mon.0) 1892 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:02.407688+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:02.407688+0000 mon.a (mon.0) 1887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStat_vm04-59252-38", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:02.407775+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:02.407775+0000 mon.a (mon.0) 1888 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:02.408386+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:02.408386+0000 mon.b (mon.1) 190 : audit [INF] from='client.? 
192.168.123.104:0/2098015021' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: cluster 2026-03-10T10:19:02.411824+0000 mgr.y (mgr.24422) 224 : cluster [DBG] pgmap v292: 328 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 4.4 MiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: cluster 2026-03-10T10:19:02.411824+0000 mgr.y (mgr.24422) 224 : cluster [DBG] pgmap v292: 328 pgs: 8 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 4.4 MiB data, 677 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1024 KiB/s wr, 1 op/s 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: cluster 2026-03-10T10:19:02.419545+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: cluster 2026-03-10T10:19:02.419545+0000 mon.a (mon.0) 1889 : cluster [DBG] osdmap e223: 8 total, 8 up, 8 in 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:02.423083+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:02.423083+0000 mon.a (mon.0) 1890 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]: dispatch 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:02.616502+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:02.616502+0000 mon.c (mon.2) 333 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:03.424024+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: audit 2026-03-10T10:19:03.424024+0000 mon.a (mon.0) 1891 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleWritePP_vm04-59259-47"}]': finished 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: cluster 2026-03-10T10:19:03.457631+0000 mon.a (mon.0) 1892 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T10:19:03.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:03 vm04 bash[20742]: cluster 2026-03-10T10:19:03.457631+0000 mon.a (mon.0) 1892 : cluster [DBG] osdmap e224: 8 total, 8 up, 8 in 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:04 vm04 bash[28289]: audit 2026-03-10T10:19:03.468016+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:04 vm04 bash[28289]: audit 2026-03-10T10:19:03.468016+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:04 vm04 bash[28289]: audit 2026-03-10T10:19:03.469480+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:04 vm04 bash[28289]: audit 2026-03-10T10:19:03.469480+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:04 vm04 bash[28289]: audit 2026-03-10T10:19:03.469850+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:04 vm04 bash[28289]: audit 2026-03-10T10:19:03.469850+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:04 vm04 bash[28289]: audit 2026-03-10T10:19:03.617361+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:04 vm04 bash[28289]: audit 2026-03-10T10:19:03.617361+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:04 vm04 bash[20742]: audit 2026-03-10T10:19:03.468016+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? 
192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:04 vm04 bash[20742]: audit 2026-03-10T10:19:03.468016+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:04 vm04 bash[20742]: audit 2026-03-10T10:19:03.469480+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:04 vm04 bash[20742]: audit 2026-03-10T10:19:03.469480+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:04 vm04 bash[20742]: audit 2026-03-10T10:19:03.469850+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:04 vm04 bash[20742]: audit 2026-03-10T10:19:03.469850+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:04 vm04 bash[20742]: audit 2026-03-10T10:19:03.617361+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:04 vm04 bash[20742]: audit 2026-03-10T10:19:03.617361+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:04 vm07 bash[23367]: audit 2026-03-10T10:19:03.468016+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:04 vm07 bash[23367]: audit 2026-03-10T10:19:03.468016+0000 mon.a (mon.0) 1893 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:04 vm07 bash[23367]: audit 2026-03-10T10:19:03.469480+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? 
192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:04 vm07 bash[23367]: audit 2026-03-10T10:19:03.469480+0000 mon.a (mon.0) 1894 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:04 vm07 bash[23367]: audit 2026-03-10T10:19:03.469850+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:04 vm07 bash[23367]: audit 2026-03-10T10:19:03.469850+0000 mon.a (mon.0) 1895 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:04 vm07 bash[23367]: audit 2026-03-10T10:19:03.617361+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:04 vm07 bash[23367]: audit 2026-03-10T10:19:03.617361+0000 mon.c (mon.2) 334 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: cluster 2026-03-10T10:19:04.412639+0000 mgr.y (mgr.24422) 225 : cluster [DBG] pgmap v294: 328 pgs: 3 creating+peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 254 B/s wr, 23 op/s 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: cluster 2026-03-10T10:19:04.412639+0000 mgr.y (mgr.24422) 225 : cluster [DBG] pgmap v294: 328 pgs: 3 creating+peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 254 B/s wr, 23 op/s 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:04.525507+0000 mon.a (mon.0) 1896 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:04.525507+0000 mon.a (mon.0) 1896 : audit [INF] from='client.? 
192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: cluster 2026-03-10T10:19:04.532613+0000 mon.a (mon.0) 1897 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: cluster 2026-03-10T10:19:04.532613+0000 mon.a (mon.0) 1897 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:04.534195+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:04.534195+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:04.538305+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:04.538305+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:04.538514+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:04.538514+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:04.618195+0000 mon.c (mon.2) 336 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:04.618195+0000 mon.c (mon.2) 336 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:05.529191+0000 mon.a (mon.0) 1900 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:05.529191+0000 mon.a (mon.0) 1900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:05.535256+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:05.535256+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: cluster 2026-03-10T10:19:05.537710+0000 mon.a (mon.0) 1901 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: cluster 2026-03-10T10:19:05.537710+0000 mon.a (mon.0) 1901 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:05.544175+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:05 vm04 bash[28289]: audit 2026-03-10T10:19:05.544175+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: cluster 2026-03-10T10:19:04.412639+0000 mgr.y (mgr.24422) 225 : cluster [DBG] pgmap v294: 328 pgs: 3 creating+peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 254 B/s wr, 23 op/s 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: cluster 2026-03-10T10:19:04.412639+0000 mgr.y (mgr.24422) 225 : cluster [DBG] pgmap v294: 328 pgs: 3 creating+peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 254 B/s wr, 23 op/s 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:04.525507+0000 mon.a (mon.0) 1896 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:04.525507+0000 mon.a (mon.0) 1896 : audit [INF] from='client.? 
192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: cluster 2026-03-10T10:19:04.532613+0000 mon.a (mon.0) 1897 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: cluster 2026-03-10T10:19:04.532613+0000 mon.a (mon.0) 1897 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:04.534195+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:04.534195+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:04.538305+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:04.538305+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:04.538514+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:04.538514+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:04.618195+0000 mon.c (mon.2) 336 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:04.618195+0000 mon.c (mon.2) 336 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:05.529191+0000 mon.a (mon.0) 1900 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:05.529191+0000 mon.a (mon.0) 1900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:05.535256+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:05.535256+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: cluster 2026-03-10T10:19:05.537710+0000 mon.a (mon.0) 1901 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: cluster 2026-03-10T10:19:05.537710+0000 mon.a (mon.0) 1901 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:05.544175+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:05 vm04 bash[20742]: audit 2026-03-10T10:19:05.544175+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: cluster 2026-03-10T10:19:04.412639+0000 mgr.y (mgr.24422) 225 : cluster [DBG] pgmap v294: 328 pgs: 3 creating+peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 254 B/s wr, 23 op/s 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: cluster 2026-03-10T10:19:04.412639+0000 mgr.y (mgr.24422) 225 : cluster [DBG] pgmap v294: 328 pgs: 3 creating+peering, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 254 B/s wr, 23 op/s 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:04.525507+0000 mon.a (mon.0) 1896 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:04.525507+0000 mon.a (mon.0) 1896 : audit [INF] from='client.? 
192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-WaitForSafePP_vm04-59259-48", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: cluster 2026-03-10T10:19:04.532613+0000 mon.a (mon.0) 1897 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: cluster 2026-03-10T10:19:04.532613+0000 mon.a (mon.0) 1897 : cluster [DBG] osdmap e225: 8 total, 8 up, 8 in 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:04.534195+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:04.534195+0000 mon.c (mon.2) 335 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:04.538305+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:04.538305+0000 mon.a (mon.0) 1898 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:04.538514+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:04.538514+0000 mon.a (mon.0) 1899 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:04.618195+0000 mon.c (mon.2) 336 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:04.618195+0000 mon.c (mon.2) 336 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:05.529191+0000 mon.a (mon.0) 1900 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:05.529191+0000 mon.a (mon.0) 1900 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:05.535256+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:05.535256+0000 mon.c (mon.2) 337 : audit [INF] from='client.? 192.168.123.104:0/71233616' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: cluster 2026-03-10T10:19:05.537710+0000 mon.a (mon.0) 1901 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: cluster 2026-03-10T10:19:05.537710+0000 mon.a (mon.0) 1901 : cluster [DBG] osdmap e226: 8 total, 8 up, 8 in 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:05.544175+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:05 vm07 bash[23367]: audit 2026-03-10T10:19:05.544175+0000 mon.a (mon.0) 1902 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]: dispatch 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:05.619102+0000 mon.c (mon.2) 338 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:05.619102+0000 mon.c (mon.2) 338 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:05.801951+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:05.801951+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.413838+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.413838+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.532895+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]': finished 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.532895+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]': finished 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.533285+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.533285+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.533434+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.533434+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.533491+0000 mon.a (mon.0) 1908 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.533491+0000 mon.a (mon.0) 1908 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-10T10:19:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: cluster 2026-03-10T10:19:06.547623+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: cluster 2026-03-10T10:19:06.547623+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.560427+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]: dispatch 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:06 vm04 bash[28289]: audit 2026-03-10T10:19:06.560427+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]: dispatch 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:05.619102+0000 mon.c (mon.2) 338 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:05.619102+0000 mon.c (mon.2) 338 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:05.801951+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:05.801951+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.413838+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.413838+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.532895+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]': finished 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.532895+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]': finished 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.533285+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.533285+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.533434+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.533434+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.533491+0000 mon.a (mon.0) 1908 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.533491+0000 mon.a (mon.0) 1908 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: cluster 2026-03-10T10:19:06.547623+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: cluster 2026-03-10T10:19:06.547623+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.560427+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]: dispatch 2026-03-10T10:19:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:06 vm04 bash[20742]: audit 2026-03-10T10:19:06.560427+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]: dispatch 2026-03-10T10:19:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:05.619102+0000 mon.c (mon.2) 338 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:05.619102+0000 mon.c (mon.2) 338 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:05.801951+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:05.801951+0000 mon.a (mon.0) 1903 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.413838+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.413838+0000 mon.a (mon.0) 1904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.532895+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]': finished 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.532895+0000 mon.a (mon.0) 1905 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "WaitForSafePP_vm04-59259-48", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-WaitForSafePP_vm04-59259-48"}]': finished 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.533285+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.533285+0000 mon.a (mon.0) 1906 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStat_vm04-59252-38"}]': finished 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.533434+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.533434+0000 mon.a (mon.0) 1907 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.533491+0000 mon.a (mon.0) 1908 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.533491+0000 mon.a (mon.0) 1908 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "27"}]': finished 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: cluster 2026-03-10T10:19:06.547623+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: cluster 2026-03-10T10:19:06.547623+0000 mon.a (mon.0) 1909 : cluster [DBG] osdmap e227: 8 total, 8 up, 8 in 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.560427+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]: dispatch 2026-03-10T10:19:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:06 vm07 bash[23367]: audit 2026-03-10T10:19:06.560427+0000 mon.a (mon.0) 1910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: cluster 2026-03-10T10:19:06.413067+0000 mgr.y (mgr.24422) 226 : cluster [DBG] pgmap v297: 320 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 23 op/s 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: cluster 2026-03-10T10:19:06.413067+0000 mgr.y (mgr.24422) 226 : cluster [DBG] pgmap v297: 320 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 23 op/s 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: cluster 2026-03-10T10:19:06.561578+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: cluster 2026-03-10T10:19:06.561578+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.563836+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 
192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.563836+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.575865+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.575865+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.576457+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.576457+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.588085+0000 mon.a (mon.0) 1913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.588085+0000 mon.a (mon.0) 1913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.606397+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.606397+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.609377+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.609377+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.620148+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:08.342 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:08 vm07 bash[23367]: audit 2026-03-10T10:19:06.620148+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: cluster 2026-03-10T10:19:06.413067+0000 mgr.y (mgr.24422) 226 : cluster [DBG] pgmap v297: 320 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 23 op/s 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: cluster 2026-03-10T10:19:06.413067+0000 mgr.y (mgr.24422) 226 : cluster [DBG] pgmap v297: 320 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 23 op/s 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: cluster 2026-03-10T10:19:06.561578+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: cluster 2026-03-10T10:19:06.561578+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.563836+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.563836+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.575865+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.575865+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 
192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.576457+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.576457+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.588085+0000 mon.a (mon.0) 1913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.588085+0000 mon.a (mon.0) 1913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.606397+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.606397+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.609377+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.609377+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.620148+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:08 vm04 bash[28289]: audit 2026-03-10T10:19:06.620148+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: cluster 2026-03-10T10:19:06.413067+0000 mgr.y (mgr.24422) 226 : cluster [DBG] pgmap v297: 320 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 23 op/s 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: cluster 2026-03-10T10:19:06.413067+0000 mgr.y (mgr.24422) 226 : cluster [DBG] pgmap v297: 320 pgs: 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 305 active+clean; 8.4 MiB data, 702 MiB used, 159 GiB / 160 GiB avail; 12 KiB/s rd, 0 B/s wr, 23 op/s 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: cluster 2026-03-10T10:19:06.561578+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: cluster 2026-03-10T10:19:06.561578+0000 mon.a (mon.0) 1911 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.563836+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.563836+0000 mon.b (mon.1) 191 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.575865+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.575865+0000 mon.b (mon.1) 192 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.576457+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.576457+0000 mon.a (mon.0) 1912 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.588085+0000 mon.a (mon.0) 1913 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.588085+0000 mon.a (mon.0) 1913 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.606397+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.606397+0000 mon.b (mon.1) 193 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.609377+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.609377+0000 mon.a (mon.0) 1914 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.620148+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:08 vm04 bash[20742]: audit 2026-03-10T10:19:06.620148+0000 mon.c (mon.2) 339 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:08.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:19:08 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: cluster 2026-03-10T10:19:07.533080+0000 mon.a (mon.0) 1915 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: cluster 2026-03-10T10:19:07.533080+0000 mon.a (mon.0) 1915 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:07.621085+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:07.621085+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:07.851631+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]': finished 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:07.851631+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]': finished 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:07.851746+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:07.851746+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: cluster 2026-03-10T10:19:07.882172+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: cluster 2026-03-10T10:19:07.882172+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-10T10:19:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:07.889091+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:07.889091+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:07.949498+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:07.949498+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:08.414557+0000 mon.a (mon.0) 1920 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:08.414557+0000 mon.a (mon.0) 1920 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:08.622125+0000 mon.c (mon.2) 341 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:08.622125+0000 mon.c (mon.2) 341 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:08.863076+0000 mon.a (mon.0) 1921 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:08.863076+0000 mon.a (mon.0) 1921 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: cluster 2026-03-10T10:19:08.899958+0000 mon.a (mon.0) 1922 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: cluster 2026-03-10T10:19:08.899958+0000 mon.a (mon.0) 1922 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:08.901289+0000 mon.a (mon.0) 1923 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:09 vm04 bash[28289]: audit 2026-03-10T10:19:08.901289+0000 mon.a (mon.0) 1923 : audit [INF] from='client.? 
192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: cluster 2026-03-10T10:19:07.533080+0000 mon.a (mon.0) 1915 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: cluster 2026-03-10T10:19:07.533080+0000 mon.a (mon.0) 1915 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:07.621085+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:07.621085+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:07.851631+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]': finished 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:07.851631+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]': finished 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:07.851746+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:07.851746+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: cluster 2026-03-10T10:19:07.882172+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: cluster 2026-03-10T10:19:07.882172+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:07.889091+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 
192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:07.889091+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:07.949498+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:07.949498+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:08.414557+0000 mon.a (mon.0) 1920 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:08.414557+0000 mon.a (mon.0) 1920 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:08.622125+0000 mon.c (mon.2) 341 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:08.622125+0000 mon.c (mon.2) 341 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:08.863076+0000 mon.a (mon.0) 1921 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:08.863076+0000 mon.a (mon.0) 1921 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: cluster 2026-03-10T10:19:08.899958+0000 mon.a (mon.0) 1922 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-10T10:19:09.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: cluster 2026-03-10T10:19:08.899958+0000 mon.a (mon.0) 1922 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-10T10:19:09.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:08.901289+0000 mon.a (mon.0) 1923 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:09.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:09 vm04 bash[20742]: audit 2026-03-10T10:19:08.901289+0000 mon.a (mon.0) 1923 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: cluster 2026-03-10T10:19:07.533080+0000 mon.a (mon.0) 1915 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: cluster 2026-03-10T10:19:07.533080+0000 mon.a (mon.0) 1915 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:07.621085+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:07.621085+0000 mon.c (mon.2) 340 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:07.851631+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]': finished 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:07.851631+0000 mon.a (mon.0) 1916 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-29"}]': finished 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:07.851746+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:07.851746+0000 mon.a (mon.0) 1917 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatNS_vm04-59252-39", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: cluster 2026-03-10T10:19:07.882172+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: cluster 2026-03-10T10:19:07.882172+0000 mon.a (mon.0) 1918 : cluster [DBG] osdmap e228: 8 total, 8 up, 8 in 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:07.889091+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:07.889091+0000 mon.b (mon.1) 194 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:07.949498+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:07.949498+0000 mon.a (mon.0) 1919 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:08.414557+0000 mon.a (mon.0) 1920 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:08.414557+0000 mon.a (mon.0) 1920 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:08.622125+0000 mon.c (mon.2) 341 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:08.622125+0000 mon.c (mon.2) 341 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:08.863076+0000 mon.a (mon.0) 1921 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:08.863076+0000 mon.a (mon.0) 1921 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "27"}]': finished 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: cluster 2026-03-10T10:19:08.899958+0000 mon.a (mon.0) 1922 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: cluster 2026-03-10T10:19:08.899958+0000 mon.a (mon.0) 1922 : cluster [DBG] osdmap e229: 8 total, 8 up, 8 in 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:08.901289+0000 mon.a (mon.0) 1923 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:09 vm07 bash[23367]: audit 2026-03-10T10:19:08.901289+0000 mon.a (mon.0) 1923 : audit [INF] from='client.? 
192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:08.342360+0000 mgr.y (mgr.24422) 227 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:08.342360+0000 mgr.y (mgr.24422) 227 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: cluster 2026-03-10T10:19:08.413835+0000 mgr.y (mgr.24422) 228 : cluster [DBG] pgmap v300: 328 pgs: 4 unknown, 14 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 304 active+clean; 4.4 MiB data, 694 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 63 B/s, 0 objects/s recovering 2026-03-10T10:19:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: cluster 2026-03-10T10:19:08.413835+0000 mgr.y (mgr.24422) 228 : cluster [DBG] pgmap v300: 328 pgs: 4 unknown, 14 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 304 active+clean; 4.4 MiB data, 694 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 63 B/s, 0 objects/s recovering 2026-03-10T10:19:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:09.622974+0000 mon.c (mon.2) 342 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:09.622974+0000 mon.c (mon.2) 342 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:09.916600+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]': finished 2026-03-10T10:19:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:09.916600+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]': finished 2026-03-10T10:19:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:09.916732+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]': finished 2026-03-10T10:19:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:09.916732+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? 
192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]': finished 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: cluster 2026-03-10T10:19:09.957000+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: cluster 2026-03-10T10:19:09.957000+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:09.957619+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:09.957619+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:09.958061+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:10 vm04 bash[28289]: audit 2026-03-10T10:19:09.958061+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:08.342360+0000 mgr.y (mgr.24422) 227 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:08.342360+0000 mgr.y (mgr.24422) 227 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: cluster 2026-03-10T10:19:08.413835+0000 mgr.y (mgr.24422) 228 : cluster [DBG] pgmap v300: 328 pgs: 4 unknown, 14 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 304 active+clean; 4.4 MiB data, 694 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 63 B/s, 0 objects/s recovering 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: cluster 2026-03-10T10:19:08.413835+0000 mgr.y (mgr.24422) 228 : cluster [DBG] pgmap v300: 328 pgs: 4 unknown, 14 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 304 active+clean; 4.4 MiB data, 694 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 63 B/s, 0 objects/s recovering 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:09.622974+0000 mon.c (mon.2) 342 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:09.622974+0000 mon.c (mon.2) 342 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:09.916600+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]': finished 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:09.916600+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]': finished 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:09.916732+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]': finished 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:09.916732+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]': finished 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: cluster 2026-03-10T10:19:09.957000+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: cluster 2026-03-10T10:19:09.957000+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:09.957619+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:09.957619+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:09.958061+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-31","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:10 vm04 bash[20742]: audit 2026-03-10T10:19:09.958061+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 
2026-03-10T10:19:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:10 vm07 bash[23367]: audit 2026-03-10T10:19:08.342360+0000 mgr.y (mgr.24422) 227 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:19:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:10 vm07 bash[23367]: cluster 2026-03-10T10:19:08.413835+0000 mgr.y (mgr.24422) 228 : cluster [DBG] pgmap v300: 328 pgs: 4 unknown, 14 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 304 active+clean; 4.4 MiB data, 694 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 3 op/s; 63 B/s, 0 objects/s recovering
2026-03-10T10:19:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:10 vm07 bash[23367]: audit 2026-03-10T10:19:09.622974+0000 mon.c (mon.2) 342 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:10 vm07 bash[23367]: audit 2026-03-10T10:19:09.916600+0000 mon.a (mon.0) 1924 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatNS_vm04-59252-39", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatNS_vm04-59252-39"}]': finished
2026-03-10T10:19:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:10 vm07 bash[23367]: audit 2026-03-10T10:19:09.916732+0000 mon.a (mon.0) 1925 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-WaitForSafePP_vm04-59259-48"}]': finished
2026-03-10T10:19:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:10 vm07 bash[23367]: cluster 2026-03-10T10:19:09.957000+0000 mon.a (mon.0) 1926 : cluster [DBG] osdmap e230: 8 total, 8 up, 8 in
2026-03-10T10:19:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:10 vm07 bash[23367]: audit 2026-03-10T10:19:09.957619+0000 mon.a (mon.0) 1927 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]: dispatch
2026-03-10T10:19:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:10 vm07 bash[23367]: audit 2026-03-10T10:19:09.958061+0000 mon.a (mon.0) 1928 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-31","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:19:11.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: audit 2026-03-10T10:19:10.623731+0000 mon.c (mon.2) 343 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: audit 2026-03-10T10:19:10.972740+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]': finished
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: audit 2026-03-10T10:19:10.972952+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-31","app": "rados","yes_i_really_mean_it": true}]': finished
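Note: in this capture every journalctl line was emitted twice in immediate succession (same record body, same or near-identical teuthology timestamp), which roughly doubles the raw volume; combined with the three mon relay channels, one cluster-log record can appear up to six times. When post-processing a log like this, collapsing consecutive duplicates keyed on the channel plus the record body (everything after the `bash[pid]:` marker) is usually enough. A minimal sketch of such a filter (illustrative helper, not part of teuthology):

    import re

    # Matches teuthology-wrapped journalctl lines; group 1 is the channel
    # (e.g. ceph.mon.a.vm04.stdout), group 2 the cluster-log record body.
    REC = re.compile(r'^\S+ INFO:journalctl@(\S+?):.*?bash\[\d+\]: (.*)$')

    def dedup(lines):
        last = None  # (channel, body) of the previous journalctl record
        for line in lines:
            m = REC.match(line)
            key = m.groups() if m else None
            if key is None or key != last:
                yield line  # keep non-journalctl lines and first emissions
            last = key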
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: cluster 2026-03-10T10:19:10.979238+0000 mon.a (mon.0) 1931 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: audit 2026-03-10T10:19:10.984321+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: audit 2026-03-10T10:19:11.018314+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: audit 2026-03-10T10:19:11.021768+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: audit 2026-03-10T10:19:11.022221+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: audit 2026-03-10T10:19:11.023288+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: audit 2026-03-10T10:19:11.025159+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:11 vm04 bash[20742]: audit 2026-03-10T10:19:11.026112+0000 mon.a (mon.0) 1935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: audit 2026-03-10T10:19:10.623731+0000 mon.c (mon.2) 343 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: audit 2026-03-10T10:19:10.972740+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]': finished
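Note: the RoundTripPP teardown-and-recreate sequence above (profile rm, crush rule rm, profile set, then pool create) builds a k=2/m=1 erasure-coded pool: each object is split into 2 data chunks plus 1 coding chunk, so the pool tolerates the loss of any one chunk at 1.5x raw-space overhead and needs k+m=3 distinct OSDs per PG, which is why the profile sets crush-failure-domain=osd on this 8-OSD cluster. The same profile-set/pool-create pair can be issued through mon_command; a minimal sketch, assuming an already-connected rados.Rados handle named `cluster` as in the earlier sketch (the mon_cmd wrapper is an illustrative helper, not part of the test suite):

    import json

    def mon_cmd(cluster, **cmd):
        # Serialize the command dict exactly as the audit log shows it.
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        assert ret == 0, errs
        return out

    profile = "testprofile-RoundTripPP_vm04-59259-49"
    # audit 1935: k=2 data chunks, m=1 coding chunk, spread across OSDs.
    mon_cmd(cluster, prefix="osd erasure-code-profile set", name=profile,
            profile=["k=2", "m=1", "crush-failure-domain=osd"])
    # audit 1940: an 8-PG erasure pool backed by that profile.
    mon_cmd(cluster, prefix="osd pool create", pool="RoundTripPP_vm04-59259-49",
            pool_type="erasure", pg_num=8, pgp_num=8,
            erasure_code_profile=profile)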
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: audit 2026-03-10T10:19:10.972952+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-31","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: cluster 2026-03-10T10:19:10.979238+0000 mon.a (mon.0) 1931 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: audit 2026-03-10T10:19:10.984321+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: audit 2026-03-10T10:19:11.018314+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: audit 2026-03-10T10:19:11.021768+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: audit 2026-03-10T10:19:11.022221+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: audit 2026-03-10T10:19:11.023288+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: audit 2026-03-10T10:19:11.025159+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:11 vm04 bash[28289]: audit 2026-03-10T10:19:11.026112+0000 mon.a (mon.0) 1935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: audit 2026-03-10T10:19:10.623731+0000 mon.c (mon.2) 343 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: audit 2026-03-10T10:19:10.972740+0000 mon.a (mon.0) 1929 : audit [INF] from='client.? 192.168.123.104:0/2455555081' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"WaitForSafePP_vm04-59259-48"}]': finished
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: audit 2026-03-10T10:19:10.972952+0000 mon.a (mon.0) 1930 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-31","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: cluster 2026-03-10T10:19:10.979238+0000 mon.a (mon.0) 1931 : cluster [DBG] osdmap e231: 8 total, 8 up, 8 in
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: audit 2026-03-10T10:19:10.984321+0000 mon.a (mon.0) 1932 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: audit 2026-03-10T10:19:11.018314+0000 mon.b (mon.1) 195 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: audit 2026-03-10T10:19:11.021768+0000 mon.a (mon.0) 1933 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: audit 2026-03-10T10:19:11.022221+0000 mon.b (mon.1) 196 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: audit 2026-03-10T10:19:11.023288+0000 mon.b (mon.1) 197 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: audit 2026-03-10T10:19:11.025159+0000 mon.a (mon.0) 1934 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:11 vm07 bash[23367]: audit 2026-03-10T10:19:11.026112+0000 mon.a (mon.0) 1935 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: cluster 2026-03-10T10:19:10.414353+0000 mgr.y (mgr.24422) 229 : cluster [DBG] pgmap v303: 328 pgs: 40 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 273 active+clean; 8.4 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: cluster 2026-03-10T10:19:10.414353+0000 mgr.y (mgr.24422) 229 : cluster [DBG] pgmap v303: 328 pgs: 40 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 273 active+clean; 8.4 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:11.624921+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:11.624921+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.004444+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.004444+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.004574+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.004574+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.008621+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 
192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.008621+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.009547+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.009547+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: cluster 2026-03-10T10:19:12.010409+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: cluster 2026-03-10T10:19:12.010409+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.011237+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-31"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.011237+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-31"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.011519+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.011519+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.012341+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? 
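Note: audit entries 1932, 1936 and 1939 (with 1947/1951 below) trace a classic cache-tiering setup: "osd tier add" attaches test-rados-api-vm04-59491-31 as a tier of the base pool, with --force-nonempty overriding the guard against tiering onto a pool that already holds data; "osd tier set-overlay" then routes client I/O for the base pool through the tier; and "osd tier cache-mode ... writeback" makes the tier absorb writes and flush them to the base pool later. Driven through the same mon command channel, the sequence could look like this (minimal sketch, reusing the illustrative mon_cmd helper and connected `cluster` handle from the earlier sketches):

    base, cache = "test-rados-api-vm04-59491-6", "test-rados-api-vm04-59491-31"
    # audit 1932: attach the cache pool as a tier of the base pool.
    mon_cmd(cluster, prefix="osd tier add", pool=base, tierpool=cache,
            force_nonempty="--force-nonempty")
    # audit 1939: route client I/O for the base pool through the tier.
    mon_cmd(cluster, prefix="osd tier set-overlay", pool=base, overlaypool=cache)
    # audit 1947: absorb writes in the tier, flushing to the base pool later.
    mon_cmd(cluster, prefix="osd tier cache-mode", pool=cache, mode="writeback")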
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:12 vm04 bash[28289]: audit 2026-03-10T10:19:12.012341+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: cluster 2026-03-10T10:19:10.414353+0000 mgr.y (mgr.24422) 229 : cluster [DBG] pgmap v303: 328 pgs: 40 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 273 active+clean; 8.4 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: cluster 2026-03-10T10:19:10.414353+0000 mgr.y (mgr.24422) 229 : cluster [DBG] pgmap v303: 328 pgs: 40 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 273 active+clean; 8.4 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:11.624921+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:11.624921+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.004444+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.004444+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.004574+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.004574+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.008621+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 
192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.008621+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.009547+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.009547+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: cluster 2026-03-10T10:19:12.010409+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: cluster 2026-03-10T10:19:12.010409+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.011237+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-31"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.011237+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-31"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.011519+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.011519+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.012341+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:12 vm04 bash[20742]: audit 2026-03-10T10:19:12.012341+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: cluster 2026-03-10T10:19:10.414353+0000 mgr.y (mgr.24422) 229 : cluster [DBG] pgmap v303: 328 pgs: 40 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 273 active+clean; 8.4 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: cluster 2026-03-10T10:19:10.414353+0000 mgr.y (mgr.24422) 229 : cluster [DBG] pgmap v303: 328 pgs: 40 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 273 active+clean; 8.4 MiB data, 698 MiB used, 159 GiB / 160 GiB avail; 63 B/s, 0 objects/s recovering 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:11.624921+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:11.624921+0000 mon.c (mon.2) 344 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.004444+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.004444+0000 mon.a (mon.0) 1936 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.004574+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.004574+0000 mon.a (mon.0) 1937 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP_vm04-59259-49", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.008621+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 
192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.008621+0000 mon.b (mon.1) 198 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.009547+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.009547+0000 mon.b (mon.1) 199 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: cluster 2026-03-10T10:19:12.010409+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: cluster 2026-03-10T10:19:12.010409+0000 mon.a (mon.0) 1938 : cluster [DBG] osdmap e232: 8 total, 8 up, 8 in 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.011237+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-31"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.011237+0000 mon.a (mon.0) 1939 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-31"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.011519+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.011519+0000 mon.a (mon.0) 1940 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.012341+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:12 vm07 bash[23367]: audit 2026-03-10T10:19:12.012341+0000 mon.a (mon.0) 1941 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]: dispatch 2026-03-10T10:19:13.345 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:19:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:19:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:19:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:13 vm04 bash[28289]: audit 2026-03-10T10:19:12.626029+0000 mon.c (mon.2) 345 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:13 vm04 bash[28289]: audit 2026-03-10T10:19:12.626029+0000 mon.c (mon.2) 345 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:13 vm04 bash[28289]: audit 2026-03-10T10:19:12.812224+0000 mon.a (mon.0) 1942 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:13 vm04 bash[28289]: audit 2026-03-10T10:19:12.812224+0000 mon.a (mon.0) 1942 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:13.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:13 vm04 bash[20742]: audit 2026-03-10T10:19:12.626029+0000 mon.c (mon.2) 345 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:13.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:13 vm04 bash[20742]: audit 2026-03-10T10:19:12.626029+0000 mon.c (mon.2) 345 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:13.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:13 vm04 bash[20742]: audit 2026-03-10T10:19:12.812224+0000 mon.a (mon.0) 1942 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:13.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:13 vm04 bash[20742]: audit 2026-03-10T10:19:12.812224+0000 mon.a (mon.0) 1942 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:13.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:13 vm07 bash[23367]: audit 2026-03-10T10:19:12.626029+0000 mon.c (mon.2) 345 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:13.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:13 vm07 bash[23367]: audit 2026-03-10T10:19:12.626029+0000 mon.c (mon.2) 345 : audit [DBG] from='client.? 
2026-03-10T10:19:13.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:13 vm07 bash[23367]: audit 2026-03-10T10:19:12.812224+0000 mon.a (mon.0) 1942 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:19:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: cluster 2026-03-10T10:19:12.414684+0000 mgr.y (mgr.24422) 230 : cluster [DBG] pgmap v306: 319 pgs: 32 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 272 active+clean; 8.4 MiB data, 698 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:19:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: cluster 2026-03-10T10:19:13.163475+0000 mon.a (mon.0) 1943 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: audit 2026-03-10T10:19:13.182927+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-31"}]': finished
2026-03-10T10:19:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: audit 2026-03-10T10:19:13.183300+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]': finished
2026-03-10T10:19:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: audit 2026-03-10T10:19:13.330477+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch
2026-03-10T10:19:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: cluster 2026-03-10T10:19:13.338111+0000 mon.a (mon.0) 1946 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in
2026-03-10T10:19:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: audit 2026-03-10T10:19:13.340235+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-31", "mode": "writeback"}]: dispatch
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: audit 2026-03-10T10:19:13.340318+0000 mon.a (mon.0) 1948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: audit 2026-03-10T10:19:13.627123+0000 mon.c (mon.2) 346 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: cluster 2026-03-10T10:19:14.183632+0000 mon.a (mon.0) 1949 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: audit 2026-03-10T10:19:14.233286+0000 mon.a (mon.0) 1950 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]': finished
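Note: the CACHE_POOL_NO_HIT_SET warning at 10:19:14.183 follows directly from audit 1947/1951: the tier was just switched to writeback mode with no hit-set parameters configured, so the OSDs cannot track object access recency and the tiering agent has nothing to base flush/evict decisions on. On a real cluster the warning is cleared by giving the cache pool hit sets, for example (sketch, again via the illustrative mon_cmd helper; the values are typical examples, not taken from this run):

    # Track recent object access with bloom-filter hit sets: keep 8 sets,
    # rotating to a new one every 3600 seconds.
    for var, val in [("hit_set_type", "bloom"),
                     ("hit_set_count", "8"),
                     ("hit_set_period", "3600")]:
        mon_cmd(cluster, prefix="osd pool set",
                pool="test-rados-api-vm04-59491-31", var=var, val=val)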
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: audit 2026-03-10T10:19:14.233373+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-31", "mode": "writeback"}]': finished
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: audit 2026-03-10T10:19:14.233579+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]': finished
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:14 vm04 bash[28289]: cluster 2026-03-10T10:19:14.306989+0000 mon.a (mon.0) 1953 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: cluster 2026-03-10T10:19:12.414684+0000 mgr.y (mgr.24422) 230 : cluster [DBG] pgmap v306: 319 pgs: 32 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 272 active+clean; 8.4 MiB data, 698 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: cluster 2026-03-10T10:19:13.163475+0000 mon.a (mon.0) 1943 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: audit 2026-03-10T10:19:13.182927+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-31"}]': finished
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: audit 2026-03-10T10:19:13.183300+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]': finished
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: audit 2026-03-10T10:19:13.330477+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: cluster 2026-03-10T10:19:13.338111+0000 mon.a (mon.0) 1946 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: audit 2026-03-10T10:19:13.340235+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-31", "mode": "writeback"}]: dispatch
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: audit 2026-03-10T10:19:13.340318+0000 mon.a (mon.0) 1948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch
2026-03-10T10:19:14.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: audit 2026-03-10T10:19:13.627123+0000 mon.c (mon.2) 346 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:14.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: cluster 2026-03-10T10:19:14.183632+0000 mon.a (mon.0) 1949 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:19:14.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: audit 2026-03-10T10:19:14.233286+0000 mon.a (mon.0) 1950 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]': finished
2026-03-10T10:19:14.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: audit 2026-03-10T10:19:14.233373+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-31", "mode": "writeback"}]': finished
2026-03-10T10:19:14.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: audit 2026-03-10T10:19:14.233579+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]': finished
2026-03-10T10:19:14.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:14 vm04 bash[20742]: cluster 2026-03-10T10:19:14.306989+0000 mon.a (mon.0) 1953 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: cluster 2026-03-10T10:19:12.414684+0000 mgr.y (mgr.24422) 230 : cluster [DBG] pgmap v306: 319 pgs: 32 unknown, 9 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 272 active+clean; 8.4 MiB data, 698 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: cluster 2026-03-10T10:19:13.163475+0000 mon.a (mon.0) 1943 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: audit 2026-03-10T10:19:13.182927+0000 mon.a (mon.0) 1944 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-31"}]': finished
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: audit 2026-03-10T10:19:13.183300+0000 mon.a (mon.0) 1945 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatNS_vm04-59252-39"}]': finished
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: audit 2026-03-10T10:19:13.330477+0000 mon.b (mon.1) 200 : audit [INF] from='client.? 192.168.123.104:0/3495963715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: cluster 2026-03-10T10:19:13.338111+0000 mon.a (mon.0) 1946 : cluster [DBG] osdmap e233: 8 total, 8 up, 8 in
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: audit 2026-03-10T10:19:13.340235+0000 mon.a (mon.0) 1947 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-31", "mode": "writeback"}]: dispatch
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: audit 2026-03-10T10:19:13.340318+0000 mon.a (mon.0) 1948 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]: dispatch
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: audit 2026-03-10T10:19:13.627123+0000 mon.c (mon.2) 346 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: cluster 2026-03-10T10:19:14.183632+0000 mon.a (mon.0) 1949 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: audit 2026-03-10T10:19:14.233286+0000 mon.a (mon.0) 1950 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP_vm04-59259-49", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP_vm04-59259-49"}]': finished
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: audit 2026-03-10T10:19:14.233373+0000 mon.a (mon.0) 1951 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-31", "mode": "writeback"}]': finished
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: audit 2026-03-10T10:19:14.233579+0000 mon.a (mon.0) 1952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatNS_vm04-59252-39"}]': finished
2026-03-10T10:19:14.774 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:14 vm07 bash[23367]: cluster 2026-03-10T10:19:14.306989+0000 mon.a (mon.0) 1953 : cluster [DBG] osdmap e234: 8 total, 8 up, 8 in
2026-03-10T10:19:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:14.345660+0000 mon.c (mon.2) 347 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:14.358803+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:14.369756+0000 mon.c (mon.2) 348 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: cluster 2026-03-10T10:19:14.415087+0000 mgr.y (mgr.24422) 231 : cluster [DBG] pgmap v309: 327 pgs: 8 unknown, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:19:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:14.430071+0000 mon.a (mon.0) 1955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:14.430830+0000 mon.c (mon.2) 349 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm04-59252-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:14.431089+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm04-59252-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:14.468338+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:19:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:14.628002+0000 mon.c (mon.2) 350 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:15.290036+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm04-59252-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:15.290150+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: cluster 2026-03-10T10:19:15.294432+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:15.294925+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:15.296635+0000 mon.c (mon.2) 351 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm04-59252-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:15 vm04 bash[28289]: audit 2026-03-10T10:19:15.296896+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm04-59252-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:14.345660+0000 mon.c (mon.2) 347 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:14.358803+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:14.369756+0000 mon.c (mon.2) 348 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: cluster 2026-03-10T10:19:14.415087+0000 mgr.y (mgr.24422) 231 : cluster [DBG] pgmap v309: 327 pgs: 8 unknown, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:14.430071+0000 mon.a (mon.0) 1955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:14.430830+0000 mon.c (mon.2) 349 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm04-59252-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:14.431089+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm04-59252-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:14.468338+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:14.628002+0000 mon.c (mon.2) 350 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:15.290036+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm04-59252-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:15.290150+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: cluster 2026-03-10T10:19:15.294432+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:15.294925+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:15.296635+0000 mon.c (mon.2) 351 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm04-59252-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:15 vm04 bash[20742]: audit 2026-03-10T10:19:15.296896+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm04-59252-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:14.345660+0000 mon.c (mon.2) 347 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:14.358803+0000 mon.a (mon.0) 1954 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:14.369756+0000 mon.c (mon.2) 348 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: cluster 2026-03-10T10:19:14.415087+0000 mgr.y (mgr.24422) 231 : cluster [DBG] pgmap v309: 327 pgs: 8 unknown, 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:19:15.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:14.430071+0000 mon.a (mon.0) 1955 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:14.430830+0000 mon.c (mon.2) 349 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm04-59252-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:15.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:14.431089+0000 mon.a (mon.0) 1956 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm04-59252-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:15.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:14.468338+0000 mon.a (mon.0) 1957 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:19:15.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:14.628002+0000 mon.c (mon.2) 350 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:15.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:15.290036+0000 mon.a (mon.0) 1958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemove_vm04-59252-40", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:15.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:15.290150+0000 mon.a (mon.0) 1959 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:19:15.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: cluster 2026-03-10T10:19:15.294432+0000 mon.a (mon.0) 1960 : cluster [DBG] osdmap e235: 8 total, 8 up, 8 in
2026-03-10T10:19:15.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:15.294925+0000 mon.a (mon.0) 1961 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31"}]: dispatch
2026-03-10T10:19:15.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:15.296635+0000 mon.c (mon.2) 351 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm04-59252-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:15.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:15 vm07 bash[23367]: audit 2026-03-10T10:19:15.296896+0000 mon.a (mon.0) 1962 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemove_vm04-59252-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: audit 2026-03-10T10:19:15.628935+0000 mon.c (mon.2) 352 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: cluster 2026-03-10T10:19:16.290547+0000 mon.a (mon.0) 1963 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: audit 2026-03-10T10:19:16.312192+0000 mon.a (mon.0) 1964 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31"}]': finished
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: audit 2026-03-10T10:19:16.325251+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: cluster 2026-03-10T10:19:16.326324+0000 mon.a (mon.0) 1965 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: audit 2026-03-10T10:19:16.328287+0000 mon.a (mon.0) 1966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: audit 2026-03-10T10:19:16.416783+0000 mon.a (mon.0) 1967 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "26"}]: dispatch
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: audit 2026-03-10T10:19:16.487614+0000 mon.a (mon.0) 1968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm04-59252-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm04-59252-40"}]': finished
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: audit 2026-03-10T10:19:16.487709+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]': finished
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: audit 2026-03-10T10:19:16.487759+0000 mon.a (mon.0) 1970 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "26"}]': finished
2026-03-10T10:19:16.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:16 vm04 bash[28289]: cluster 2026-03-10T10:19:16.492071+0000 mon.a (mon.0) 1971 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in
2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:15.628935+0000 mon.c (mon.2) 352 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: cluster 2026-03-10T10:19:16.290547+0000 mon.a (mon.0) 1963 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.312192+0000 mon.a (mon.0) 1964 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31"}]': finished
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31"}]': finished 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.312192+0000 mon.a (mon.0) 1964 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-31"}]': finished 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.325251+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.325251+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: cluster 2026-03-10T10:19:16.326324+0000 mon.a (mon.0) 1965 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: cluster 2026-03-10T10:19:16.326324+0000 mon.a (mon.0) 1965 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.328287+0000 mon.a (mon.0) 1966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.328287+0000 mon.a (mon.0) 1966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.416783+0000 mon.a (mon.0) 1967 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.416783+0000 mon.a (mon.0) 1967 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "26"}]: dispatch 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.487614+0000 mon.a (mon.0) 1968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm04-59252-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm04-59252-40"}]': finished 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.487614+0000 mon.a (mon.0) 1968 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm04-59252-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm04-59252-40"}]': finished 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.487709+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]': finished 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.487709+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]': finished 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.487759+0000 mon.a (mon.0) 1970 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "26"}]': finished 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: audit 2026-03-10T10:19:16.487759+0000 mon.a (mon.0) 1970 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "26"}]': finished 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: cluster 2026-03-10T10:19:16.492071+0000 mon.a (mon.0) 1971 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T10:19:16.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:16 vm04 bash[20742]: cluster 2026-03-10T10:19:16.492071+0000 mon.a (mon.0) 1971 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in 2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: audit 2026-03-10T10:19:15.628935+0000 mon.c (mon.2) 352 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: audit 2026-03-10T10:19:15.628935+0000 mon.c (mon.2) 352 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: cluster 2026-03-10T10:19:16.290547+0000 mon.a (mon.0) 1963 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: cluster 2026-03-10T10:19:16.290547+0000 mon.a (mon.0) 1963 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: audit 2026-03-10T10:19:16.312192+0000 mon.a (mon.0) 1964 : audit [INF] from='client.? 
2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: audit 2026-03-10T10:19:16.325251+0000 mon.b (mon.1) 201 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: cluster 2026-03-10T10:19:16.326324+0000 mon.a (mon.0) 1965 : cluster [DBG] osdmap e236: 8 total, 8 up, 8 in
2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: audit 2026-03-10T10:19:16.328287+0000 mon.a (mon.0) 1966 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: audit 2026-03-10T10:19:16.416783+0000 mon.a (mon.0) 1967 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "26"}]: dispatch
2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: audit 2026-03-10T10:19:16.487614+0000 mon.a (mon.0) 1968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemove_vm04-59252-40", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemove_vm04-59252-40"}]': finished
2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: audit 2026-03-10T10:19:16.487709+0000 mon.a (mon.0) 1969 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP_vm04-59259-49"}]': finished
2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: audit 2026-03-10T10:19:16.487759+0000 mon.a (mon.0) 1970 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "26"}]': finished
2026-03-10T10:19:17.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:16 vm07 bash[23367]: cluster 2026-03-10T10:19:16.492071+0000 mon.a (mon.0) 1971 : cluster [DBG] osdmap e237: 8 total, 8 up, 8 in
2026-03-10T10:19:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:17 vm07 bash[23367]: cluster 2026-03-10T10:19:16.415804+0000 mgr.y (mgr.24422) 232 : cluster [DBG] pgmap v312: 319 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 4 op/s; 63 B/s, 0 objects/s recovering
2026-03-10T10:19:18.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:17 vm07 bash[23367]: audit 2026-03-10T10:19:16.492824+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:18.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:17 vm07 bash[23367]: audit 2026-03-10T10:19:16.498361+0000 mon.a (mon.0) 1972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:18.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:17 vm07 bash[23367]: audit 2026-03-10T10:19:16.629802+0000 mon.c (mon.2) 353 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:17 vm04 bash[28289]: cluster 2026-03-10T10:19:16.415804+0000 mgr.y (mgr.24422) 232 : cluster [DBG] pgmap v312: 319 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 4 op/s; 63 B/s, 0 objects/s recovering
2026-03-10T10:19:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:17 vm04 bash[28289]: audit 2026-03-10T10:19:16.492824+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:17 vm04 bash[28289]: audit 2026-03-10T10:19:16.498361+0000 mon.a (mon.0) 1972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:17 vm04 bash[28289]: audit 2026-03-10T10:19:16.629802+0000 mon.c (mon.2) 353 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:17 vm04 bash[20742]: cluster 2026-03-10T10:19:16.415804+0000 mgr.y (mgr.24422) 232 : cluster [DBG] pgmap v312: 319 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 310 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 4 op/s; 63 B/s, 0 objects/s recovering
2026-03-10T10:19:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:17 vm04 bash[20742]: audit 2026-03-10T10:19:16.492824+0000 mon.b (mon.1) 202 : audit [INF] from='client.? 192.168.123.104:0/556742174' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:17 vm04 bash[20742]: audit 2026-03-10T10:19:16.498361+0000 mon.a (mon.0) 1972 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]: dispatch
2026-03-10T10:19:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:17 vm04 bash[20742]: audit 2026-03-10T10:19:16.629802+0000 mon.c (mon.2) 353 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:18.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:19:18 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:19:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:18 vm04 bash[28289]: audit 2026-03-10T10:19:17.636263+0000 mon.c (mon.2) 354 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
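[Note: the recurring audit [DBG] entries carrying cmd=[{"prefix":"status","format":"json"}] show a client polling cluster status roughly once per second while the workunit runs, consistent with a test-harness health watcher. As an illustrative sketch only (these exact invocations are not captured in this log), the same mon command is what the ceph CLI dispatches from any node with an admin keyring:
    ceph status --format json   # sends {"prefix":"status","format":"json"} to a monitor
    ceph -s                     # human-readable rendering of the same status call
]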
2026-03-10T10:19:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:18 vm04 bash[28289]: audit 2026-03-10T10:19:17.730746+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]': finished
2026-03-10T10:19:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:18 vm04 bash[28289]: cluster 2026-03-10T10:19:17.751010+0000 mon.a (mon.0) 1974 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in
2026-03-10T10:19:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:18 vm04 bash[28289]: audit 2026-03-10T10:19:17.767676+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:18 vm04 bash[28289]: audit 2026-03-10T10:19:17.778892+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:18 vm04 bash[28289]: audit 2026-03-10T10:19:17.779476+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm04-59259-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:18 vm04 bash[28289]: audit 2026-03-10T10:19:17.781062+0000 mon.a (mon.0) 1975 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:18 vm04 bash[28289]: audit 2026-03-10T10:19:17.781733+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:18 vm04 bash[28289]: audit 2026-03-10T10:19:17.782293+0000 mon.a (mon.0) 1977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm04-59259-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:18 vm04 bash[28289]: audit 2026-03-10T10:19:18.637113+0000 mon.c (mon.2) 355 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:18 vm04 bash[20742]: audit 2026-03-10T10:19:17.636263+0000 mon.c (mon.2) 354 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:18 vm04 bash[20742]: audit 2026-03-10T10:19:17.730746+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]': finished
2026-03-10T10:19:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:18 vm04 bash[20742]: cluster 2026-03-10T10:19:17.751010+0000 mon.a (mon.0) 1974 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in
2026-03-10T10:19:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:18 vm04 bash[20742]: audit 2026-03-10T10:19:17.767676+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:18 vm04 bash[20742]: audit 2026-03-10T10:19:17.778892+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:18 vm04 bash[20742]: audit 2026-03-10T10:19:17.779476+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm04-59259-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:18 vm04 bash[20742]: audit 2026-03-10T10:19:17.781062+0000 mon.a (mon.0) 1975 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:18 vm04 bash[20742]: audit 2026-03-10T10:19:17.781733+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:18 vm04 bash[20742]: audit 2026-03-10T10:19:17.782293+0000 mon.a (mon.0) 1977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm04-59259-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:19.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:18 vm04 bash[20742]: audit 2026-03-10T10:19:18.637113+0000 mon.c (mon.2) 355 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:18 vm07 bash[23367]: audit 2026-03-10T10:19:17.636263+0000 mon.c (mon.2) 354 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:18 vm07 bash[23367]: audit 2026-03-10T10:19:17.730746+0000 mon.a (mon.0) 1973 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP_vm04-59259-49"}]': finished
2026-03-10T10:19:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:18 vm07 bash[23367]: cluster 2026-03-10T10:19:17.751010+0000 mon.a (mon.0) 1974 : cluster [DBG] osdmap e238: 8 total, 8 up, 8 in
2026-03-10T10:19:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:18 vm07 bash[23367]: audit 2026-03-10T10:19:17.767676+0000 mon.b (mon.1) 203 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:18 vm07 bash[23367]: audit 2026-03-10T10:19:17.778892+0000 mon.b (mon.1) 204 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:18 vm07 bash[23367]: audit 2026-03-10T10:19:17.779476+0000 mon.b (mon.1) 205 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm04-59259-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:18 vm07 bash[23367]: audit 2026-03-10T10:19:17.781062+0000 mon.a (mon.0) 1975 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:18 vm07 bash[23367]: audit 2026-03-10T10:19:17.781733+0000 mon.a (mon.0) 1976 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:18 vm07 bash[23367]: audit 2026-03-10T10:19:17.782293+0000 mon.a (mon.0) 1977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm04-59259-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:18 vm07 bash[23367]: audit 2026-03-10T10:19:18.637113+0000 mon.c (mon.2) 355 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: audit 2026-03-10T10:19:18.348517+0000 mgr.y (mgr.24422) 233 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: cluster 2026-03-10T10:19:18.416498+0000 mgr.y (mgr.24422) 234 : cluster [DBG] pgmap v315: 295 pgs: 2 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 287 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: cluster 2026-03-10T10:19:18.730608+0000 mon.a (mon.0) 1978 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: audit 2026-03-10T10:19:18.932867+0000 mon.a (mon.0) 1979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm04-59259-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
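[Note: the mon.b 203-205 and mon.a 1975-1977/1979 audit entries above trace a test case rebuilding its erasure-code profile: remove the stale profile and CRUSH rule, then set a profile with k=2, m=1, crush-failure-domain=osd; the pool create dispatched just below (mon.b 206 / mon.a 1983) consumes that profile. The test presumably drives this through librados mon commands; a hedged CLI equivalent, with the names taken from the log, would be:
    ceph osd erasure-code-profile rm testprofile-RoundTripPP2_vm04-59259-50
    ceph osd crush rule rm RoundTripPP2_vm04-59259-50
    ceph osd erasure-code-profile set testprofile-RoundTripPP2_vm04-59259-50 k=2 m=1 crush-failure-domain=osd
    ceph osd pool create RoundTripPP2_vm04-59259-50 8 8 erasure testprofile-RoundTripPP2_vm04-59259-50
]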
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: cluster 2026-03-10T10:19:18.969440+0000 mon.a (mon.0) 1980 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: audit 2026-03-10T10:19:18.979685+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: audit 2026-03-10T10:19:18.980802+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: audit 2026-03-10T10:19:18.981196+0000 mon.a (mon.0) 1982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: audit 2026-03-10T10:19:19.005287+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: audit 2026-03-10T10:19:19.008252+0000 mon.a (mon.0) 1983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:20 vm07 bash[23367]: audit 2026-03-10T10:19:19.637895+0000 mon.c (mon.2) 357 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: audit 2026-03-10T10:19:18.348517+0000 mgr.y (mgr.24422) 233 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:19:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: cluster 2026-03-10T10:19:18.416498+0000 mgr.y (mgr.24422) 234 : cluster [DBG] pgmap v315: 295 pgs: 2 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 287 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:19:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: cluster 2026-03-10T10:19:18.730608+0000 mon.a (mon.0) 1978 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: audit 2026-03-10T10:19:18.932867+0000 mon.a (mon.0) 1979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm04-59259-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: cluster 2026-03-10T10:19:18.969440+0000 mon.a (mon.0) 1980 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in
2026-03-10T10:19:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: audit 2026-03-10T10:19:18.979685+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: audit 2026-03-10T10:19:18.980802+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: audit 2026-03-10T10:19:18.981196+0000 mon.a (mon.0) 1982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: audit 2026-03-10T10:19:19.005287+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: audit 2026-03-10T10:19:19.008252+0000 mon.a (mon.0) 1983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:20 vm04 bash[28289]: audit 2026-03-10T10:19:19.637895+0000 mon.c (mon.2) 357 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: audit 2026-03-10T10:19:18.348517+0000 mgr.y (mgr.24422) 233 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: cluster 2026-03-10T10:19:18.416498+0000 mgr.y (mgr.24422) 234 : cluster [DBG] pgmap v315: 295 pgs: 2 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 287 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: cluster 2026-03-10T10:19:18.730608+0000 mon.a (mon.0) 1978 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: audit 2026-03-10T10:19:18.932867+0000 mon.a (mon.0) 1979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripPP2_vm04-59259-50", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: cluster 2026-03-10T10:19:18.969440+0000 mon.a (mon.0) 1980 : cluster [DBG] osdmap e239: 8 total, 8 up, 8 in
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: audit 2026-03-10T10:19:18.979685+0000 mon.a (mon.0) 1981 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-33","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: audit 2026-03-10T10:19:18.980802+0000 mon.c (mon.2) 356 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: audit 2026-03-10T10:19:18.981196+0000 mon.a (mon.0) 1982 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: audit 2026-03-10T10:19:19.005287+0000 mon.b (mon.1) 206 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: audit 2026-03-10T10:19:19.008252+0000 mon.a (mon.0) 1983 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch
2026-03-10T10:19:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:20 vm04 bash[20742]: audit 2026-03-10T10:19:19.637895+0000 mon.c (mon.2) 357 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.117424+0000 mon.a (mon.0) 1984 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-33","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.117510+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]': finished
2026-03-10T10:19:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.154248+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch
2026-03-10T10:19:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: cluster 2026-03-10T10:19:20.175422+0000 mon.a (mon.0) 1986 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in
2026-03-10T10:19:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.192918+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch 2026-03-10T10:19:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.192918+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch 2026-03-10T10:19:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.193880+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.193880+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.418022+0000 mon.a (mon.0) 1989 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.418022+0000 mon.a (mon.0) 1989 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.638825+0000 mon.c (mon.2) 359 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:21 vm04 bash[28289]: audit 2026-03-10T10:19:20.638825+0000 mon.c (mon.2) 359 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.117424+0000 mon.a (mon.0) 1984 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.117424+0000 mon.a (mon.0) 1984 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.117510+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]': finished 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.117510+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]': finished 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.154248+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.154248+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: cluster 2026-03-10T10:19:20.175422+0000 mon.a (mon.0) 1986 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: cluster 2026-03-10T10:19:20.175422+0000 mon.a (mon.0) 1986 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.192918+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.192918+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.193880+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.193880+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.418022+0000 mon.a (mon.0) 1989 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.418022+0000 mon.a (mon.0) 1989 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.638825+0000 mon.c (mon.2) 359 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:21.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:21 vm04 bash[20742]: audit 2026-03-10T10:19:20.638825+0000 mon.c (mon.2) 359 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.117424+0000 mon.a (mon.0) 1984 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.117424+0000 mon.a (mon.0) 1984 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-33","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.117510+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]': finished 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.117510+0000 mon.a (mon.0) 1985 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemove_vm04-59252-40"}]': finished 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.154248+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.154248+0000 mon.c (mon.2) 358 : audit [INF] from='client.? 
192.168.123.104:0/1540292210' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: cluster 2026-03-10T10:19:20.175422+0000 mon.a (mon.0) 1986 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: cluster 2026-03-10T10:19:20.175422+0000 mon.a (mon.0) 1986 : cluster [DBG] osdmap e240: 8 total, 8 up, 8 in 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.192918+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.192918+0000 mon.a (mon.0) 1987 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]: dispatch 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.193880+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.193880+0000 mon.a (mon.0) 1988 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.418022+0000 mon.a (mon.0) 1989 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.418022+0000 mon.a (mon.0) 1989 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]: dispatch 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.638825+0000 mon.c (mon.2) 359 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:21 vm07 bash[23367]: audit 2026-03-10T10:19:20.638825+0000 mon.c (mon.2) 359 : audit [DBG] from='client.? 
2026-03-10T10:19:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: cluster 2026-03-10T10:19:20.416981+0000 mgr.y (mgr.24422) 235 : cluster [DBG] pgmap v318: 319 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering
2026-03-10T10:19:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.253613+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]': finished
2026-03-10T10:19:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.253676+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]': finished
2026-03-10T10:19:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.253744+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:19:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.253766+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]': finished
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: cluster 2026-03-10T10:19:21.290695+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.295099+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-33"}]: dispatch
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.391819+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.392138+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.392958+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.393175+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.393933+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.394174+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:21.639636+0000 mon.c (mon.2) 363 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:22.336026+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-33"}]': finished
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:22.336158+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: cluster 2026-03-10T10:19:22.353305+0000 mon.a (mon.0) 2001 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in
2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:22 vm04 bash[28289]: audit 2026-03-10T10:19:22.353965+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]: dispatch
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]: dispatch 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: cluster 2026-03-10T10:19:20.416981+0000 mgr.y (mgr.24422) 235 : cluster [DBG] pgmap v318: 319 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: cluster 2026-03-10T10:19:20.416981+0000 mgr.y (mgr.24422) 235 : cluster [DBG] pgmap v318: 319 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.253613+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.253613+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.253676+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]': finished 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.253676+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]': finished 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.253744+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.253744+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.253766+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]': finished 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.253766+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]': finished 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: cluster 2026-03-10T10:19:21.290695+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: cluster 2026-03-10T10:19:21.290695+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.295099+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-33"}]: dispatch 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.295099+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-33"}]: dispatch 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.391819+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.391819+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.392138+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.392138+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.392958+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.392958+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.393175+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.393175+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.393933+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.393933+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.394174+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.394174+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.639636+0000 mon.c (mon.2) 363 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:21.639636+0000 mon.c (mon.2) 363 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:22.336026+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-33"}]': finished 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:22.336026+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-33"}]': finished 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:22.336158+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:22.336158+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: cluster 2026-03-10T10:19:22.353305+0000 mon.a (mon.0) 2001 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: cluster 2026-03-10T10:19:22.353305+0000 mon.a (mon.0) 2001 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:22.353965+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]: dispatch 2026-03-10T10:19:22.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:22 vm04 bash[20742]: audit 2026-03-10T10:19:22.353965+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]: dispatch 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: cluster 2026-03-10T10:19:20.416981+0000 mgr.y (mgr.24422) 235 : cluster [DBG] pgmap v318: 319 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: cluster 2026-03-10T10:19:20.416981+0000 mgr.y (mgr.24422) 235 : cluster [DBG] pgmap v318: 319 pgs: 32 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.253613+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.253613+0000 mon.a (mon.0) 1990 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripPP2_vm04-59259-50", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.253676+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]': finished 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.253676+0000 mon.a (mon.0) 1991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemove_vm04-59252-40"}]': finished 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.253744+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.253744+0000 mon.a (mon.0) 1992 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.253766+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]': finished 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.253766+0000 mon.a (mon.0) 1993 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "26"}]': finished 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: cluster 2026-03-10T10:19:21.290695+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: cluster 2026-03-10T10:19:21.290695+0000 mon.a (mon.0) 1994 : cluster [DBG] osdmap e241: 8 total, 8 up, 8 in 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.295099+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-33"}]: dispatch 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.295099+0000 mon.a (mon.0) 1995 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-33"}]: dispatch 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.391819+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.391819+0000 mon.c (mon.2) 360 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.392138+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.392138+0000 mon.a (mon.0) 1996 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.392958+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.392958+0000 mon.c (mon.2) 361 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.393175+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.393175+0000 mon.a (mon.0) 1997 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.393933+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.393933+0000 mon.c (mon.2) 362 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.394174+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.394174+0000 mon.a (mon.0) 1998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.639636+0000 mon.c (mon.2) 363 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:21.639636+0000 mon.c (mon.2) 363 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:22.336026+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-33"}]': finished 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:22.336026+0000 mon.a (mon.0) 1999 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-33"}]': finished 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:22.336158+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:22.336158+0000 mon.a (mon.0) 2000 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClass_vm04-59252-41", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: cluster 2026-03-10T10:19:22.353305+0000 mon.a (mon.0) 2001 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: cluster 2026-03-10T10:19:22.353305+0000 mon.a (mon.0) 2001 : cluster [DBG] osdmap e242: 8 total, 8 up, 8 in 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:22.353965+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]: dispatch 2026-03-10T10:19:22.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:22 vm07 bash[23367]: audit 2026-03-10T10:19:22.353965+0000 mon.a (mon.0) 2002 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]: dispatch 2026-03-10T10:19:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:19:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:19:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: audit 2026-03-10T10:19:22.358889+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: audit 2026-03-10T10:19:22.358889+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 
192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: audit 2026-03-10T10:19:22.394591+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: audit 2026-03-10T10:19:22.394591+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: cluster 2026-03-10T10:19:22.417436+0000 mgr.y (mgr.24422) 236 : cluster [DBG] pgmap v321: 327 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: cluster 2026-03-10T10:19:22.417436+0000 mgr.y (mgr.24422) 236 : cluster [DBG] pgmap v321: 327 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: audit 2026-03-10T10:19:22.640517+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: audit 2026-03-10T10:19:22.640517+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: audit 2026-03-10T10:19:23.013587+0000 mon.a (mon.0) 2004 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: audit 2026-03-10T10:19:23.013587+0000 mon.a (mon.0) 2004 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: cluster 2026-03-10T10:19:23.430202+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:23 vm04 bash[28289]: cluster 2026-03-10T10:19:23.430202+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: audit 2026-03-10T10:19:22.358889+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: audit 2026-03-10T10:19:22.358889+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: audit 2026-03-10T10:19:22.394591+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: audit 2026-03-10T10:19:22.394591+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: cluster 2026-03-10T10:19:22.417436+0000 mgr.y (mgr.24422) 236 : cluster [DBG] pgmap v321: 327 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: cluster 2026-03-10T10:19:22.417436+0000 mgr.y (mgr.24422) 236 : cluster [DBG] pgmap v321: 327 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: audit 2026-03-10T10:19:22.640517+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: audit 2026-03-10T10:19:22.640517+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: audit 2026-03-10T10:19:23.013587+0000 mon.a (mon.0) 2004 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: audit 2026-03-10T10:19:23.013587+0000 mon.a (mon.0) 2004 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: cluster 2026-03-10T10:19:23.430202+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:23 vm04 bash[20742]: cluster 2026-03-10T10:19:23.430202+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: audit 2026-03-10T10:19:22.358889+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: audit 2026-03-10T10:19:22.358889+0000 mon.c (mon.2) 364 : audit [INF] from='client.? 
192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: audit 2026-03-10T10:19:22.394591+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: audit 2026-03-10T10:19:22.394591+0000 mon.a (mon.0) 2003 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: cluster 2026-03-10T10:19:22.417436+0000 mgr.y (mgr.24422) 236 : cluster [DBG] pgmap v321: 327 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: cluster 2026-03-10T10:19:22.417436+0000 mgr.y (mgr.24422) 236 : cluster [DBG] pgmap v321: 327 pgs: 40 unknown, 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 281 active+clean; 8.4 MiB data, 699 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: audit 2026-03-10T10:19:22.640517+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: audit 2026-03-10T10:19:22.640517+0000 mon.c (mon.2) 365 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: audit 2026-03-10T10:19:23.013587+0000 mon.a (mon.0) 2004 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: audit 2026-03-10T10:19:23.013587+0000 mon.a (mon.0) 2004 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: cluster 2026-03-10T10:19:23.430202+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:23 vm07 bash[23367]: cluster 2026-03-10T10:19:23.430202+0000 mon.a (mon.0) 2005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.569987+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.569987+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.651606+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.651606+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.822720+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]': finished 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.822720+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]': finished 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.878359+0000 mon.a (mon.0) 2008 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.878359+0000 mon.a (mon.0) 2008 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.883992+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 
192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.883992+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: cluster 2026-03-10T10:19:23.885399+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: cluster 2026-03-10T10:19:23.885399+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T10:19:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.986005+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:23.986005+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.011983+0000 mon.a (mon.0) 2011 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.011983+0000 mon.a (mon.0) 2011 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.222768+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.222768+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.577381+0000 mon.a (mon.0) 2013 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.577381+0000 mon.a (mon.0) 2013 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.578902+0000 mon.a (mon.0) 2014 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.578902+0000 mon.a (mon.0) 2014 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.652367+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.652367+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.715789+0000 mon.a (mon.0) 2015 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.715789+0000 mon.a (mon.0) 2015 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.764685+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:24 vm04 bash[28289]: audit 2026-03-10T10:19:24.764685+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.569987+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.569987+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.651606+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.651606+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.822720+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]': finished 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.822720+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]': finished 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.878359+0000 mon.a (mon.0) 2008 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.878359+0000 mon.a (mon.0) 2008 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.883992+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.883992+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: cluster 2026-03-10T10:19:23.885399+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: cluster 2026-03-10T10:19:23.885399+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.986005+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:23.986005+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.011983+0000 mon.a (mon.0) 2011 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.011983+0000 mon.a (mon.0) 2011 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.222768+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.222768+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.577381+0000 mon.a (mon.0) 2013 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.577381+0000 mon.a (mon.0) 2013 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.578902+0000 mon.a (mon.0) 2014 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.578902+0000 mon.a (mon.0) 2014 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.652367+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.652367+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.715789+0000 mon.a (mon.0) 2015 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.715789+0000 mon.a (mon.0) 2015 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.764685+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:25.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:24 vm04 bash[20742]: audit 2026-03-10T10:19:24.764685+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.569987+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.569987+0000 mon.a (mon.0) 2006 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.651606+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.651606+0000 mon.c (mon.2) 366 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.822720+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]': finished 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.822720+0000 mon.a (mon.0) 2007 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-33", "mode": "writeback"}]': finished 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.878359+0000 mon.a (mon.0) 2008 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.878359+0000 mon.a (mon.0) 2008 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.883992+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.883992+0000 mon.b (mon.1) 207 : audit [INF] from='client.? 
192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: cluster 2026-03-10T10:19:23.885399+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: cluster 2026-03-10T10:19:23.885399+0000 mon.a (mon.0) 2009 : cluster [DBG] osdmap e243: 8 total, 8 up, 8 in 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.986005+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:23.986005+0000 mon.a (mon.0) 2010 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.011983+0000 mon.a (mon.0) 2011 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.011983+0000 mon.a (mon.0) 2011 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.222768+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.222768+0000 mon.a (mon.0) 2012 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.577381+0000 mon.a (mon.0) 2013 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.577381+0000 mon.a (mon.0) 2013 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.578902+0000 mon.a (mon.0) 2014 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.578902+0000 mon.a (mon.0) 2014 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.652367+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.652367+0000 mon.c (mon.2) 367 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.715789+0000 mon.a (mon.0) 2015 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.715789+0000 mon.a (mon.0) 2015 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.764685+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:25.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:24 vm07 bash[23367]: audit 2026-03-10T10:19:24.764685+0000 mon.a (mon.0) 2016 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: cluster 2026-03-10T10:19:24.417868+0000 mgr.y (mgr.24422) 237 : cluster [DBG] pgmap v323: 318 pgs: 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: cluster 2026-03-10T10:19:24.417868+0000 mgr.y (mgr.24422) 237 : cluster [DBG] pgmap v323: 318 pgs: 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.897040+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]': finished 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.897040+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]': finished 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.897186+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.897186+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.897328+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.897328+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.908291+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.908291+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: cluster 2026-03-10T10:19:24.923279+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: cluster 2026-03-10T10:19:24.923279+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.924337+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]: dispatch 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.924337+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]: dispatch 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.930487+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:24.930487+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: cluster 2026-03-10T10:19:25.217302+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: cluster 2026-03-10T10:19:25.217302+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:25.653328+0000 mon.c (mon.2) 368 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:26 vm04 bash[28289]: audit 2026-03-10T10:19:25.653328+0000 mon.c (mon.2) 368 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: cluster 2026-03-10T10:19:24.417868+0000 mgr.y (mgr.24422) 237 : cluster [DBG] pgmap v323: 318 pgs: 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: cluster 2026-03-10T10:19:24.417868+0000 mgr.y (mgr.24422) 237 : cluster [DBG] pgmap v323: 318 pgs: 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.897040+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]': finished 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.897040+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]': finished 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.897186+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.897186+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.897328+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.897328+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.908291+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.908291+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: cluster 2026-03-10T10:19:24.923279+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: cluster 2026-03-10T10:19:24.923279+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.924337+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]: dispatch 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.924337+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]: dispatch 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.930487+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:24.930487+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: cluster 2026-03-10T10:19:25.217302+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: cluster 2026-03-10T10:19:25.217302+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:25.653328+0000 mon.c (mon.2) 368 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:26 vm04 bash[20742]: audit 2026-03-10T10:19:25.653328+0000 mon.c (mon.2) 368 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: cluster 2026-03-10T10:19:24.417868+0000 mgr.y (mgr.24422) 237 : cluster [DBG] pgmap v323: 318 pgs: 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: cluster 2026-03-10T10:19:24.417868+0000 mgr.y (mgr.24422) 237 : cluster [DBG] pgmap v323: 318 pgs: 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.897040+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]': finished 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.897040+0000 mon.a (mon.0) 2017 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClass_vm04-59252-41", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClass_vm04-59252-41"}]': finished 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.897186+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.897186+0000 mon.a (mon.0) 2018 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.897328+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.897328+0000 mon.a (mon.0) 2019 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.908291+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.908291+0000 mon.b (mon.1) 208 : audit [INF] from='client.? 192.168.123.104:0/951205170' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: cluster 2026-03-10T10:19:24.923279+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: cluster 2026-03-10T10:19:24.923279+0000 mon.a (mon.0) 2020 : cluster [DBG] osdmap e244: 8 total, 8 up, 8 in 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.924337+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]: dispatch 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.924337+0000 mon.a (mon.0) 2021 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]: dispatch 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.930487+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:24.930487+0000 mon.a (mon.0) 2022 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]: dispatch 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: cluster 2026-03-10T10:19:25.217302+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: cluster 2026-03-10T10:19:25.217302+0000 mon.a (mon.0) 2023 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:25.653328+0000 mon.c (mon.2) 368 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:26 vm07 bash[23367]: audit 2026-03-10T10:19:25.653328+0000 mon.c (mon.2) 368 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:27 vm04 bash[28289]: cluster 2026-03-10T10:19:25.897359+0000 mon.a (mon.0) 2024 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:27 vm04 bash[28289]: cluster 2026-03-10T10:19:25.897359+0000 mon.a (mon.0) 2024 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:27 vm04 bash[28289]: audit 2026-03-10T10:19:25.935338+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]': finished 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:27 vm04 bash[28289]: audit 2026-03-10T10:19:25.935338+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]': finished 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:27 vm04 bash[28289]: audit 2026-03-10T10:19:25.935807+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:27 vm04 bash[28289]: audit 2026-03-10T10:19:25.935807+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:27 vm04 bash[28289]: cluster 2026-03-10T10:19:25.948211+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:27 vm04 bash[28289]: cluster 2026-03-10T10:19:25.948211+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:27 vm04 bash[28289]: audit 2026-03-10T10:19:26.654262+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:27 vm04 bash[28289]: audit 2026-03-10T10:19:26.654262+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:27 vm04 bash[20742]: cluster 2026-03-10T10:19:25.897359+0000 mon.a (mon.0) 2024 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:27 vm04 bash[20742]: cluster 2026-03-10T10:19:25.897359+0000 mon.a (mon.0) 2024 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:27 vm04 bash[20742]: audit 2026-03-10T10:19:25.935338+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]': finished 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:27 vm04 bash[20742]: audit 2026-03-10T10:19:25.935338+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]': finished 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:27 vm04 bash[20742]: audit 2026-03-10T10:19:25.935807+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:27 vm04 bash[20742]: audit 2026-03-10T10:19:25.935807+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:27 vm04 bash[20742]: cluster 2026-03-10T10:19:25.948211+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:27 vm04 bash[20742]: cluster 2026-03-10T10:19:25.948211+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:27 vm04 bash[20742]: audit 2026-03-10T10:19:26.654262+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:27.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:27 vm04 bash[20742]: audit 2026-03-10T10:19:26.654262+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:27 vm07 bash[23367]: cluster 2026-03-10T10:19:25.897359+0000 mon.a (mon.0) 2024 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:27 vm07 bash[23367]: cluster 2026-03-10T10:19:25.897359+0000 mon.a (mon.0) 2024 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:19:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:27 vm07 bash[23367]: audit 2026-03-10T10:19:25.935338+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]': finished 2026-03-10T10:19:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:27 vm07 bash[23367]: audit 2026-03-10T10:19:25.935338+0000 mon.a (mon.0) 2025 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-33"}]': finished 2026-03-10T10:19:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:27 vm07 bash[23367]: audit 2026-03-10T10:19:25.935807+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:27 vm07 bash[23367]: audit 2026-03-10T10:19:25.935807+0000 mon.a (mon.0) 2026 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripPP2_vm04-59259-50"}]': finished 2026-03-10T10:19:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:27 vm07 bash[23367]: cluster 2026-03-10T10:19:25.948211+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T10:19:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:27 vm07 bash[23367]: cluster 2026-03-10T10:19:25.948211+0000 mon.a (mon.0) 2027 : cluster [DBG] osdmap e245: 8 total, 8 up, 8 in 2026-03-10T10:19:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:27 vm07 bash[23367]: audit 2026-03-10T10:19:26.654262+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:27 vm07 bash[23367]: audit 2026-03-10T10:19:26.654262+0000 mon.c (mon.2) 369 : audit [DBG] from='client.? 
2026-03-10T10:19:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: cluster 2026-03-10T10:19:26.418357+0000 mgr.y (mgr.24422) 238 : cluster [DBG] pgmap v326: 326 pgs: 8 unknown, 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:19:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: cluster 2026-03-10T10:19:27.148433+0000 mon.a (mon.0) 2028 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in
2026-03-10T10:19:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:27.183358+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.104:0/3267522114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:19:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:27.183727+0000 mon.c (mon.2) 371 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch
2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:27.363719+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:27.364240+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch
2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:27.655159+0000 mon.c (mon.2) 372 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:27.821376+0000 mon.a (mon.0) 2031 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:28.148120+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:28.148193+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]': finished
2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: cluster 2026-03-10T10:19:28.158870+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in
2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:28.161728+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-35","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:28.166258+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch
2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:28 vm04 bash[28289]: audit 2026-03-10T10:19:28.170547+0000 mon.a (mon.0) 2036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: cluster 2026-03-10T10:19:26.418357+0000 mgr.y (mgr.24422) 238 : cluster [DBG] pgmap v326: 326 pgs: 8 unknown, 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: cluster 2026-03-10T10:19:26.418357+0000 mgr.y (mgr.24422) 238 : cluster [DBG] pgmap v326: 326 pgs: 8 unknown, 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: cluster 2026-03-10T10:19:27.148433+0000 mon.a (mon.0) 2028 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: cluster 2026-03-10T10:19:27.148433+0000 mon.a (mon.0) 2028 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.183358+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.104:0/3267522114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.183358+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.104:0/3267522114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.183727+0000 mon.c (mon.2) 371 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.183727+0000 mon.c (mon.2) 371 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.363719+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.363719+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.364240+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.364240+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.655159+0000 mon.c (mon.2) 372 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.655159+0000 mon.c (mon.2) 372 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.821376+0000 mon.a (mon.0) 2031 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:27.821376+0000 mon.a (mon.0) 2031 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:28.148120+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:28.148120+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:28.148193+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]': finished 2026-03-10T10:19:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:28.148193+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]': finished 2026-03-10T10:19:28.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: cluster 2026-03-10T10:19:28.158870+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T10:19:28.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: cluster 2026-03-10T10:19:28.158870+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T10:19:28.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:28.161728+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:28.161728+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:28.166258+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:28.166258+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:28.170547+0000 mon.a (mon.0) 2036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.455 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:28 vm04 bash[20742]: audit 2026-03-10T10:19:28.170547+0000 mon.a (mon.0) 2036 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.517 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:19:28 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: cluster 2026-03-10T10:19:26.418357+0000 mgr.y (mgr.24422) 238 : cluster [DBG] pgmap v326: 326 pgs: 8 unknown, 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: cluster 2026-03-10T10:19:26.418357+0000 mgr.y (mgr.24422) 238 : cluster [DBG] pgmap v326: 326 pgs: 8 unknown, 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: cluster 2026-03-10T10:19:27.148433+0000 mon.a (mon.0) 2028 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: cluster 2026-03-10T10:19:27.148433+0000 mon.a (mon.0) 2028 : cluster [DBG] osdmap e246: 8 total, 8 up, 8 in 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.183358+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.104:0/3267522114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.183358+0000 mon.c (mon.2) 370 : audit [INF] from='client.? 192.168.123.104:0/3267522114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.183727+0000 mon.c (mon.2) 371 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.183727+0000 mon.c (mon.2) 371 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.363719+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.363719+0000 mon.a (mon.0) 2029 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.364240+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.364240+0000 mon.a (mon.0) 2030 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.655159+0000 mon.c (mon.2) 372 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.655159+0000 mon.c (mon.2) 372 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.821376+0000 mon.a (mon.0) 2031 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:27.821376+0000 mon.a (mon.0) 2031 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:28.148120+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:28.148120+0000 mon.a (mon.0) 2032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripPP3_vm04-59259-51","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:28.148193+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]': finished 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:28.148193+0000 mon.a (mon.0) 2033 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClass_vm04-59252-41"}]': finished 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: cluster 2026-03-10T10:19:28.158870+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: cluster 2026-03-10T10:19:28.158870+0000 mon.a (mon.0) 2034 : cluster [DBG] osdmap e247: 8 total, 8 up, 8 in 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:28.161728+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:28.161728+0000 mon.a (mon.0) 2035 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-35","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:28.166258+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:28.166258+0000 mon.c (mon.2) 373 : audit [INF] from='client.? 192.168.123.104:0/3374078418' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:28.170547+0000 mon.a (mon.0) 2036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:28.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:28 vm07 bash[23367]: audit 2026-03-10T10:19:28.170547+0000 mon.a (mon.0) 2036 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]: dispatch 2026-03-10T10:19:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:29 vm04 bash[28289]: audit 2026-03-10T10:19:28.656110+0000 mon.c (mon.2) 374 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:29 vm04 bash[28289]: audit 2026-03-10T10:19:28.656110+0000 mon.c (mon.2) 374 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:29 vm04 bash[28289]: audit 2026-03-10T10:19:29.167132+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 
2026-03-10T10:19:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:29 vm04 bash[28289]: audit 2026-03-10T10:19:29.167132+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:29 vm04 bash[28289]: audit 2026-03-10T10:19:29.167198+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]': finished
2026-03-10T10:19:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:29 vm04 bash[28289]: cluster 2026-03-10T10:19:29.189983+0000 mon.a (mon.0) 2039 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in
2026-03-10T10:19:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:29 vm04 bash[28289]: audit 2026-03-10T10:19:29.192260+0000 mon.a (mon.0) 2040 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:29 vm04 bash[20742]: audit 2026-03-10T10:19:28.656110+0000 mon.c (mon.2) 374 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:29 vm04 bash[20742]: audit 2026-03-10T10:19:29.167132+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:29 vm04 bash[20742]: audit 2026-03-10T10:19:29.167198+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]': finished
2026-03-10T10:19:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:29 vm04 bash[20742]: cluster 2026-03-10T10:19:29.189983+0000 mon.a (mon.0) 2039 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in
2026-03-10T10:19:29.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:29 vm04 bash[20742]: audit 2026-03-10T10:19:29.192260+0000 mon.a (mon.0) 2040 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:29.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:29 vm07 bash[23367]: audit 2026-03-10T10:19:28.656110+0000 mon.c (mon.2) 374 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:29.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:29 vm07 bash[23367]: audit 2026-03-10T10:19:29.167132+0000 mon.a (mon.0) 2037 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-35","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:29.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:29 vm07 bash[23367]: audit 2026-03-10T10:19:29.167198+0000 mon.a (mon.0) 2038 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClass_vm04-59252-41"}]': finished
2026-03-10T10:19:29.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:29 vm07 bash[23367]: cluster 2026-03-10T10:19:29.189983+0000 mon.a (mon.0) 2039 : cluster [DBG] osdmap e248: 8 total, 8 up, 8 in
2026-03-10T10:19:29.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:29 vm07 bash[23367]: audit 2026-03-10T10:19:29.192260+0000 mon.a (mon.0) 2040 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:28.358658+0000 mgr.y (mgr.24422) 239 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:19:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: cluster 2026-03-10T10:19:28.419199+0000 mgr.y (mgr.24422) 240 : cluster [DBG] pgmap v329: 350 pgs: 19 creating+peering, 26 unknown, 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T10:19:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:29.258911+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]: dispatch
2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:29.262094+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]: dispatch
2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:29.263317+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm04-59252-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:29.264084+0000 mon.c (mon.2) 375 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:29.311790+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:29.344017+0000 mon.c (mon.2) 376 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:29.370807+0000 mon.a (mon.0) 2045 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:29.376062+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:29.376256+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:29.656982+0000 mon.c (mon.2) 378 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:30 vm04 bash[28289]: audit 2026-03-10T10:19:30.329279+0000 mon.a (mon.0) 2047 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35", "force_nonempty": "--force-nonempty" }]': finished
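The MultiWrite and RoundTripSparseReadPP entries above are per-test fixture churn: each erasure-code test case first removes any stale profile and its CRUSH rule, then recreates a k=2/m=1 profile with an OSD failure domain. The equivalent shell sequence, as a sketch with the names from this run:

  # recreate a test EC profile from a clean slate
  ceph osd erasure-code-profile rm testprofile-MultiWrite_vm04-59252-42
  ceph osd crush rule rm MultiWrite_vm04-59252-42
  ceph osd erasure-code-profile set testprofile-MultiWrite_vm04-59252-42 k=2 m=1 crush-failure-domain=osd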
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:28.358658+0000 mgr.y (mgr.24422) 239 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:28.358658+0000 mgr.y (mgr.24422) 239 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: cluster 2026-03-10T10:19:28.419199+0000 mgr.y (mgr.24422) 240 : cluster [DBG] pgmap v329: 350 pgs: 19 creating+peering, 26 unknown, 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: cluster 2026-03-10T10:19:28.419199+0000 mgr.y (mgr.24422) 240 : cluster [DBG] pgmap v329: 350 pgs: 19 creating+peering, 26 unknown, 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.258911+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.258911+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.262094+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.262094+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.263317+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm04-59252-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.263317+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? 
192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm04-59252-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.264084+0000 mon.c (mon.2) 375 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.264084+0000 mon.c (mon.2) 375 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.311790+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.311790+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.344017+0000 mon.c (mon.2) 376 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.344017+0000 mon.c (mon.2) 376 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.370807+0000 mon.a (mon.0) 2045 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.370807+0000 mon.a (mon.0) 2045 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.376062+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.376062+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 
192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.376256+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.376256+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.656982+0000 mon.c (mon.2) 378 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:29.656982+0000 mon.c (mon.2) 378 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:30.329279+0000 mon.a (mon.0) 2047 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:30.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:30 vm04 bash[20742]: audit 2026-03-10T10:19:30.329279+0000 mon.a (mon.0) 2047 : audit [INF] from='client.? 
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:28.358658+0000 mgr.y (mgr.24422) 239 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: cluster 2026-03-10T10:19:28.419199+0000 mgr.y (mgr.24422) 240 : cluster [DBG] pgmap v329: 350 pgs: 19 creating+peering, 26 unknown, 1 clean+premerge+peered, 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 4.4 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:29.258911+0000 mon.a (mon.0) 2041 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:29.262094+0000 mon.a (mon.0) 2042 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:29.263317+0000 mon.a (mon.0) 2043 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm04-59252-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:29.264084+0000 mon.c (mon.2) 375 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:29.311790+0000 mon.a (mon.0) 2044 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:29.344017+0000 mon.c (mon.2) 376 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:29.370807+0000 mon.a (mon.0) 2045 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:29.376062+0000 mon.c (mon.2) 377 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:29.376256+0000 mon.a (mon.0) 2046 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:29.656982+0000 mon.c (mon.2) 378 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:30 vm07 bash[23367]: audit 2026-03-10T10:19:30.329279+0000 mon.a (mon.0) 2047 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.329419+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm04-59252-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.329647+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.329647+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:30.419582+0000 mgr.y (mgr.24422) 241 : cluster [DBG] pgmap v331: 318 pgs: 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 28 B/s, 0 objects/s recovering 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:30.419582+0000 mgr.y (mgr.24422) 241 : cluster [DBG] pgmap v331: 318 pgs: 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 28 B/s, 0 objects/s recovering 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:30.495078+0000 mon.a (mon.0) 2050 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:30.495078+0000 mon.a (mon.0) 2050 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.538998+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.538998+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.539370+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.539370+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.596250+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 
192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.596250+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.670171+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.670171+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.670864+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:30.670864+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:31.329143+0000 mon.a (mon.0) 2054 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:31.329143+0000 mon.a (mon.0) 2054 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:31.372416+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]': finished 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:31.372416+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]': finished 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:31.384113+0000 mon.a (mon.0) 2056 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:31.384113+0000 mon.a (mon.0) 2056 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:31.384783+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:31.384783+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]: dispatch 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:31.433122+0000 mon.a (mon.0) 2058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:31.433122+0000 mon.a (mon.0) 2058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:31.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:31.454606+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:31.454606+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:31.454660+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:31.454660+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:31.454707+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: audit 2026-03-10T10:19:31.454707+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:31.476737+0000 mon.a (mon.0) 2062 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:31 vm04 bash[28289]: cluster 2026-03-10T10:19:31.476737+0000 mon.a (mon.0) 2062 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.329419+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm04-59252-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.329419+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm04-59252-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.329647+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.329647+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:30.419582+0000 mgr.y (mgr.24422) 241 : cluster [DBG] pgmap v331: 318 pgs: 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 28 B/s, 0 objects/s recovering 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:30.419582+0000 mgr.y (mgr.24422) 241 : cluster [DBG] pgmap v331: 318 pgs: 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 28 B/s, 0 objects/s recovering 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:30.495078+0000 mon.a (mon.0) 2050 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:30.495078+0000 mon.a (mon.0) 2050 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.538998+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.538998+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.539370+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.539370+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.596250+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.596250+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 
192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.670171+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.670171+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.670864+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:30.670864+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:31.329143+0000 mon.a (mon.0) 2054 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:31.329143+0000 mon.a (mon.0) 2054 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:31.372416+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:31.372416+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:31.384113+0000 mon.a (mon.0) 2056 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:31.384113+0000 mon.a (mon.0) 2056 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:31.384783+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:31.384783+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]: dispatch 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:31.433122+0000 mon.a (mon.0) 2058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:31.433122+0000 mon.a (mon.0) 2058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:31.454606+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:31.454606+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:31.454660+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:31.454660+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:31.454707+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: audit 2026-03-10T10:19:31.454707+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]': finished 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:31.476737+0000 mon.a (mon.0) 2062 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T10:19:31.955 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:31 vm04 bash[20742]: cluster 2026-03-10T10:19:31.476737+0000 mon.a (mon.0) 2062 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.329419+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm04-59252-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.329419+0000 mon.a (mon.0) 2048 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWrite_vm04-59252-42", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.329647+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.329647+0000 mon.a (mon.0) 2049 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:30.419582+0000 mgr.y (mgr.24422) 241 : cluster [DBG] pgmap v331: 318 pgs: 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 28 B/s, 0 objects/s recovering 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:30.419582+0000 mgr.y (mgr.24422) 241 : cluster [DBG] pgmap v331: 318 pgs: 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 28 B/s, 0 objects/s recovering 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:30.495078+0000 mon.a (mon.0) 2050 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:30.495078+0000 mon.a (mon.0) 2050 : cluster [DBG] osdmap e249: 8 total, 8 up, 8 in 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.538998+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.538998+0000 mon.a (mon.0) 2051 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.539370+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.539370+0000 mon.a (mon.0) 2052 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.596250+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.596250+0000 mon.c (mon.2) 379 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.670171+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.670171+0000 mon.c (mon.2) 380 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.670864+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:30.670864+0000 mon.a (mon.0) 2053 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:31.329143+0000 mon.a (mon.0) 2054 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:31.329143+0000 mon.a (mon.0) 2054 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:31.372416+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]': finished 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:31.372416+0000 mon.a (mon.0) 2055 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]': finished 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:31.384113+0000 mon.a (mon.0) 2056 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:31.384113+0000 mon.a (mon.0) 2056 : cluster [DBG] osdmap e250: 8 total, 8 up, 8 in 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:31.384783+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]: dispatch 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:31.384783+0000 mon.a (mon.0) 2057 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]: dispatch 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:31.433122+0000 mon.a (mon.0) 2058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:31.433122+0000 mon.a (mon.0) 2058 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:31.454606+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? 
192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]': finished 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:31.454606+0000 mon.a (mon.0) 2059 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWrite_vm04-59252-42", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWrite_vm04-59252-42"}]': finished 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:31.454660+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:31.454660+0000 mon.a (mon.0) 2060 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripSparseReadPP_vm04-59259-52", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:31.454707+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]': finished 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: audit 2026-03-10T10:19:31.454707+0000 mon.a (mon.0) 2061 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-35", "mode": "writeback"}]': finished 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:31.476737+0000 mon.a (mon.0) 2062 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T10:19:32.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:31 vm07 bash[23367]: cluster 2026-03-10T10:19:31.476737+0000 mon.a (mon.0) 2062 : cluster [DBG] osdmap e251: 8 total, 8 up, 8 in 2026-03-10T10:19:33.008 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:32 vm04 bash[28289]: audit 2026-03-10T10:19:31.671729+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.008 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:32 vm04 bash[28289]: audit 2026-03-10T10:19:31.671729+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.008 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:32 vm04 bash[28289]: cluster 2026-03-10T10:19:32.539258+0000 mon.a (mon.0) 2063 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T10:19:33.008 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:32 vm04 bash[28289]: cluster 2026-03-10T10:19:32.539258+0000 mon.a (mon.0) 2063 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T10:19:33.009 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:32 vm04 bash[28289]: audit 2026-03-10T10:19:32.672949+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.009 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:32 vm04 bash[28289]: audit 2026-03-10T10:19:32.672949+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:32 vm04 bash[20742]: audit 2026-03-10T10:19:31.671729+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:32 vm04 bash[20742]: audit 2026-03-10T10:19:31.671729+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:32 vm04 bash[20742]: cluster 2026-03-10T10:19:32.539258+0000 mon.a (mon.0) 2063 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T10:19:33.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:32 vm04 bash[20742]: cluster 2026-03-10T10:19:32.539258+0000 mon.a (mon.0) 2063 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T10:19:33.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:32 vm04 bash[20742]: audit 2026-03-10T10:19:32.672949+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.009 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:32 vm04 bash[20742]: audit 2026-03-10T10:19:32.672949+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:32 vm07 bash[23367]: audit 2026-03-10T10:19:31.671729+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:32 vm07 bash[23367]: audit 2026-03-10T10:19:31.671729+0000 mon.c (mon.2) 381 : audit [DBG] from='client.? 
192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:32 vm07 bash[23367]: cluster 2026-03-10T10:19:32.539258+0000 mon.a (mon.0) 2063 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T10:19:33.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:32 vm07 bash[23367]: cluster 2026-03-10T10:19:32.539258+0000 mon.a (mon.0) 2063 : cluster [DBG] osdmap e252: 8 total, 8 up, 8 in 2026-03-10T10:19:33.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:32 vm07 bash[23367]: audit 2026-03-10T10:19:32.672949+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:32 vm07 bash[23367]: audit 2026-03-10T10:19:32.672949+0000 mon.c (mon.2) 382 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:19:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:19:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: cluster 2026-03-10T10:19:32.420124+0000 mgr.y (mgr.24422) 242 : cluster [DBG] pgmap v335: 334 pgs: 16 unknown, 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: cluster 2026-03-10T10:19:32.420124+0000 mgr.y (mgr.24422) 242 : cluster [DBG] pgmap v335: 334 pgs: 16 unknown, 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: cluster 2026-03-10T10:19:33.621101+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: cluster 2026-03-10T10:19:33.621101+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: audit 2026-03-10T10:19:33.622372+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: audit 2026-03-10T10:19:33.622372+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: audit 2026-03-10T10:19:33.633412+0000 mon.c (mon.2) 383 : audit [INF] from='client.? 
192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: audit 2026-03-10T10:19:33.633412+0000 mon.c (mon.2) 383 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: audit 2026-03-10T10:19:33.633894+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: audit 2026-03-10T10:19:33.633894+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: audit 2026-03-10T10:19:33.674239+0000 mon.c (mon.2) 384 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:33 vm07 bash[23367]: audit 2026-03-10T10:19:33.674239+0000 mon.c (mon.2) 384 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:33 vm04 bash[28289]: cluster 2026-03-10T10:19:32.420124+0000 mgr.y (mgr.24422) 242 : cluster [DBG] pgmap v335: 334 pgs: 16 unknown, 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:33 vm04 bash[28289]: cluster 2026-03-10T10:19:32.420124+0000 mgr.y (mgr.24422) 242 : cluster [DBG] pgmap v335: 334 pgs: 16 unknown, 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering 2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:33 vm04 bash[28289]: cluster 2026-03-10T10:19:33.621101+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:33 vm04 bash[28289]: cluster 2026-03-10T10:19:33.621101+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in 2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:33 vm04 bash[28289]: audit 2026-03-10T10:19:33.622372+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:33 vm04 bash[28289]: audit 2026-03-10T10:19:33.622372+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 
192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]: dispatch
2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:33 vm04 bash[28289]: audit 2026-03-10T10:19:33.633412+0000 mon.c (mon.2) 383 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:33 vm04 bash[28289]: audit 2026-03-10T10:19:33.633894+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:33 vm04 bash[28289]: audit 2026-03-10T10:19:33.674239+0000 mon.c (mon.2) 384 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:33 vm04 bash[20742]: cluster 2026-03-10T10:19:32.420124+0000 mgr.y (mgr.24422) 242 : cluster [DBG] pgmap v335: 334 pgs: 16 unknown, 32 creating+peering, 286 active+clean; 4.4 MiB data, 693 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 1023 B/s wr, 2 op/s; 31 B/s, 0 objects/s recovering
2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:33 vm04 bash[20742]: cluster 2026-03-10T10:19:33.621101+0000 mon.a (mon.0) 2064 : cluster [DBG] osdmap e253: 8 total, 8 up, 8 in
2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:33 vm04 bash[20742]: audit 2026-03-10T10:19:33.622372+0000 mon.a (mon.0) 2065 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]: dispatch
2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:33 vm04 bash[20742]: audit 2026-03-10T10:19:33.633412+0000 mon.c (mon.2) 383 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:33 vm04 bash[20742]: audit 2026-03-10T10:19:33.633894+0000 mon.a (mon.0) 2066 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:34.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:33 vm04 bash[20742]: audit 2026-03-10T10:19:33.674239+0000 mon.c (mon.2) 384 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:33.735913+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:19:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:34.421545+0000 mon.a (mon.0) 2068 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "25"}]: dispatch
2026-03-10T10:19:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:34.675231+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]': finished
2026-03-10T10:19:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:34.675299+0000 mon.c (mon.2) 385 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:34.675306+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished
2026-03-10T10:19:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:34.675350+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:19:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:34.675391+0000 mon.a (mon.0) 2072 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "25"}]': finished
2026-03-10T10:19:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:34.705599+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: cluster 2026-03-10T10:19:34.706110+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in
2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:34.707225+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]: dispatch
2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:34.707398+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch
2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:34 vm04 bash[20742]: audit 2026-03-10T10:19:34.723206+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch
2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:33.735913+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.421545+0000 mon.a (mon.0) 2068 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "25"}]: dispatch
2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.675231+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]': finished
2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.675299+0000 mon.c (mon.2) 385 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.675306+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished
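The "osd tier remove-overlay" / "osd tier set-overlay" pairs audited above are the cache-tiering toggles the rados API test performs around its cache-pool cases. The cmd=[...] field of each audit record is the literal JSON the monitor dispatched; a minimal sketch of sending the same command through librados' mon_command() follows (the pool names are copied from the log, but this is an illustrative client, not the test's actual code):

    // Hedged sketch: issue the "osd tier set-overlay" mon command seen in
    // the audit records above via librados. Assumes a reachable cluster and
    // an admin keyring; error handling is minimal.
    #include <rados/librados.hpp>
    #include <iostream>
    #include <string>

    int main() {
      librados::Rados cluster;
      cluster.init2("client.admin", "ceph", 0);
      cluster.conf_read_file(nullptr);       // default ceph.conf search path
      if (cluster.connect() < 0) return 1;

      // "prefix" selects the mon command handler; the remaining keys are its
      // arguments -- exactly the shape the cmd=[...] audit field shows.
      std::string cmd = R"({"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"})";
      librados::bufferlist inbl, outbl;
      std::string outs;
      int r = cluster.mon_command(cmd, inbl, &outbl, &outs);
      std::cout << "r=" << r << " outs=" << outs << std::endl;
      cluster.shutdown();
    }

The dispatch/finished pairs in the log correspond to the mon receiving the command and the map change committing, which is why a "finished" entry can trail its "dispatch" by an osdmap epoch.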
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.675306+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.675350+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.675350+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.675391+0000 mon.a (mon.0) 2072 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "25"}]': finished 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.675391+0000 mon.a (mon.0) 2072 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "25"}]': finished 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.705599+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.705599+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: cluster 2026-03-10T10:19:34.706110+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: cluster 2026-03-10T10:19:34.706110+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.707225+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.707225+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? 
192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.707398+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.707398+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.723206+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:35.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:34 vm04 bash[28289]: audit 2026-03-10T10:19:34.723206+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:33.735913+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:33.735913+0000 mon.a (mon.0) 2067 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.421545+0000 mon.a (mon.0) 2068 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.421545+0000 mon.a (mon.0) 2068 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "25"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.675231+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]': finished 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.675231+0000 mon.a (mon.0) 2069 : audit [INF] from='client.? 
192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWrite_vm04-59252-42"}]': finished 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.675299+0000 mon.c (mon.2) 385 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.675299+0000 mon.c (mon.2) 385 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.675306+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.675306+0000 mon.a (mon.0) 2070 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripSparseReadPP_vm04-59259-52"}]': finished 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.675350+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.675350+0000 mon.a (mon.0) 2071 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.675391+0000 mon.a (mon.0) 2072 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "25"}]': finished 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.675391+0000 mon.a (mon.0) 2072 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "25"}]': finished 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.705599+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.705599+0000 mon.c (mon.2) 386 : audit [INF] from='client.? 
192.168.123.104:0/1594516978' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: cluster 2026-03-10T10:19:34.706110+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: cluster 2026-03-10T10:19:34.706110+0000 mon.a (mon.0) 2073 : cluster [DBG] osdmap e254: 8 total, 8 up, 8 in 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.707225+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.707225+0000 mon.a (mon.0) 2074 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.707398+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.707398+0000 mon.a (mon.0) 2075 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.723206+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:35.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:34 vm07 bash[23367]: audit 2026-03-10T10:19:34.723206+0000 mon.a (mon.0) 2076 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:35.686 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleWrite (7269 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.WaitForComplete 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.WaitForComplete (7491 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip (7101 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTrip2 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTrip2 (7167 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripAppend 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripAppend (7268 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.IsComplete 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.IsComplete (7578 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.IsSafe 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.IsSafe (7120 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.ReturnValue 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.ReturnValue (7270 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.Flush 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.Flush (7190 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.FlushAsync 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.FlushAsync (7497 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.RoundTripWriteFull 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.RoundTripWriteFull (7877 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStat 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStat (7232 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.SimpleStatNS 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.SimpleStatNS (7782 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.StatRemove 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.StatRemove (7043 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.ExecuteClass 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.ExecuteClass (7835 ms) 2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: 
2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ RUN ] LibRadosAioEC.MultiWrite
2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ OK ] LibRadosAioEC.MultiWrite (6471 ms)
2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [----------] 16 tests from LibRadosAioEC (117191 ms total)
2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio:
2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [----------] Global test environment tear-down
2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [==========] 42 tests from 2 test suites ran. (195534 ms total)
2026-03-10T10:19:35.687 INFO:tasks.workunit.client.0.vm04.stdout: api_aio: [ PASSED ] 42 tests.
2026-03-10T10:19:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: cluster 2026-03-10T10:19:34.420856+0000 mgr.y (mgr.24422) 243 : cluster [DBG] pgmap v338: 318 pgs: 318 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 782 B/s wr, 2 op/s
2026-03-10T10:19:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.676187+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.676666+0000 mon.c (mon.2) 388 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch
2026-03-10T10:19:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.677026+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch
2026-03-10T10:19:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.679137+0000 mon.a (mon.0) 2078 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]': finished
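The api_aio prefix above is the ceph_test_rados_api_aio gtest binary driven by the rados/test.sh workunit; the LibRadosAioEC suite runs the asynchronous librados calls against an erasure-coded pool. For reference, a hedged sketch of the write/read round trip that a test like LibRadosAioEC.RoundTrip exercises is below ("mypool" and "obj" are placeholders, not names from this run, and the real test harness creates its pools per test):

    // Hedged sketch of an async librados round trip: aio_write, wait for the
    // completion, then aio_read the same extent back and compare.
    #include <rados/librados.hpp>
    #include <cassert>
    #include <cstring>

    int main() {
      librados::Rados cluster;
      cluster.init2("client.admin", "ceph", 0);
      cluster.conf_read_file(nullptr);
      assert(cluster.connect() == 0);

      librados::IoCtx io;
      assert(cluster.ioctx_create("mypool", io) == 0);   // placeholder pool

      // Asynchronous write: the call returns immediately and the
      // AioCompletion tracks the in-flight operation.
      librados::bufferlist wbl;
      wbl.append("hello");
      librados::AioCompletion *wc = librados::Rados::aio_create_completion();
      assert(io.aio_write("obj", wc, wbl, wbl.length(), 0) == 0);
      wc->wait_for_complete();             // block until the OSDs ack
      assert(wc->get_return_value() == 0);
      wc->release();

      // Asynchronous read of the same bytes; for reads the return value is
      // the number of bytes read.
      librados::bufferlist rbl;
      librados::AioCompletion *rc = librados::Rados::aio_create_completion();
      assert(io.aio_read("obj", rc, &rbl, 5, 0) == 0);
      rc->wait_for_complete();
      assert(rc->get_return_value() == 5);
      rc->release();
      assert(memcmp(rbl.c_str(), "hello", 5) == 0);

      io.close();
      cluster.shutdown();
    }

The per-test times above (roughly 7 s each) are dominated by pool create/delete around every case, which is also what generates the surrounding audit traffic.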
2026-03-10T10:19:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.679217+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]': finished
2026-03-10T10:19:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.679244+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]': finished
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: cluster 2026-03-10T10:19:35.682097+0000 mon.a (mon.0) 2081 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: cluster 2026-03-10T10:19:34.420856+0000 mgr.y (mgr.24422) 243 : cluster [DBG] pgmap v338: 318 pgs: 318 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 782 B/s wr, 2 op/s
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.676187+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.676666+0000 mon.c (mon.2) 388 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.677026+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.679137+0000 mon.a (mon.0) 2078 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]': finished
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.679217+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]': finished
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.679244+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]': finished
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: cluster 2026-03-10T10:19:35.682097+0000 mon.a (mon.0) 2081 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.701609+0000 mon.c (mon.2) 389 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.702067+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.702426+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.702653+0000 mon.a (mon.0) 2083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.702954+0000 mon.c (mon.2) 391 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
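The 389-391 sequence above is the per-test erasure-code setup: remove any stale profile and CRUSH rule, then "osd erasure-code-profile set" with k=2, m=1 and an OSD-level failure domain, which a subsequent "osd pool create" references. A hedged sketch of the same two-step sequence through librados follows ("testprofile" and "ecpool" are placeholders; the JSON shapes are taken from the audit records):

    // Hedged sketch: create an EC profile, then an EC pool using it, via
    // mon_command(). Mirrors records 391 and 393 in this log.
    #include <rados/librados.hpp>
    #include <string>

    // Small helper (hypothetical, for brevity): send one mon command.
    static int mon(librados::Rados& cl, const std::string& cmd) {
      librados::bufferlist in, out;
      std::string outs;
      return cl.mon_command(cmd, in, &out, &outs);
    }

    int main() {
      librados::Rados cluster;
      cluster.init2("client.admin", "ceph", 0);
      cluster.conf_read_file(nullptr);
      if (cluster.connect() < 0) return 1;

      // 2+1 profile; crush-failure-domain=osd lets the three shards land on
      // different OSDs of the same host, which is why it works on this
      // two-node cluster.
      mon(cluster, R"({"prefix": "osd erasure-code-profile set", "name": "testprofile", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]})");
      // EC pool carved from that profile, 8 PGs as in audit record 393.
      mon(cluster, R"({"prefix": "osd pool create", "pool": "ecpool", "pool_type": "erasure", "pg_num": 8, "pgp_num": 8, "erasure_code_profile": "testprofile"})");
      cluster.shutdown();
    }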
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:35 vm04 bash[28289]: audit 2026-03-10T10:19:35.703105+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.701609+0000 mon.c (mon.2) 389 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.702067+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.702426+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.702653+0000 mon.a (mon.0) 2083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.702954+0000 mon.c (mon.2) 391 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:36.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:35 vm04 bash[20742]: audit 2026-03-10T10:19:35.703105+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: cluster 2026-03-10T10:19:34.420856+0000 mgr.y (mgr.24422) 243 : cluster [DBG] pgmap v338: 318 pgs: 318 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 782 B/s wr, 2 op/s
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.676187+0000 mon.c (mon.2) 387 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.676666+0000 mon.c (mon.2) 388 : audit [INF] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.677026+0000 mon.a (mon.0) 2077 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]: dispatch
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.679137+0000 mon.a (mon.0) 2078 : audit [INF] from='client.? 192.168.123.104:0/986905405' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWrite_vm04-59252-42"}]': finished
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.679217+0000 mon.a (mon.0) 2079 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripSparseReadPP_vm04-59259-52"}]': finished
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.679244+0000 mon.a (mon.0) 2080 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-35"}]': finished
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: cluster 2026-03-10T10:19:35.682097+0000 mon.a (mon.0) 2081 : cluster [DBG] osdmap e255: 8 total, 8 up, 8 in
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.701609+0000 mon.c (mon.2) 389 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.702067+0000 mon.a (mon.0) 2082 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.702426+0000 mon.c (mon.2) 390 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.702653+0000 mon.a (mon.0) 2083 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:36.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.702954+0000 mon.c (mon.2) 391 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:36.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:35 vm07 bash[23367]: audit 2026-03-10T10:19:35.703105+0000 mon.a (mon.0) 2084 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.422171+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]: dispatch
2026-03-10T10:19:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.682940+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]': finished
2026-03-10T10:19:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.683078+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
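Note the split visible in these records: the test client requests pgp_num=11 on LibRadosList_vm04-59366-1 (records 388/2077), while mgr.y issues its own "osd pool set ... pgp_num_actual" steps (25 in record 2068, then 24 in record 2085). The mgr walks pgp_num_actual toward the client's target one osdmap epoch at a time rather than jumping, which is why each step produces a fresh osdmap (e253 through e256 here). A hedged sketch of requesting the change and watching it converge, under the assumption that polling "osd pool get" is an acceptable way to observe it:

    // Hedged sketch: set pgp_num as the test client does (record 388), then
    // poll "osd pool get" while the mgr steps pgp_num_actual toward the
    // target. The loop bound and sleep are arbitrary.
    #include <rados/librados.hpp>
    #include <chrono>
    #include <iostream>
    #include <string>
    #include <thread>

    int main() {
      librados::Rados cluster;
      cluster.init2("client.admin", "ceph", 0);
      cluster.conf_read_file(nullptr);
      if (cluster.connect() < 0) return 1;

      librados::bufferlist in, out;
      std::string outs;
      cluster.mon_command(R"({"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"})", in, &out, &outs);

      // Watch the value converge; each mgr step lands in a new osdmap epoch.
      for (int i = 0; i < 10; ++i) {
        librados::bufferlist o;
        std::string s;
        cluster.mon_command(R"({"prefix":"osd pool get","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","format":"json"})", in, &o, &s);
        std::cout << std::string(o.c_str(), o.length()) << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
      }
      cluster.shutdown();
    }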
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.683190+0000 mon.a (mon.0) 2088 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-10T10:19:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.683190+0000 mon.a (mon.0) 2088 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-10T10:19:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: cluster 2026-03-10T10:19:36.687934+0000 mon.a (mon.0) 2089 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T10:19:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: cluster 2026-03-10T10:19:36.687934+0000 mon.a (mon.0) 2089 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.716364+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.716364+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.716540+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.716540+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.717740+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.717740+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.739283+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:36 vm04 bash[28289]: audit 2026-03-10T10:19:36.739283+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.422171+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.422171+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.682940+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.682940+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.683078+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.683078+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.683190+0000 mon.a (mon.0) 2088 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.683190+0000 mon.a (mon.0) 2088 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: cluster 2026-03-10T10:19:36.687934+0000 mon.a (mon.0) 2089 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: cluster 2026-03-10T10:19:36.687934+0000 mon.a (mon.0) 2089 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.716364+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.716364+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.716540+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.716540+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.717740+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.717740+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.739283+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:36 vm04 bash[20742]: audit 2026-03-10T10:19:36.739283+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.422171+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.422171+0000 mon.a (mon.0) 2085 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]: dispatch 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.682940+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.682940+0000 mon.a (mon.0) 2086 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool set","pool":"LibRadosList_vm04-59366-1","var":"pgp_num","val":"11"}]': finished 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.683078+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.683078+0000 mon.a (mon.0) 2087 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripAppendPP_vm04-59259-53", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.683190+0000 mon.a (mon.0) 2088 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.683190+0000 mon.a (mon.0) 2088 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pgp_num_actual", "val": "24"}]': finished 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: cluster 2026-03-10T10:19:36.687934+0000 mon.a (mon.0) 2089 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: cluster 2026-03-10T10:19:36.687934+0000 mon.a (mon.0) 2089 : cluster [DBG] osdmap e256: 8 total, 8 up, 8 in 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.716364+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.716364+0000 mon.c (mon.2) 392 : audit [DBG] from='client.? 192.168.123.104:0/3130540433' entity='client.admin' cmd=[{"prefix":"status","format":"json"}]: dispatch 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.716540+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.716540+0000 mon.c (mon.2) 393 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.717740+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.717740+0000 mon.a (mon.0) 2090 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.739283+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:36 vm07 bash[23367]: audit 2026-03-10T10:19:36.739283+0000 mon.a (mon.0) 2091 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:38 vm04 bash[28289]: cluster 2026-03-10T10:19:36.421369+0000 mgr.y (mgr.24422) 244 : cluster [DBG] pgmap v341: 318 pgs: 318 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-10T10:19:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:38 vm04 bash[28289]: cluster 2026-03-10T10:19:36.421369+0000 mgr.y (mgr.24422) 244 : cluster [DBG] pgmap v341: 318 pgs: 318 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-10T10:19:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:38 vm04 bash[28289]: audit 2026-03-10T10:19:37.768565+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:38 vm04 bash[28289]: audit 2026-03-10T10:19:37.768565+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:38 vm04 bash[28289]: cluster 2026-03-10T10:19:37.788546+0000 mon.a (mon.0) 2093 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T10:19:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:38 vm04 bash[28289]: cluster 2026-03-10T10:19:37.788546+0000 mon.a (mon.0) 2093 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in 2026-03-10T10:19:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:38 vm04 bash[28289]: audit 2026-03-10T10:19:37.788951+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35"}]: dispatch 2026-03-10T10:19:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:38 vm04 bash[28289]: audit 2026-03-10T10:19:37.788951+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? 
2026-03-10T10:19:38.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:38 vm04 bash[20742]: cluster 2026-03-10T10:19:36.421369+0000 mgr.y (mgr.24422) 244 : cluster [DBG] pgmap v341: 318 pgs: 318 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:19:38.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:38 vm04 bash[20742]: audit 2026-03-10T10:19:37.768565+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:19:38.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:38 vm04 bash[20742]: cluster 2026-03-10T10:19:37.788546+0000 mon.a (mon.0) 2093 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in
2026-03-10T10:19:38.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:38 vm04 bash[20742]: audit 2026-03-10T10:19:37.788951+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35"}]: dispatch
2026-03-10T10:19:38.517 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:19:38 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:19:38.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:38 vm07 bash[23367]: cluster 2026-03-10T10:19:36.421369+0000 mgr.y (mgr.24422) 244 : cluster [DBG] pgmap v341: 318 pgs: 318 active+clean; 4.4 MiB data, 673 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:19:38.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:38 vm07 bash[23367]: audit 2026-03-10T10:19:37.768565+0000 mon.a (mon.0) 2092 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:19:38.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:38 vm07 bash[23367]: cluster 2026-03-10T10:19:37.788546+0000 mon.a (mon.0) 2093 : cluster [DBG] osdmap e257: 8 total, 8 up, 8 in
2026-03-10T10:19:38.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:38 vm07 bash[23367]: audit 2026-03-10T10:19:37.788951+0000 mon.a (mon.0) 2094 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35"}]: dispatch
2026-03-10T10:19:39.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:39 vm07 bash[23367]: audit 2026-03-10T10:19:38.422701+0000 mon.a (mon.0) 2095 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "25"}]: dispatch
2026-03-10T10:19:39.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:39 vm07 bash[23367]: cluster 2026-03-10T10:19:38.768307+0000 mon.a (mon.0) 2096 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:39.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:39 vm07 bash[23367]: audit 2026-03-10T10:19:38.771986+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]': finished
2026-03-10T10:19:39.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:39 vm07 bash[23367]: audit 2026-03-10T10:19:38.772163+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35"}]': finished
2026-03-10T10:19:39.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:39 vm07 bash[23367]: audit 2026-03-10T10:19:38.772286+0000 mon.a (mon.0) 2099 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "25"}]': finished
2026-03-10T10:19:39.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:39 vm07 bash[23367]: cluster 2026-03-10T10:19:38.790688+0000 mon.a (mon.0) 2100 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in
2026-03-10T10:19:39.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:39 vm04 bash[28289]: audit 2026-03-10T10:19:38.422701+0000 mon.a (mon.0) 2095 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "25"}]: dispatch
2026-03-10T10:19:39.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:39 vm04 bash[28289]: cluster 2026-03-10T10:19:38.768307+0000 mon.a (mon.0) 2096 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:39.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:39 vm04 bash[28289]: audit 2026-03-10T10:19:38.771986+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]': finished
2026-03-10T10:19:39.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:39 vm04 bash[28289]: audit 2026-03-10T10:19:38.772163+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35"}]': finished
2026-03-10T10:19:39.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:39 vm04 bash[28289]: audit 2026-03-10T10:19:38.772286+0000 mon.a (mon.0) 2099 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "25"}]': finished
2026-03-10T10:19:39.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:39 vm04 bash[28289]: cluster 2026-03-10T10:19:38.790688+0000 mon.a (mon.0) 2100 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in
2026-03-10T10:19:39.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:39 vm04 bash[20742]: audit 2026-03-10T10:19:38.422701+0000 mon.a (mon.0) 2095 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "25"}]: dispatch
2026-03-10T10:19:39.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:39 vm04 bash[20742]: cluster 2026-03-10T10:19:38.768307+0000 mon.a (mon.0) 2096 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:39.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:39 vm04 bash[20742]: audit 2026-03-10T10:19:38.771986+0000 mon.a (mon.0) 2097 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripAppendPP_vm04-59259-53", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripAppendPP_vm04-59259-53"}]': finished
2026-03-10T10:19:39.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:39 vm04 bash[20742]: audit 2026-03-10T10:19:38.772163+0000 mon.a (mon.0) 2098 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-35"}]': finished
2026-03-10T10:19:39.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:39 vm04 bash[20742]: audit 2026-03-10T10:19:38.772286+0000 mon.a (mon.0) 2099 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "25"}]': finished
2026-03-10T10:19:39.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:39 vm04 bash[20742]: cluster 2026-03-10T10:19:38.790688+0000 mon.a (mon.0) 2100 : cluster [DBG] osdmap e258: 8 total, 8 up, 8 in
2026-03-10T10:19:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:40 vm04 bash[20742]: audit 2026-03-10T10:19:38.363740+0000 mgr.y (mgr.24422) 245 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:19:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:40 vm04 bash[20742]: cluster 2026-03-10T10:19:38.422060+0000 mgr.y (mgr.24422) 246 : cluster [DBG] pgmap v344: 318 pgs: 318 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:19:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:40 vm04 bash[20742]: cluster 2026-03-10T10:19:39.808017+0000 mon.a (mon.0) 2101 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in
2026-03-10T10:19:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:40 vm04 bash[28289]: audit 2026-03-10T10:19:38.363740+0000 mgr.y (mgr.24422) 245 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:19:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:40 vm04 bash[28289]: cluster 2026-03-10T10:19:38.422060+0000 mgr.y (mgr.24422) 246 : cluster [DBG] pgmap v344: 318 pgs: 318 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:19:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:40 vm04 bash[28289]: cluster 2026-03-10T10:19:39.808017+0000 mon.a (mon.0) 2101 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in
2026-03-10T10:19:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:40 vm07 bash[23367]: audit 2026-03-10T10:19:38.363740+0000 mgr.y (mgr.24422) 245 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:19:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:40 vm07 bash[23367]: cluster 2026-03-10T10:19:38.422060+0000 mgr.y (mgr.24422) 246 : cluster [DBG] pgmap v344: 318 pgs: 318 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:19:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:40 vm07 bash[23367]: cluster 2026-03-10T10:19:39.808017+0000 mon.a (mon.0) 2101 : cluster [DBG] osdmap e259: 8 total, 8 up, 8 in
2026-03-10T10:19:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: cluster 2026-03-10T10:19:40.422496+0000 mgr.y (mgr.24422) 247 : cluster [DBG] pgmap v347: 294 pgs: 8 unknown, 286 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: cluster 2026-03-10T10:19:40.839482+0000 mon.a (mon.0) 2102 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: audit 2026-03-10T10:19:40.840201+0000 mon.c (mon.2) 394 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: audit 2026-03-10T10:19:40.841119+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: audit 2026-03-10T10:19:40.841494+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-37","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: cluster 2026-03-10T10:19:41.327009+0000 mon.a (mon.0) 2105 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: audit 2026-03-10T10:19:41.450658+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]': finished
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: audit 2026-03-10T10:19:41.450809+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-37","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: cluster 2026-03-10T10:19:41.460306+0000 mon.a (mon.0) 2108 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: audit 2026-03-10T10:19:41.461728+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: audit 2026-03-10T10:19:41.470119+0000 mon.c (mon.2) 395 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:41 vm04 bash[28289]: audit 2026-03-10T10:19:41.488150+0000 mon.a (mon.0) 2110 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: cluster 2026-03-10T10:19:40.422496+0000 mgr.y (mgr.24422) 247 : cluster [DBG] pgmap v347: 294 pgs: 8 unknown, 286 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: cluster 2026-03-10T10:19:40.839482+0000 mon.a (mon.0) 2102 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: audit 2026-03-10T10:19:40.840201+0000 mon.c (mon.2) 394 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: audit 2026-03-10T10:19:40.841119+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: audit 2026-03-10T10:19:40.841494+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-37","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: cluster 2026-03-10T10:19:41.327009+0000 mon.a (mon.0) 2105 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: audit 2026-03-10T10:19:41.450658+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]': finished
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: audit 2026-03-10T10:19:41.450809+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-37","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: cluster 2026-03-10T10:19:41.460306+0000 mon.a (mon.0) 2108 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: audit 2026-03-10T10:19:41.461728+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: audit 2026-03-10T10:19:41.470119+0000 mon.c (mon.2) 395 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:41 vm04 bash[20742]: audit 2026-03-10T10:19:41.488150+0000 mon.a (mon.0) 2110 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: cluster 2026-03-10T10:19:40.422496+0000 mgr.y (mgr.24422) 247 : cluster [DBG] pgmap v347: 294 pgs: 8 unknown, 286 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: cluster 2026-03-10T10:19:40.839482+0000 mon.a (mon.0) 2102 : cluster [DBG] osdmap e260: 8 total, 8 up, 8 in
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: audit 2026-03-10T10:19:40.840201+0000 mon.c (mon.2) 394 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: audit 2026-03-10T10:19:40.841119+0000 mon.a (mon.0) 2103 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: audit 2026-03-10T10:19:40.841494+0000 mon.a (mon.0) 2104 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-37","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: cluster 2026-03-10T10:19:41.327009+0000 mon.a (mon.0) 2105 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: audit 2026-03-10T10:19:41.450658+0000 mon.a (mon.0) 2106 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripAppendPP_vm04-59259-53"}]': finished
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: audit 2026-03-10T10:19:41.450809+0000 mon.a (mon.0) 2107 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-37","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: cluster 2026-03-10T10:19:41.460306+0000 mon.a (mon.0) 2108 : cluster [DBG] osdmap e261: 8 total, 8 up, 8 in
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: audit 2026-03-10T10:19:41.461728+0000 mon.a (mon.0) 2109 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: audit 2026-03-10T10:19:41.470119+0000 mon.c (mon.2) 395 : audit [INF] from='client.? 192.168.123.104:0/3875816594' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:41 vm07 bash[23367]: audit 2026-03-10T10:19:41.488150+0000 mon.a (mon.0) 2110 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]: dispatch
2026-03-10T10:19:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.423582+0000 mon.a (mon.0) 2111 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "24"}]: dispatch
2026-03-10T10:19:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.474024+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:19:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.474221+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]': finished
2026-03-10T10:19:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.474565+0000 mon.a (mon.0) 2114 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "24"}]': finished
2026-03-10T10:19:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: cluster 2026-03-10T10:19:42.483161+0000 mon.a (mon.0) 2115 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in
2026-03-10T10:19:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.488411+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-37"}]: dispatch
2026-03-10T10:19:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.496258+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.499325+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.501026+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm04-59259-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.501281+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.503116+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.504932+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm04-59259-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.504932+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm04-59259-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.829371+0000 mon.a (mon.0) 2120 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:42 vm04 bash[28289]: audit 2026-03-10T10:19:42.829371+0000 mon.a (mon.0) 2120 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.423582+0000 mon.a (mon.0) 2111 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.423582+0000 mon.a (mon.0) 2111 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "24"}]: dispatch 2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.474024+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.474024+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.474221+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]': finished 2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.474221+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? 
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.474565+0000 mon.a (mon.0) 2114 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "24"}]': finished
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: cluster 2026-03-10T10:19:42.483161+0000 mon.a (mon.0) 2115 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.488411+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-37"}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.496258+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.499325+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.501026+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm04-59259-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.501281+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.503116+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.504932+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm04-59259-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:42 vm04 bash[20742]: audit 2026-03-10T10:19:42.829371+0000 mon.a (mon.0) 2120 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:19:43.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:19:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:19:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.423582+0000 mon.a (mon.0) 2111 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "24"}]: dispatch
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.474024+0000 mon.a (mon.0) 2112 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.474221+0000 mon.a (mon.0) 2113 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripAppendPP_vm04-59259-53"}]': finished
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.474565+0000 mon.a (mon.0) 2114 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "LibRadosList_vm04-59366-1", "var": "pg_num_actual", "val": "24"}]': finished
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: cluster 2026-03-10T10:19:42.483161+0000 mon.a (mon.0) 2115 : cluster [DBG] osdmap e262: 8 total, 8 up, 8 in
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.488411+0000 mon.a (mon.0) 2116 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-37"}]: dispatch
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.496258+0000 mon.b (mon.1) 209 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.499325+0000 mon.b (mon.1) 210 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.501026+0000 mon.b (mon.1) 211 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm04-59259-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.501281+0000 mon.a (mon.0) 2117 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.503116+0000 mon.a (mon.0) 2118 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.504932+0000 mon.a (mon.0) 2119 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm04-59259-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:19:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:42 vm07 bash[23367]: audit 2026-03-10T10:19:42.829371+0000 mon.a (mon.0) 2120 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:19:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:43 vm04 bash[28289]: cluster 2026-03-10T10:19:42.422882+0000 mgr.y (mgr.24422) 248 : cluster [DBG] pgmap v350: 317 pgs: 32 unknown, 285 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T10:19:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:43 vm04 bash[28289]: audit 2026-03-10T10:19:43.493142+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-37"}]': finished
2026-03-10T10:19:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:43 vm04 bash[28289]: audit 2026-03-10T10:19:43.493300+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm04-59259-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:43 vm04 bash[28289]: audit 2026-03-10T10:19:43.494502+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm04-59259-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:43 vm04 bash[28289]: cluster 2026-03-10T10:19:43.497240+0000 mon.a (mon.0) 2123 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in
2026-03-10T10:19:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:43 vm04 bash[28289]: audit 2026-03-10T10:19:43.497727+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-37", "mode": "writeback"}]: dispatch
2026-03-10T10:19:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:43 vm04 bash[28289]: audit 2026-03-10T10:19:43.498350+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm04-59259-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:43 vm04 bash[20742]: cluster 2026-03-10T10:19:42.422882+0000 mgr.y (mgr.24422) 248 : cluster [DBG] pgmap v350: 317 pgs: 32 unknown, 285 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T10:19:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:43 vm04 bash[20742]: audit 2026-03-10T10:19:43.493142+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-37"}]': finished
2026-03-10T10:19:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:43 vm04 bash[20742]: audit 2026-03-10T10:19:43.493300+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm04-59259-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:43 vm04 bash[20742]: audit 2026-03-10T10:19:43.494502+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm04-59259-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:43 vm04 bash[20742]: cluster 2026-03-10T10:19:43.497240+0000 mon.a (mon.0) 2123 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in
2026-03-10T10:19:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:43 vm04 bash[20742]: audit 2026-03-10T10:19:43.497727+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-37", "mode": "writeback"}]: dispatch
2026-03-10T10:19:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:43 vm04 bash[20742]: audit 2026-03-10T10:19:43.498350+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm04-59259-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:43 vm07 bash[23367]: cluster 2026-03-10T10:19:42.422882+0000 mgr.y (mgr.24422) 248 : cluster [DBG] pgmap v350: 317 pgs: 32 unknown, 285 active+clean; 4.4 MiB data, 674 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering
2026-03-10T10:19:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:43 vm07 bash[23367]: audit 2026-03-10T10:19:43.493142+0000 mon.a (mon.0) 2121 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-37"}]': finished
2026-03-10T10:19:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:43 vm07 bash[23367]: audit 2026-03-10T10:19:43.493300+0000 mon.a (mon.0) 2122 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsCompletePP_vm04-59259-54", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:43 vm07 bash[23367]: audit 2026-03-10T10:19:43.494502+0000 mon.b (mon.1) 212 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm04-59259-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:43 vm07 bash[23367]: cluster 2026-03-10T10:19:43.497240+0000 mon.a (mon.0) 2123 : cluster [DBG] osdmap e263: 8 total, 8 up, 8 in
2026-03-10T10:19:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:43 vm07 bash[23367]: audit 2026-03-10T10:19:43.497727+0000 mon.a (mon.0) 2124 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-37", "mode": "writeback"}]: dispatch
2026-03-10T10:19:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:43 vm07 bash[23367]: audit 2026-03-10T10:19:43.498350+0000 mon.a (mon.0) 2125 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsCompletePP_vm04-59259-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:44 vm04 bash[28289]: cluster 2026-03-10T10:19:44.493599+0000 mon.a (mon.0) 2126 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:19:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:44 vm04 bash[28289]: audit 2026-03-10T10:19:44.532293+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-37", "mode": "writeback"}]': finished
2026-03-10T10:19:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:44 vm04 bash[28289]: cluster 2026-03-10T10:19:44.548591+0000 mon.a (mon.0) 2128 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in
2026-03-10T10:19:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:44 vm04 bash[28289]: audit 2026-03-10T10:19:44.612654+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:19:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:44 vm04 bash[20742]: cluster 2026-03-10T10:19:44.493599+0000 mon.a (mon.0) 2126 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:19:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:44 vm04 bash[20742]: audit 2026-03-10T10:19:44.532293+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-37", "mode": "writeback"}]': finished
2026-03-10T10:19:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:44 vm04 bash[20742]: cluster 2026-03-10T10:19:44.548591+0000 mon.a (mon.0) 2128 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in
2026-03-10T10:19:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:44 vm04 bash[20742]: audit 2026-03-10T10:19:44.612654+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:19:45.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:45 vm07 bash[23367]: cluster 2026-03-10T10:19:44.493599+0000 mon.a (mon.0) 2126 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:19:45.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:45 vm07 bash[23367]: audit 2026-03-10T10:19:44.532293+0000 mon.a (mon.0) 2127 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-37", "mode": "writeback"}]': finished
2026-03-10T10:19:45.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:45 vm07 bash[23367]: cluster 2026-03-10T10:19:44.548591+0000 mon.a (mon.0) 2128 : cluster [DBG] osdmap e264: 8 total, 8 up, 8 in
2026-03-10T10:19:45.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:45 vm07 bash[23367]: audit 2026-03-10T10:19:44.612654+0000 mon.a (mon.0) 2129 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:19:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:45 vm07 bash[23367]: cluster 2026-03-10T10:19:44.423535+0000 mgr.y (mgr.24422) 249 : cluster [DBG] pgmap v353: 317 pgs: 2 clean+premerge+peered, 315 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:19:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:45 vm07 bash[23367]: audit 2026-03-10T10:19:45.537521+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm04-59259-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm04-59259-54"}]': finished
2026-03-10T10:19:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:45 vm07 bash[23367]: audit 2026-03-10T10:19:45.537738+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:19:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:45 vm07 bash[23367]: cluster 2026-03-10T10:19:45.566410+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in
2026-03-10T10:19:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:45 vm07 bash[23367]: audit 2026-03-10T10:19:45.567489+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37"}]: dispatch
2026-03-10T10:19:46.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:45 vm04 bash[28289]: cluster 2026-03-10T10:19:44.423535+0000 mgr.y (mgr.24422) 249 : cluster [DBG] pgmap v353: 317 pgs: 2 clean+premerge+peered, 315 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:19:46.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:45 vm04 bash[28289]: audit 2026-03-10T10:19:45.537521+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm04-59259-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm04-59259-54"}]': finished
2026-03-10T10:19:46.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:45 vm04 bash[28289]: audit 2026-03-10T10:19:45.537738+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:19:46.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:45 vm04 bash[28289]: cluster 2026-03-10T10:19:45.566410+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in
2026-03-10T10:19:46.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:45 vm04 bash[28289]: audit 2026-03-10T10:19:45.567489+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37"}]: dispatch
2026-03-10T10:19:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:45 vm04 bash[20742]: cluster 2026-03-10T10:19:44.423535+0000 mgr.y (mgr.24422) 249 : cluster [DBG] pgmap v353: 317 pgs: 2 clean+premerge+peered, 315 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:19:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:45 vm04 bash[20742]: audit 2026-03-10T10:19:45.537521+0000 mon.a (mon.0) 2130 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsCompletePP_vm04-59259-54", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsCompletePP_vm04-59259-54"}]': finished
2026-03-10T10:19:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:45 vm04 bash[20742]: audit 2026-03-10T10:19:45.537738+0000 mon.a (mon.0) 2131 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:19:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:45 vm04 bash[20742]: cluster 2026-03-10T10:19:45.566410+0000 mon.a (mon.0) 2132 : cluster [DBG] osdmap e265: 8 total, 8 up, 8 in
2026-03-10T10:19:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:45 vm04 bash[20742]: audit 2026-03-10T10:19:45.567489+0000 mon.a (mon.0) 2133 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37"}]: dispatch
2026-03-10T10:19:47.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:47 vm04 bash[28289]: cluster 2026-03-10T10:19:46.449081+0000 mon.a (mon.0) 2134 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:47.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:47 vm04 bash[28289]: cluster 2026-03-10T10:19:46.537412+0000 mon.a (mon.0) 2135 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:47.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:47 vm04 bash[28289]: audit 2026-03-10T10:19:46.546907+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37"}]': finished
2026-03-10T10:19:47.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:47 vm04 bash[28289]: cluster 2026-03-10T10:19:46.560687+0000 mon.a (mon.0) 2137 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in
2026-03-10T10:19:47.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:47 vm04 bash[20742]: cluster 2026-03-10T10:19:46.449081+0000 mon.a (mon.0) 2134 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:47.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:47 vm04 bash[20742]: cluster 2026-03-10T10:19:46.537412+0000 mon.a (mon.0) 2135 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:47.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:47 vm04 bash[20742]: audit 2026-03-10T10:19:46.546907+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37"}]': finished
2026-03-10T10:19:47.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:47 vm04 bash[20742]: cluster 2026-03-10T10:19:46.560687+0000 mon.a (mon.0) 2137 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in
2026-03-10T10:19:47.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:47 vm07 bash[23367]: cluster 2026-03-10T10:19:46.449081+0000 mon.a (mon.0) 2134 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:19:47.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:47 vm07 bash[23367]: cluster 2026-03-10T10:19:46.537412+0000 mon.a (mon.0) 2135 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:47.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:47 vm07 bash[23367]: audit 2026-03-10T10:19:46.546907+0000 mon.a (mon.0) 2136 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-37"}]': finished
2026-03-10T10:19:47.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:47 vm07 bash[23367]: cluster 2026-03-10T10:19:46.560687+0000 mon.a (mon.0) 2137 : cluster [DBG] osdmap e266: 8 total, 8 up, 8 in
2026-03-10T10:19:48.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:48 vm07 bash[23367]: cluster 2026-03-10T10:19:46.423945+0000 mgr.y (mgr.24422) 250 : cluster [DBG] pgmap v356: 324 pgs: 8 unknown, 1 clean+premerge+peered, 315 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:19:48.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:48 vm07 bash[23367]: cluster 2026-03-10T10:19:47.557038+0000 mon.a (mon.0) 2138 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in
2026-03-10T10:19:48.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:48 vm07 bash[23367]: audit 2026-03-10T10:19:47.560748+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch
2026-03-10T10:19:48.374 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:48 vm07 bash[23367]: audit 2026-03-10T10:19:47.563789+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:48 vm04 bash[28289]: cluster 2026-03-10T10:19:46.423945+0000 mgr.y (mgr.24422) 250 : cluster [DBG] pgmap v356: 324 pgs: 8 unknown, 1 clean+premerge+peered, 315 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:48 vm04 bash[28289]: cluster 2026-03-10T10:19:46.423945+0000 mgr.y (mgr.24422) 250 : cluster [DBG] pgmap v356: 324 pgs: 8 unknown, 1 clean+premerge+peered, 315 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:48 vm04 bash[28289]: cluster 2026-03-10T10:19:47.557038+0000 mon.a (mon.0) 2138 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:48 vm04 bash[28289]: cluster 2026-03-10T10:19:47.557038+0000 mon.a (mon.0) 2138 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:48 vm04 bash[28289]: audit 2026-03-10T10:19:47.560748+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:48 vm04 bash[28289]: audit 2026-03-10T10:19:47.560748+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:48 vm04 bash[28289]: audit 2026-03-10T10:19:47.563789+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:48 vm04 bash[28289]: audit 2026-03-10T10:19:47.563789+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:48 vm04 bash[20742]: cluster 2026-03-10T10:19:46.423945+0000 mgr.y (mgr.24422) 250 : cluster [DBG] pgmap v356: 324 pgs: 8 unknown, 1 clean+premerge+peered, 315 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:48 vm04 bash[20742]: cluster 2026-03-10T10:19:46.423945+0000 mgr.y (mgr.24422) 250 : cluster [DBG] pgmap v356: 324 pgs: 8 unknown, 1 clean+premerge+peered, 315 active+clean; 4.4 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:48 vm04 bash[20742]: cluster 2026-03-10T10:19:47.557038+0000 mon.a (mon.0) 2138 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:48 vm04 bash[20742]: cluster 2026-03-10T10:19:47.557038+0000 mon.a (mon.0) 2138 : cluster [DBG] osdmap e267: 8 total, 8 up, 8 in 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:48 vm04 bash[20742]: audit 2026-03-10T10:19:47.560748+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:48 vm04 bash[20742]: audit 2026-03-10T10:19:47.560748+0000 mon.b (mon.1) 213 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:48 vm04 bash[20742]: audit 2026-03-10T10:19:47.563789+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:48 vm04 bash[20742]: audit 2026-03-10T10:19:47.563789+0000 mon.a (mon.0) 2139 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:48.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:19:48 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: audit 2026-03-10T10:19:48.374507+0000 mgr.y (mgr.24422) 251 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: audit 2026-03-10T10:19:48.374507+0000 mgr.y (mgr.24422) 251 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: cluster 2026-03-10T10:19:48.424609+0000 mgr.y (mgr.24422) 252 : cluster [DBG] pgmap v359: 284 pgs: 1 clean+premerge+peered, 283 active+clean; 4.4 MiB data, 683 MiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: cluster 2026-03-10T10:19:48.424609+0000 mgr.y (mgr.24422) 252 : cluster [DBG] pgmap v359: 284 pgs: 1 clean+premerge+peered, 283 active+clean; 4.4 MiB data, 683 MiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: audit 2026-03-10T10:19:48.696686+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: audit 2026-03-10T10:19:48.696686+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: cluster 2026-03-10T10:19:48.722955+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: cluster 2026-03-10T10:19:48.722955+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: audit 2026-03-10T10:19:48.724759+0000 mon.a (mon.0) 2142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: audit 2026-03-10T10:19:48.724759+0000 mon.a (mon.0) 2142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: audit 2026-03-10T10:19:48.739469+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 
192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: audit 2026-03-10T10:19:48.739469+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: audit 2026-03-10T10:19:48.742484+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:49 vm07 bash[23367]: audit 2026-03-10T10:19:48.742484+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: audit 2026-03-10T10:19:48.374507+0000 mgr.y (mgr.24422) 251 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: audit 2026-03-10T10:19:48.374507+0000 mgr.y (mgr.24422) 251 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: cluster 2026-03-10T10:19:48.424609+0000 mgr.y (mgr.24422) 252 : cluster [DBG] pgmap v359: 284 pgs: 1 clean+premerge+peered, 283 active+clean; 4.4 MiB data, 683 MiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: cluster 2026-03-10T10:19:48.424609+0000 mgr.y (mgr.24422) 252 : cluster [DBG] pgmap v359: 284 pgs: 1 clean+premerge+peered, 283 active+clean; 4.4 MiB data, 683 MiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: audit 2026-03-10T10:19:48.696686+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: audit 2026-03-10T10:19:48.696686+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: cluster 2026-03-10T10:19:48.722955+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: cluster 2026-03-10T10:19:48.722955+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: audit 2026-03-10T10:19:48.724759+0000 mon.a (mon.0) 2142 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: audit 2026-03-10T10:19:48.724759+0000 mon.a (mon.0) 2142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: audit 2026-03-10T10:19:48.739469+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: audit 2026-03-10T10:19:48.739469+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: audit 2026-03-10T10:19:48.742484+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:49 vm04 bash[28289]: audit 2026-03-10T10:19:48.742484+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: audit 2026-03-10T10:19:48.374507+0000 mgr.y (mgr.24422) 251 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: audit 2026-03-10T10:19:48.374507+0000 mgr.y (mgr.24422) 251 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: cluster 2026-03-10T10:19:48.424609+0000 mgr.y (mgr.24422) 252 : cluster [DBG] pgmap v359: 284 pgs: 1 clean+premerge+peered, 283 active+clean; 4.4 MiB data, 683 MiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: cluster 2026-03-10T10:19:48.424609+0000 mgr.y (mgr.24422) 252 : cluster [DBG] pgmap v359: 284 pgs: 1 clean+premerge+peered, 283 active+clean; 4.4 MiB data, 683 MiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: audit 2026-03-10T10:19:48.696686+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: audit 2026-03-10T10:19:48.696686+0000 mon.a (mon.0) 2140 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: cluster 2026-03-10T10:19:48.722955+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: cluster 2026-03-10T10:19:48.722955+0000 mon.a (mon.0) 2141 : cluster [DBG] osdmap e268: 8 total, 8 up, 8 in 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: audit 2026-03-10T10:19:48.724759+0000 mon.a (mon.0) 2142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: audit 2026-03-10T10:19:48.724759+0000 mon.a (mon.0) 2142 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: audit 2026-03-10T10:19:48.739469+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: audit 2026-03-10T10:19:48.739469+0000 mon.b (mon.1) 214 : audit [INF] from='client.? 192.168.123.104:0/2410062792' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: audit 2026-03-10T10:19:48.742484+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:49 vm04 bash[20742]: audit 2026-03-10T10:19:48.742484+0000 mon.a (mon.0) 2143 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]: dispatch 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.706985+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.706985+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.707162+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.707162+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: cluster 2026-03-10T10:19:49.713035+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: cluster 2026-03-10T10:19:49.713035+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.730728+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.730728+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.732739+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.732739+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.778670+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.778670+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.780132+0000 mon.c (mon.2) 397 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.780132+0000 mon.c (mon.2) 397 : audit [INF] from='client.? 
192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.780508+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.780508+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.781842+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.781842+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.782498+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:49.782498+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:50.725080+0000 mon.a (mon.0) 2151 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:50.725080+0000 mon.a (mon.0) 2151 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:50.725164+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:50.725164+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: cluster 2026-03-10T10:19:50.729251+0000 mon.a (mon.0) 2153 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: cluster 2026-03-10T10:19:50.729251+0000 mon.a (mon.0) 2153 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:50.729865+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:50.729865+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:50.731187+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:50.731187+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:50.731473+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.018 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:50 vm07 bash[23367]: audit 2026-03-10T10:19:50.731473+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.706985+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.706985+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.707162+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.707162+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: cluster 2026-03-10T10:19:49.713035+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T10:19:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: cluster 2026-03-10T10:19:49.713035+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T10:19:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.730728+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.730728+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.732739+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.732739+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.778670+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.778670+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.780132+0000 mon.c (mon.2) 397 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.780132+0000 mon.c (mon.2) 397 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.780508+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.780508+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.781842+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.781842+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.782498+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:49.782498+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:50.725080+0000 mon.a (mon.0) 2151 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:50.725080+0000 mon.a (mon.0) 2151 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:50.725164+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:50.725164+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: cluster 2026-03-10T10:19:50.729251+0000 mon.a (mon.0) 2153 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: cluster 2026-03-10T10:19:50.729251+0000 mon.a (mon.0) 2153 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:50.729865+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:50.729865+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:50.731187+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:50.731187+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:50.731473+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:50 vm04 bash[28289]: audit 2026-03-10T10:19:50.731473+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.706985+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.706985+0000 mon.a (mon.0) 2144 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-39","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.707162+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.707162+0000 mon.a (mon.0) 2145 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsCompletePP_vm04-59259-54"}]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: cluster 2026-03-10T10:19:49.713035+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: cluster 2026-03-10T10:19:49.713035+0000 mon.a (mon.0) 2146 : cluster [DBG] osdmap e269: 8 total, 8 up, 8 in 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.730728+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.730728+0000 mon.a (mon.0) 2147 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.732739+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.732739+0000 mon.c (mon.2) 396 : audit [INF] from='client.? 
192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.778670+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.778670+0000 mon.a (mon.0) 2148 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.780132+0000 mon.c (mon.2) 397 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.780132+0000 mon.c (mon.2) 397 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.780508+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.780508+0000 mon.a (mon.0) 2149 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.781842+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.781842+0000 mon.c (mon.2) 398 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.782498+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:49.782498+0000 mon.a (mon.0) 2150 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:50.725080+0000 mon.a (mon.0) 2151 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:50.725080+0000 mon.a (mon.0) 2151 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:50.725164+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:50.725164+0000 mon.a (mon.0) 2152 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-IsSafePP_vm04-59259-55", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:51.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: cluster 2026-03-10T10:19:50.729251+0000 mon.a (mon.0) 2153 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T10:19:51.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: cluster 2026-03-10T10:19:50.729251+0000 mon.a (mon.0) 2153 : cluster [DBG] osdmap e270: 8 total, 8 up, 8 in 2026-03-10T10:19:51.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:50.729865+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:51.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:50.729865+0000 mon.a (mon.0) 2154 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:51.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:50.731187+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:50.731187+0000 mon.c (mon.2) 399 : audit [INF] from='client.? 
192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:50.731473+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:51.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:50 vm04 bash[20742]: audit 2026-03-10T10:19:50.731473+0000 mon.a (mon.0) 2155 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: cluster 2026-03-10T10:19:50.425112+0000 mgr.y (mgr.24422) 253 : cluster [DBG] pgmap v362: 316 pgs: 32 unknown, 284 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: cluster 2026-03-10T10:19:50.425112+0000 mgr.y (mgr.24422) 253 : cluster [DBG] pgmap v362: 316 pgs: 32 unknown, 284 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.748543+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.748543+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.764588+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.764588+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.765043+0000 mon.c (mon.2) 401 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.765043+0000 mon.c (mon.2) 401 : audit [INF] from='client.? 
192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.765167+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.765167+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.765626+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.765626+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.765759+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:50.765759+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:51.731264+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]': finished 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:51.731264+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]': finished 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:51.731360+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:51.731360+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:51.744142+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: audit 2026-03-10T10:19:51.744142+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: cluster 2026-03-10T10:19:51.755467+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T10:19:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:51 vm07 bash[23367]: cluster 2026-03-10T10:19:51.755467+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: cluster 2026-03-10T10:19:50.425112+0000 mgr.y (mgr.24422) 253 : cluster [DBG] pgmap v362: 316 pgs: 32 unknown, 284 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: cluster 2026-03-10T10:19:50.425112+0000 mgr.y (mgr.24422) 253 : cluster [DBG] pgmap v362: 316 pgs: 32 unknown, 284 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.748543+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.748543+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.764588+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.764588+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.765043+0000 mon.c (mon.2) 401 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.765043+0000 mon.c (mon.2) 401 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.765167+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.765167+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.765626+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.765626+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.765759+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:50.765759+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:51.731264+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]': finished 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:51.731264+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]': finished 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:51.731360+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:51.731360+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:51.744142+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: audit 2026-03-10T10:19:51.744142+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: cluster 2026-03-10T10:19:51.755467+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:51 vm04 bash[28289]: cluster 2026-03-10T10:19:51.755467+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T10:19:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: cluster 2026-03-10T10:19:50.425112+0000 mgr.y (mgr.24422) 253 : cluster [DBG] pgmap v362: 316 pgs: 32 unknown, 284 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: cluster 2026-03-10T10:19:50.425112+0000 mgr.y (mgr.24422) 253 : cluster [DBG] pgmap v362: 316 pgs: 32 unknown, 284 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s; 0 B/s, 0 objects/s recovering 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.748543+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 
192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.748543+0000 mon.c (mon.2) 400 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.764588+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.764588+0000 mon.a (mon.0) 2156 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.765043+0000 mon.c (mon.2) 401 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.765043+0000 mon.c (mon.2) 401 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.765167+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.765167+0000 mon.a (mon.0) 2157 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.765626+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.765626+0000 mon.c (mon.2) 402 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.765759+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:50.765759+0000 mon.a (mon.0) 2158 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:51.731264+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]': finished 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:51.731264+0000 mon.a (mon.0) 2159 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-39"}]': finished 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:51.731360+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:51.731360+0000 mon.a (mon.0) 2160 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosListEC_vm04-59366-2", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:51.744142+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: audit 2026-03-10T10:19:51.744142+0000 mon.c (mon.2) 403 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: cluster 2026-03-10T10:19:51.755467+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T10:19:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:51 vm04 bash[20742]: cluster 2026-03-10T10:19:51.755467+0000 mon.a (mon.0) 2161 : cluster [DBG] osdmap e271: 8 total, 8 up, 8 in 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: audit 2026-03-10T10:19:51.757216+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]: dispatch 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: audit 2026-03-10T10:19:51.757216+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]: dispatch 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: audit 2026-03-10T10:19:51.757322+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: audit 2026-03-10T10:19:51.757322+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: cluster 2026-03-10T10:19:52.732093+0000 mon.a (mon.0) 2164 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: cluster 2026-03-10T10:19:52.732093+0000 mon.a (mon.0) 2164 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: audit 2026-03-10T10:19:52.735526+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: audit 2026-03-10T10:19:52.735526+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: audit 2026-03-10T10:19:52.735607+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]': finished 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: audit 2026-03-10T10:19:52.735607+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]': finished 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: cluster 2026-03-10T10:19:52.782798+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T10:19:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:52 vm04 bash[20742]: cluster 2026-03-10T10:19:52.782798+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:19:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:19:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: audit 2026-03-10T10:19:51.757216+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]: dispatch 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: audit 2026-03-10T10:19:51.757216+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]: dispatch 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: audit 2026-03-10T10:19:51.757322+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: audit 2026-03-10T10:19:51.757322+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: cluster 2026-03-10T10:19:52.732093+0000 mon.a (mon.0) 2164 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: cluster 2026-03-10T10:19:52.732093+0000 mon.a (mon.0) 2164 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: audit 2026-03-10T10:19:52.735526+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: audit 2026-03-10T10:19:52.735526+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: audit 2026-03-10T10:19:52.735607+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]': finished 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: audit 2026-03-10T10:19:52.735607+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]': finished 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: cluster 2026-03-10T10:19:52.782798+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T10:19:53.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:52 vm04 bash[28289]: cluster 2026-03-10T10:19:52.782798+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: audit 2026-03-10T10:19:51.757216+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]: dispatch 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: audit 2026-03-10T10:19:51.757216+0000 mon.a (mon.0) 2162 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]: dispatch 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: audit 2026-03-10T10:19:51.757322+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: audit 2026-03-10T10:19:51.757322+0000 mon.a (mon.0) 2163 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: cluster 2026-03-10T10:19:52.732093+0000 mon.a (mon.0) 2164 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: cluster 2026-03-10T10:19:52.732093+0000 mon.a (mon.0) 2164 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: audit 2026-03-10T10:19:52.735526+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: audit 2026-03-10T10:19:52.735526+0000 mon.a (mon.0) 2165 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "IsSafePP_vm04-59259-55", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: audit 2026-03-10T10:19:52.735607+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]': finished 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: audit 2026-03-10T10:19:52.735607+0000 mon.a (mon.0) 2166 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-39", "mode": "writeback"}]': finished 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: cluster 2026-03-10T10:19:52.782798+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T10:19:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:52 vm07 bash[23367]: cluster 2026-03-10T10:19:52.782798+0000 mon.a (mon.0) 2167 : cluster [DBG] osdmap e272: 8 total, 8 up, 8 in 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: cluster 2026-03-10T10:19:52.425462+0000 mgr.y (mgr.24422) 254 : cluster [DBG] pgmap v365: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: cluster 2026-03-10T10:19:52.425462+0000 mgr.y (mgr.24422) 254 : cluster [DBG] pgmap v365: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: audit 2026-03-10T10:19:52.949609+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: audit 2026-03-10T10:19:52.949609+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: audit 2026-03-10T10:19:53.739842+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: audit 2026-03-10T10:19:53.739842+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: audit 2026-03-10T10:19:53.740027+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: audit 2026-03-10T10:19:53.740027+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: cluster 2026-03-10T10:19:53.750464+0000 mon.a (mon.0) 2171 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: cluster 2026-03-10T10:19:53.750464+0000 mon.a (mon.0) 2171 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: audit 2026-03-10T10:19:53.765321+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:53 vm04 bash[28289]: audit 2026-03-10T10:19:53.765321+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: cluster 2026-03-10T10:19:52.425462+0000 mgr.y (mgr.24422) 254 : cluster [DBG] pgmap v365: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: cluster 2026-03-10T10:19:52.425462+0000 mgr.y (mgr.24422) 254 : cluster [DBG] pgmap v365: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: audit 2026-03-10T10:19:52.949609+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: audit 2026-03-10T10:19:52.949609+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: audit 2026-03-10T10:19:53.739842+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: audit 2026-03-10T10:19:53.739842+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: audit 2026-03-10T10:19:53.740027+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: audit 2026-03-10T10:19:53.740027+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: cluster 2026-03-10T10:19:53.750464+0000 mon.a (mon.0) 2171 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: cluster 2026-03-10T10:19:53.750464+0000 mon.a (mon.0) 2171 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: audit 2026-03-10T10:19:53.765321+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:54.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:53 vm04 bash[20742]: audit 2026-03-10T10:19:53.765321+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: cluster 2026-03-10T10:19:52.425462+0000 mgr.y (mgr.24422) 254 : cluster [DBG] pgmap v365: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: cluster 2026-03-10T10:19:52.425462+0000 mgr.y (mgr.24422) 254 : cluster [DBG] pgmap v365: 292 pgs: 32 unknown, 260 active+clean; 4.4 MiB data, 687 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: audit 2026-03-10T10:19:52.949609+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: audit 2026-03-10T10:19:52.949609+0000 mon.a (mon.0) 2168 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: audit 2026-03-10T10:19:53.739842+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: audit 2026-03-10T10:19:53.739842+0000 mon.a (mon.0) 2169 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosListEC_vm04-59366-2", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: audit 2026-03-10T10:19:53.740027+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: audit 2026-03-10T10:19:53.740027+0000 mon.a (mon.0) 2170 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: cluster 2026-03-10T10:19:53.750464+0000 mon.a (mon.0) 2171 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: cluster 2026-03-10T10:19:53.750464+0000 mon.a (mon.0) 2171 : cluster [DBG] osdmap e273: 8 total, 8 up, 8 in 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: audit 2026-03-10T10:19:53.765321+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:53 vm07 bash[23367]: audit 2026-03-10T10:19:53.765321+0000 mon.a (mon.0) 2172 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39"}]: dispatch 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout:6 expected=6 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:bd63b0f1:::8:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:bd63b0f1:::8:head expected=13:bd63b0f1:::8:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:bd63b0f1:::8:head -> 8 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=8 expected=8 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:02547ec2:::1:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:02547ec2:::1:head expected=13:02547ec2:::1:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:02547ec2:::1:head -> 1 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=1 expected=1 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:863748b0:::15:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:863748b0:::15:head expected=13:863748b0:::15:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:863748b0:::15:head -> 15 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=15 expected=15 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:6cac518f:::0:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:6cac518f:::0:head expected=13:6cac518f:::0:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:6cac518f:::0:head -> 0 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=0 expected=0 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:62a1935d:::14:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:62a1935d:::14:head expected=13:62a1935d:::14:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:62a1935d:::14:head -> 14 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=14 expected=14 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:5c6b0b28:::7:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:5c6b0b28:::7:head expected=13:5c6b0b28:::7:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:5c6b0b28:::7:head -> 7 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=7 expected=7 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:566253c9:::13:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:566253c9:::13:head expected=13:566253c9:::13:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:566253c9:::13:head -> 13 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=13 expected=13 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : seek to 13:52ea6a34:::10:head 2026-03-10T10:19:54.879 
INFO:tasks.workunit.client.0.vm04.stdout: api_list: : cursor()=13:52ea6a34:::10:head expected=13:52ea6a34:::10:head 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: > 13:52ea6a34:::10:head -> 10 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: : entry=10 expected=10 2026-03-10T10:19:54.879 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ OK ] LibRadosList.ListObjectsCursor (237 ms) 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjects 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ OK ] LibRadosList.EnumerateObjects (70441 ms) 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN ] LibRadosList.EnumerateObjectsSplit 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: split 0/5 -> MIN 13:33333333::::head 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: split 1/5 -> 13:33333333::::head 13:66666666::::head 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: split 2/5 -> 13:66666666::::head 13:99999999::::head 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: split 3/5 -> 13:99999999::::head 13:cccccccc::::head 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: split 4/5 -> 13:cccccccc::::head MAX 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ OK ] LibRadosList.EnumerateObjectsSplit (135947 ms) 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [----------] 7 tests from LibRadosList (207986 ms total) 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [----------] 3 tests from LibRadosListEC 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN ] LibRadosListEC.ListObjects 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ OK ] LibRadosListEC.ListObjects (1067 ms) 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN ] LibRadosListEC.ListObjectsNS 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: myset foo1,foo2,foo3 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo1 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo2 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo3 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: myset foo1,foo4,foo5 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo4 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo5 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo1 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: myset foo6,foo7 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo7 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: foo6 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: myset :foo1,:foo2,:foo3,ns1:foo1,ns1:foo4,ns1:foo5,ns2:foo6,ns2:foo7 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: ns1:foo4 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: ns1:foo5 2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: ns2:foo7 
2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: ns2:foo6
2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: ns1:foo1
2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: :foo1
2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: :foo2
2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: :foo3
2026-03-10T10:19:54.880 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [       OK ] LibRadosListEC.ListObjectsNS (53 ms)
2026-03-10T10:19:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:54 vm04 bash[28289]: cluster 2026-03-10T10:19:54.739869+0000 mon.a (mon.0) 2173 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:54 vm04 bash[28289]: audit 2026-03-10T10:19:54.743292+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39"}]': finished
2026-03-10T10:19:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:54 vm04 bash[28289]: cluster 2026-03-10T10:19:54.761753+0000 mon.a (mon.0) 2175 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in
2026-03-10T10:19:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:54 vm04 bash[28289]: audit 2026-03-10T10:19:54.764580+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch
2026-03-10T10:19:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:54 vm04 bash[28289]: audit 2026-03-10T10:19:54.765119+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch
2026-03-10T10:19:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:54 vm04 bash[20742]: cluster 2026-03-10T10:19:54.739869+0000 mon.a (mon.0) 2173 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:54 vm04 bash[20742]: audit 2026-03-10T10:19:54.743292+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39"}]': finished
2026-03-10T10:19:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:54 vm04 bash[20742]: cluster 2026-03-10T10:19:54.761753+0000 mon.a (mon.0) 2175 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in
2026-03-10T10:19:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:54 vm04 bash[20742]: audit 2026-03-10T10:19:54.764580+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch
2026-03-10T10:19:55.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:54 vm04 bash[20742]: audit 2026-03-10T10:19:54.765119+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch
2026-03-10T10:19:55.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:54 vm07 bash[23367]: cluster 2026-03-10T10:19:54.739869+0000 mon.a (mon.0) 2173 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:19:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:54 vm07 bash[23367]: audit 2026-03-10T10:19:54.743292+0000 mon.a (mon.0) 2174 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-39"}]': finished
2026-03-10T10:19:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:54 vm07 bash[23367]: cluster 2026-03-10T10:19:54.761753+0000 mon.a (mon.0) 2175 : cluster [DBG] osdmap e274: 8 total, 8 up, 8 in
2026-03-10T10:19:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:54 vm07 bash[23367]: audit 2026-03-10T10:19:54.764580+0000 mon.c (mon.2) 404 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch
2026-03-10T10:19:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:54 vm07 bash[23367]: audit 2026-03-10T10:19:54.765119+0000 mon.a (mon.0) 2176 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]: dispatch
2026-03-10T10:19:55.755 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN      ] LibRadosListEC api_tier_pp: [==========] Running 77 tests from 4 test suites.
2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [----------] Global test environment set-up.
2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: seed 59491 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.Dirty 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTierPP.Dirty (532 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.FlushWriteRaces 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTierPP.FlushWriteRaces (10341 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTierPP.HitSetNone 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTierPP.HitSetNone (28 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [----------] 3 tests from LibRadosTierPP (10901 ms total) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Overlay 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Overlay (6873 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Promote 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Promote (7959 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnap 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnap (10275 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapScrub 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: my_snaps [3] 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: my_snaps [4,3] 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: my_snaps [5,4,3] 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: my_snaps [6,5,4,3] 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: promoting some heads 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: promoting from clones for snap 6 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: promoting from clones for snap 5 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: promoting from clones for snap 4 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: promoting from clones for snap 3 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: waiting for scrubs... 
2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: done waiting 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapScrub (46398 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteSnapTrimRace 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteSnapTrimRace (10461 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Whiteout 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Whiteout (8121 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.WhiteoutDeleteCreate (8274 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Evict 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Evict (8662 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap (10244 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnap2 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnap2 (9243 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ListSnap 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ListSnap (11255 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.EvictSnapRollbackReadRace (14740 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlush 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlush (8860 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.Flush 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.Flush (9399 ms) 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushSnap 2026-03-10T10:19:55.756 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushSnap (12648 ms) 2026-03-10T10:19:55.757 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.FlushTryFlushRaces 2026-03-10T10:19:55.757 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.FlushTryFlushRaces (7755 ms) 2026-03-10T10:19:55.757 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TryFlushReadRace 2026-03-10T10:19:55.757 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TryFlushReadRace (8194 ms) 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: cluster 
2026-03-10T10:19:54.425827+0000 mgr.y (mgr.24422) 255 : cluster [DBG] pgmap v368: 308 pgs: 8 unknown, 8 creating+peering, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 181 op/s 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: cluster 2026-03-10T10:19:54.425827+0000 mgr.y (mgr.24422) 255 : cluster [DBG] pgmap v368: 308 pgs: 8 unknown, 8 creating+peering, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 181 op/s 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: cluster 2026-03-10T10:19:54.822398+0000 mon.a (mon.0) 2177 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: cluster 2026-03-10T10:19:54.822398+0000 mon.a (mon.0) 2177 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: audit 2026-03-10T10:19:55.746870+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: audit 2026-03-10T10:19:55.746870+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: audit 2026-03-10T10:19:55.752547+0000 mon.c (mon.2) 405 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: audit 2026-03-10T10:19:55.752547+0000 mon.c (mon.2) 405 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: cluster 2026-03-10T10:19:55.753313+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: cluster 2026-03-10T10:19:55.753313+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: audit 2026-03-10T10:19:55.756619+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: audit 2026-03-10T10:19:55.756619+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: audit 2026-03-10T10:19:55.761982+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 
192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: audit 2026-03-10T10:19:55.761982+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: audit 2026-03-10T10:19:55.762683+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:55 vm04 bash[28289]: audit 2026-03-10T10:19:55.762683+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: cluster 2026-03-10T10:19:54.425827+0000 mgr.y (mgr.24422) 255 : cluster [DBG] pgmap v368: 308 pgs: 8 unknown, 8 creating+peering, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 181 op/s 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: cluster 2026-03-10T10:19:54.425827+0000 mgr.y (mgr.24422) 255 : cluster [DBG] pgmap v368: 308 pgs: 8 unknown, 8 creating+peering, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 181 op/s 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: cluster 2026-03-10T10:19:54.822398+0000 mon.a (mon.0) 2177 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: cluster 2026-03-10T10:19:54.822398+0000 mon.a (mon.0) 2177 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: audit 2026-03-10T10:19:55.746870+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: audit 2026-03-10T10:19:55.746870+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: audit 2026-03-10T10:19:55.752547+0000 mon.c (mon.2) 405 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: audit 2026-03-10T10:19:55.752547+0000 mon.c (mon.2) 405 : audit [INF] from='client.? 
192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: cluster 2026-03-10T10:19:55.753313+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T10:19:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: cluster 2026-03-10T10:19:55.753313+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T10:19:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: audit 2026-03-10T10:19:55.756619+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: audit 2026-03-10T10:19:55.756619+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: audit 2026-03-10T10:19:55.761982+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: audit 2026-03-10T10:19:55.761982+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: audit 2026-03-10T10:19:55.762683+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:55 vm04 bash[20742]: audit 2026-03-10T10:19:55.762683+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: cluster 2026-03-10T10:19:54.425827+0000 mgr.y (mgr.24422) 255 : cluster [DBG] pgmap v368: 308 pgs: 8 unknown, 8 creating+peering, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 181 op/s 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: cluster 2026-03-10T10:19:54.425827+0000 mgr.y (mgr.24422) 255 : cluster [DBG] pgmap v368: 308 pgs: 8 unknown, 8 creating+peering, 292 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 181 op/s 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: cluster 2026-03-10T10:19:54.822398+0000 mon.a (mon.0) 2177 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: cluster 2026-03-10T10:19:54.822398+0000 mon.a (mon.0) 2177 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: audit 2026-03-10T10:19:55.746870+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: audit 2026-03-10T10:19:55.746870+0000 mon.a (mon.0) 2178 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: audit 2026-03-10T10:19:55.752547+0000 mon.c (mon.2) 405 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: audit 2026-03-10T10:19:55.752547+0000 mon.c (mon.2) 405 : audit [INF] from='client.? 192.168.123.104:0/3216946149' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: cluster 2026-03-10T10:19:55.753313+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: cluster 2026-03-10T10:19:55.753313+0000 mon.a (mon.0) 2179 : cluster [DBG] osdmap e275: 8 total, 8 up, 8 in 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: audit 2026-03-10T10:19:55.756619+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: audit 2026-03-10T10:19:55.756619+0000 mon.a (mon.0) 2180 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]: dispatch 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: audit 2026-03-10T10:19:55.761982+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: audit 2026-03-10T10:19:55.761982+0000 mon.c (mon.2) 406 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: audit 2026-03-10T10:19:55.762683+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:55 vm07 bash[23367]: audit 2026-03-10T10:19:55.762683+0000 mon.a (mon.0) 2181 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: cluster 2026-03-10T10:19:56.426178+0000 mgr.y (mgr.24422) 256 : cluster [DBG] pgmap v371: 260 pgs: 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T10:19:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: cluster 2026-03-10T10:19:56.426178+0000 mgr.y (mgr.24422) 256 : cluster [DBG] pgmap v371: 260 pgs: 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.783935+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.783935+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.784195+0000 mon.a (mon.0) 2183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.784195+0000 mon.a (mon.0) 2183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.791548+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 
192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.791548+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: cluster 2026-03-10T10:19:56.792619+0000 mon.a (mon.0) 2184 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: cluster 2026-03-10T10:19:56.792619+0000 mon.a (mon.0) 2184 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.809385+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.809385+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.809826+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.809826+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.816156+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.816156+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.817207+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.817207+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.817587+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.817587+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.817849+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.817849+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.818238+0000 mon.c (mon.2) 410 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.818238+0000 mon.c (mon.2) 410 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.818505+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:57 vm04 bash[28289]: audit 2026-03-10T10:19:56.818505+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: cluster 2026-03-10T10:19:56.426178+0000 mgr.y (mgr.24422) 256 : cluster [DBG] pgmap v371: 260 pgs: 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: cluster 2026-03-10T10:19:56.426178+0000 mgr.y (mgr.24422) 256 : cluster [DBG] pgmap v371: 260 pgs: 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.783935+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.783935+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.784195+0000 mon.a (mon.0) 2183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.784195+0000 mon.a (mon.0) 2183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:58.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.791548+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.791548+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: cluster 2026-03-10T10:19:56.792619+0000 mon.a (mon.0) 2184 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: cluster 2026-03-10T10:19:56.792619+0000 mon.a (mon.0) 2184 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.809385+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.809385+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.809826+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.809826+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.816156+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.816156+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.817207+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.817207+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.817587+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.817587+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.817849+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.817849+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.818238+0000 mon.c (mon.2) 410 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.818238+0000 mon.c (mon.2) 410 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.818505+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:57 vm04 bash[20742]: audit 2026-03-10T10:19:56.818505+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: cluster 2026-03-10T10:19:56.426178+0000 mgr.y (mgr.24422) 256 : cluster [DBG] pgmap v371: 260 pgs: 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: cluster 2026-03-10T10:19:56.426178+0000 mgr.y (mgr.24422) 256 : cluster [DBG] pgmap v371: 260 pgs: 260 active+clean; 8.3 MiB data, 692 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.783935+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.783935+0000 mon.a (mon.0) 2182 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"IsSafePP_vm04-59259-55"}]': finished 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.784195+0000 mon.a (mon.0) 2183 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.784195+0000 mon.a (mon.0) 2183 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.791548+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.791548+0000 mon.c (mon.2) 407 : audit [INF] from='client.? 192.168.123.104:0/3838145506' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: cluster 2026-03-10T10:19:56.792619+0000 mon.a (mon.0) 2184 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: cluster 2026-03-10T10:19:56.792619+0000 mon.a (mon.0) 2184 : cluster [DBG] osdmap e276: 8 total, 8 up, 8 in 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.809385+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.809385+0000 mon.a (mon.0) 2185 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.809826+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.809826+0000 mon.a (mon.0) 2186 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-41","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.816156+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.816156+0000 mon.c (mon.2) 408 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.817207+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.817207+0000 mon.a (mon.0) 2187 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.817587+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.817587+0000 mon.c (mon.2) 409 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.817849+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.817849+0000 mon.a (mon.0) 2188 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.818238+0000 mon.c (mon.2) 410 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.818238+0000 mon.c (mon.2) 410 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.818505+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:57 vm07 bash[23367]: audit 2026-03-10T10:19:56.818505+0000 mon.a (mon.0) 2189 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:19:58.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:19:58 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.787526+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.787526+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]': finished 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.787593+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.787593+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-41","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.788077+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.788077+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: cluster 2026-03-10T10:19:57.795582+0000 mon.a (mon.0) 2193 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: cluster 2026-03-10T10:19:57.795582+0000 mon.a (mon.0) 2193 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.802274+0000 mon.a (mon.0) 2194 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.802274+0000 mon.a (mon.0) 2194 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.824619+0000 mon.c (mon.2) 411 : audit [INF] from='client.? 
192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm04-59259-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.836106+0000 mon.a (mon.0) 2195 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:57.865841+0000 mon.a (mon.0) 2196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm04-59259-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: audit 2026-03-10T10:19:58.827408+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:58 vm04 bash[28289]: cluster 2026-03-10T10:19:58.859821+0000 mon.a (mon.0) 2198 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in
2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:58 vm04 bash[20742]: audit 2026-03-10T10:19:57.787526+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]': finished
2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:58 vm04 bash[20742]: audit 2026-03-10T10:19:57.787593+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:58 vm04 bash[20742]: audit 2026-03-10T10:19:57.788077+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:59.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:58 vm04 bash[20742]: cluster 2026-03-10T10:19:57.795582+0000 mon.a (mon.0) 2193 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in
2026-03-10T10:19:59.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:58 vm04 bash[20742]: audit 2026-03-10T10:19:57.802274+0000 mon.a (mon.0) 2194 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:59.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:58 vm04 bash[20742]: audit 2026-03-10T10:19:57.824619+0000 mon.c (mon.2) 411 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm04-59259-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:19:59.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:58 vm04 bash[20742]: audit 2026-03-10T10:19:57.836106+0000 mon.a (mon.0) 2195 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:19:59.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:58 vm04 bash[20742]: audit 2026-03-10T10:19:57.865841+0000 mon.a (mon.0) 2196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm04-59259-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:19:59.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:58 vm04 bash[20742]: audit 2026-03-10T10:19:58.827408+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:19:59.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:58 vm04 bash[20742]: cluster 2026-03-10T10:19:58.859821+0000 mon.a (mon.0) 2198 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in
2026-03-10T10:19:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:58 vm07 bash[23367]: audit 2026-03-10T10:19:57.787526+0000 mon.a (mon.0) 2190 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosListEC_vm04-59366-2"}]': finished
2026-03-10T10:19:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:58 vm07 bash[23367]: audit 2026-03-10T10:19:57.787593+0000 mon.a (mon.0) 2191 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-41","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:19:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:58 vm07 bash[23367]: audit 2026-03-10T10:19:57.788077+0000 mon.a (mon.0) 2192 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ReturnValuePP_vm04-59259-56", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:19:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:58 vm07 bash[23367]: cluster 2026-03-10T10:19:57.795582+0000 mon.a (mon.0) 2193 : cluster [DBG] osdmap e277: 8 total, 8 up, 8 in
2026-03-10T10:19:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:58 vm07 bash[23367]: audit 2026-03-10T10:19:57.802274+0000 mon.a (mon.0) 2194 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:19:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:58 vm07 bash[23367]: audit 2026-03-10T10:19:57.824619+0000 mon.c (mon.2) 411 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm04-59259-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:19:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:58 vm07 bash[23367]: audit 2026-03-10T10:19:57.836106+0000 mon.a (mon.0) 2195 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:19:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:58 vm07 bash[23367]: audit 2026-03-10T10:19:57.865841+0000 mon.a (mon.0) 2196 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm04-59259-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:19:59.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:58 vm07 bash[23367]: audit 2026-03-10T10:19:58.827408+0000 mon.a (mon.0) 2197 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41", "force_nonempty": "--force-nonempty" }]': finished
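The erasure-pool setup audited above maps directly onto the plain ceph CLI. A minimal sketch of the same sequence, reusing the profile and pool names from this run (the ReturnValuePP test issues these through librados rather than the shell):

    ceph osd erasure-code-profile set testprofile-ReturnValuePP_vm04-59259-56 k=2 m=1 crush-failure-domain=osd
    ceph osd pool create ReturnValuePP_vm04-59259-56 8 8 erasure testprofile-ReturnValuePP_vm04-59259-56

Creating an erasure-coded pool this way also creates a CRUSH rule named after the pool, which is why the teardown later in this log removes a rule called ReturnValuePP_vm04-59259-56 alongside the profile.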
2026-03-10T10:19:59.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:58 vm07 bash[23367]: cluster 2026-03-10T10:19:58.859821+0000 mon.a (mon.0) 2198 : cluster [DBG] osdmap e278: 8 total, 8 up, 8 in
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:59 vm04 bash[28289]: audit 2026-03-10T10:19:58.380893+0000 mgr.y (mgr.24422) 257 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:59 vm04 bash[28289]: cluster 2026-03-10T10:19:58.426759+0000 mgr.y (mgr.24422) 258 : cluster [DBG] pgmap v374: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:59 vm04 bash[28289]: audit 2026-03-10T10:19:58.862415+0000 mon.a (mon.0) 2199 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:59 vm04 bash[28289]: audit 2026-03-10T10:19:58.863115+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59366-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:59 vm04 bash[28289]: audit 2026-03-10T10:19:59.831258+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm04-59259-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm04-59259-56"}]': finished
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:59 vm04 bash[28289]: audit 2026-03-10T10:19:59.831336+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_count","val": "2"}]': finished
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:59 vm04 bash[28289]: audit 2026-03-10T10:19:59.831459+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59366-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:59 vm04 bash[28289]: cluster 2026-03-10T10:19:59.835018+0000 mon.a (mon.0) 2204 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:19:59 vm04 bash[28289]: audit 2026-03-10T10:19:59.835542+0000 mon.a (mon.0) 2205 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:59 vm04 bash[20742]: audit 2026-03-10T10:19:58.380893+0000 mgr.y (mgr.24422) 257 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:20:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:59 vm04 bash[20742]: cluster 2026-03-10T10:19:58.426759+0000 mgr.y (mgr.24422) 258 : cluster [DBG] pgmap v374: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:20:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:59 vm04 bash[20742]: audit 2026-03-10T10:19:58.862415+0000 mon.a (mon.0) 2199 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T10:20:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:59 vm04 bash[20742]: audit 2026-03-10T10:19:58.863115+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59366-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:59 vm04 bash[20742]: audit 2026-03-10T10:19:59.831258+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm04-59259-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm04-59259-56"}]': finished
2026-03-10T10:20:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:59 vm04 bash[20742]: audit 2026-03-10T10:19:59.831336+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_count","val": "2"}]': finished
2026-03-10T10:20:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:59 vm04 bash[20742]: audit 2026-03-10T10:19:59.831459+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59366-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:59 vm04 bash[20742]: cluster 2026-03-10T10:19:59.835018+0000 mon.a (mon.0) 2204 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in
2026-03-10T10:20:00.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:19:59 vm04 bash[20742]: audit 2026-03-10T10:19:59.835542+0000 mon.a (mon.0) 2205 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T10:20:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:59 vm07 bash[23367]: audit 2026-03-10T10:19:58.380893+0000 mgr.y (mgr.24422) 257 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:20:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:59 vm07 bash[23367]: cluster 2026-03-10T10:19:58.426759+0000 mgr.y (mgr.24422) 258 : cluster [DBG] pgmap v374: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:20:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:59 vm07 bash[23367]: audit 2026-03-10T10:19:58.862415+0000 mon.a (mon.0) 2199 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T10:20:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:59 vm07 bash[23367]: audit 2026-03-10T10:19:58.863115+0000 mon.a (mon.0) 2200 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59366-3","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:59 vm07 bash[23367]: audit 2026-03-10T10:19:59.831258+0000 mon.a (mon.0) 2201 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ReturnValuePP_vm04-59259-56", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ReturnValuePP_vm04-59259-56"}]': finished
2026-03-10T10:20:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:59 vm07 bash[23367]: audit 2026-03-10T10:19:59.831336+0000 mon.a (mon.0) 2202 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_count","val": "2"}]': finished
2026-03-10T10:20:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:59 vm07 bash[23367]: audit 2026-03-10T10:19:59.831459+0000 mon.a (mon.0) 2203 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59366-3","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:59 vm07 bash[23367]: cluster 2026-03-10T10:19:59.835018+0000 mon.a (mon.0) 2204 : cluster [DBG] osdmap e279: 8 total, 8 up, 8 in
2026-03-10T10:20:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:19:59 vm07 bash[23367]: audit 2026-03-10T10:19:59.835542+0000 mon.a (mon.0) 2205 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitS.ListObjectsStart
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 1 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 10 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 13 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 7 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 14 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 0 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 15 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 11 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 5 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 8 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 6 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 3 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 4 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 12 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 9 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: 2 0
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: have 1 expect one of 0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ OK ] LibRadosListEC.ListObjectsStart (117 ms)
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [----------] 3 tests from LibRadosListEC (1237 ms total)
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list:
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [----------] 1 test from LibRadosListNP
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ RUN ] LibRadosListNP.ListObjectsError
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ OK ] LibRadosListNP.ListObjectsError (2995 ms)
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [----------] 1 test from LibRadosListNP (2995 ms total)
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list:
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [----------] Global test environment tear-down
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [==========] 11 tests from 3 test suites ran. (220631 ms total)
2026-03-10T10:20:00.865 INFO:tasks.workunit.client.0.vm04.stdout: api_list: [ PASSED ] 11 tests.
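The osd pool set entries around this point are the cache-tier HitSet knobs that the LibRadosTwoPoolsPP hit-set tests exercise. A rough CLI equivalent of the audited mon commands, with the base and cache pool names taken from this run:

    ceph osd tier add test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-41 --force-nonempty
    ceph osd pool set test-rados-api-vm04-59491-41 hit_set_count 2
    ceph osd pool set test-rados-api-vm04-59491-41 hit_set_period 600
    ceph osd pool set test-rados-api-vm04-59491-41 hit_set_type explicit_object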
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: audit 2026-03-10T10:19:59.883743+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm04-59366-3","pool2":"test-rados-api-vm04-59366-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: cluster 2026-03-10T10:20:00.000132+0000 mon.a (mon.0) 2207 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: cluster 2026-03-10T10:20:00.000154+0000 mon.a (mon.0) 2208 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: cluster 2026-03-10T10:20:00.000162+0000 mon.a (mon.0) 2209 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: cluster 2026-03-10T10:20:00.000168+0000 mon.a (mon.0) 2210 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1'
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: cluster 2026-03-10T10:20:00.000182+0000 mon.a (mon.0) 2211 : cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1'
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: cluster 2026-03-10T10:20:00.000207+0000 mon.a (mon.0) 2212 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: audit 2026-03-10T10:20:00.836617+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_period","val": "600"}]': finished
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: audit 2026-03-10T10:20:00.836696+0000 mon.a (mon.0) 2214 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm04-59366-3","pool2":"test-rados-api-vm04-59366-3","yes_i_really_really_mean_it_not_faking": true}]': finished
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: cluster 2026-03-10T10:20:00.841526+0000 mon.a (mon.0) 2215 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:00 vm04 bash[28289]: audit 2026-03-10T10:20:00.861878+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T10:20:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: audit 2026-03-10T10:19:59.883743+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm04-59366-3","pool2":"test-rados-api-vm04-59366-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch
2026-03-10T10:20:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: cluster 2026-03-10T10:20:00.000132+0000 mon.a (mon.0) 2207 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
2026-03-10T10:20:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: cluster 2026-03-10T10:20:00.000154+0000 mon.a (mon.0) 2208 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
2026-03-10T10:20:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: cluster 2026-03-10T10:20:00.000162+0000 mon.a (mon.0) 2209 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'
2026-03-10T10:20:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: cluster 2026-03-10T10:20:00.000168+0000 mon.a (mon.0) 2210 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1'
2026-03-10T10:20:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: cluster 2026-03-10T10:20:00.000182+0000 mon.a (mon.0) 2211 : cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1'
2026-03-10T10:20:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: cluster 2026-03-10T10:20:00.000207+0000 mon.a (mon.0) 2212 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-10T10:20:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: audit 2026-03-10T10:20:00.836617+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_period","val": "600"}]': finished
2026-03-10T10:20:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: audit 2026-03-10T10:20:00.836696+0000 mon.a (mon.0) 2214 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm04-59366-3","pool2":"test-rados-api-vm04-59366-3","yes_i_really_really_mean_it_not_faking": true}]': finished
2026-03-10T10:20:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: cluster 2026-03-10T10:20:00.841526+0000 mon.a (mon.0) 2215 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in
2026-03-10T10:20:01.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:00 vm04 bash[20742]: audit 2026-03-10T10:20:00.861878+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: audit 2026-03-10T10:19:59.883743+0000 mon.a (mon.0) 2206 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd=[{"prefix":"osd pool rm","pool": "test-rados-api-vm04-59366-3","pool2":"test-rados-api-vm04-59366-3","yes_i_really_really_mean_it_not_faking": true}]: dispatch
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: cluster 2026-03-10T10:20:00.000132+0000 mon.a (mon.0) 2207 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: cluster 2026-03-10T10:20:00.000154+0000 mon.a (mon.0) 2208 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: cluster 2026-03-10T10:20:00.000162+0000 mon.a (mon.0) 2209 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: cluster 2026-03-10T10:20:00.000168+0000 mon.a (mon.0) 2210 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1'
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: cluster 2026-03-10T10:20:00.000182+0000 mon.a (mon.0) 2211 : cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1'
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: cluster 2026-03-10T10:20:00.000207+0000 mon.a (mon.0) 2212 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: audit 2026-03-10T10:20:00.836617+0000 mon.a (mon.0) 2213 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_period","val": "600"}]': finished
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: audit 2026-03-10T10:20:00.836696+0000 mon.a (mon.0) 2214 : audit [INF] from='client.? 192.168.123.104:0/272171802' entity='client.admin' cmd='[{"prefix":"osd pool rm","pool": "test-rados-api-vm04-59366-3","pool2":"test-rados-api-vm04-59366-3","yes_i_really_really_mean_it_not_faking": true}]': finished
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: cluster 2026-03-10T10:20:00.841526+0000 mon.a (mon.0) 2215 : cluster [DBG] osdmap e280: 8 total, 8 up, 8 in
2026-03-10T10:20:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:00 vm07 bash[23367]: audit 2026-03-10T10:20:00.861878+0000 mon.a (mon.0) 2216 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_type","val": "explicit_object"}]: dispatch
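The POOL_APP_NOT_ENABLED warnings above are churn from the rados_api_tests workunit: the tests create short-lived pools without tagging an application, so the monitor flags each pool until the test enables an application on it or removes it. Clearing such a warning by hand is one command per pool, e.g. for the first pool named in the detail messages:

    ceph osd pool application enable ceph_test_rados_api_asio rados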
2026-03-10T10:20:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:01 vm04 bash[28289]: cluster 2026-03-10T10:20:00.427198+0000 mgr.y (mgr.24422) 259 : cluster [DBG] pgmap v377: 332 pgs: 40 unknown, 292 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:02.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:01 vm04 bash[28289]: cluster 2026-03-10T10:20:00.872073+0000 mon.a (mon.0) 2217 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:02.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:01 vm04 bash[28289]: audit 2026-03-10T10:20:01.851823+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_type","val": "explicit_object"}]': finished
2026-03-10T10:20:02.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:01 vm04 bash[28289]: cluster 2026-03-10T10:20:01.858782+0000 mon.a (mon.0) 2219 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in
2026-03-10T10:20:02.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:01 vm04 bash[28289]: audit 2026-03-10T10:20:01.859514+0000 mon.c (mon.2) 412 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:20:02.205 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:01 vm04 bash[28289]: audit 2026-03-10T10:20:01.860652+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:20:02.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:01 vm04 bash[20742]: cluster 2026-03-10T10:20:00.427198+0000 mgr.y (mgr.24422) 259 : cluster [DBG] pgmap v377: 332 pgs: 40 unknown, 292 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:02.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:01 vm04 bash[20742]: cluster 2026-03-10T10:20:00.872073+0000 mon.a (mon.0) 2217 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:02.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:01 vm04 bash[20742]: audit 2026-03-10T10:20:01.851823+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_type","val": "explicit_object"}]': finished
2026-03-10T10:20:02.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:01 vm04 bash[20742]: cluster 2026-03-10T10:20:01.858782+0000 mon.a (mon.0) 2219 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in
2026-03-10T10:20:02.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:01 vm04 bash[20742]: audit 2026-03-10T10:20:01.859514+0000 mon.c (mon.2) 412 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:20:02.206 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:01 vm04 bash[20742]: audit 2026-03-10T10:20:01.860652+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:20:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:01 vm07 bash[23367]: cluster 2026-03-10T10:20:00.427198+0000 mgr.y (mgr.24422) 259 : cluster [DBG] pgmap v377: 332 pgs: 40 unknown, 292 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:01 vm07 bash[23367]: cluster 2026-03-10T10:20:00.872073+0000 mon.a (mon.0) 2217 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:01 vm07 bash[23367]: audit 2026-03-10T10:20:01.851823+0000 mon.a (mon.0) 2218 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-41","var": "hit_set_type","val": "explicit_object"}]': finished
2026-03-10T10:20:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:01 vm07 bash[23367]: cluster 2026-03-10T10:20:01.858782+0000 mon.a (mon.0) 2219 : cluster [DBG] osdmap e281: 8 total, 8 up, 8 in
2026-03-10T10:20:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:01 vm07 bash[23367]: audit 2026-03-10T10:20:01.859514+0000 mon.c (mon.2) 412 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:20:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:01 vm07 bash[23367]: audit 2026-03-10T10:20:01.860652+0000 mon.a (mon.0) 2220 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:20:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:20:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:20:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:20:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:03 vm07 bash[23367]: cluster 2026-03-10T10:20:02.427513+0000 mgr.y (mgr.24422) 260 : cluster [DBG] pgmap v380: 292 pgs: 292 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:03 vm07 bash[23367]: audit 2026-03-10T10:20:02.854551+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]': finished
2026-03-10T10:20:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:03 vm07 bash[23367]: cluster 2026-03-10T10:20:02.858332+0000 mon.a (mon.0) 2222 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in
2026-03-10T10:20:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:03 vm07 bash[23367]: audit 2026-03-10T10:20:02.861281+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:20:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:03 vm07 bash[23367]: audit 2026-03-10T10:20:02.863032+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch
2026-03-10T10:20:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:03 vm07 bash[23367]: audit 2026-03-10T10:20:02.898050+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:03 vm07 bash[23367]: audit 2026-03-10T10:20:02.898329+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]: dispatch
2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: cluster 2026-03-10T10:20:02.427513+0000 mgr.y (mgr.24422) 260 : cluster [DBG] pgmap v380: 292 pgs: 292 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: audit 2026-03-10T10:20:02.854551+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]': finished
2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: audit 2026-03-10T10:20:02.854551+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]': finished 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: cluster 2026-03-10T10:20:02.858332+0000 mon.a (mon.0) 2222 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: cluster 2026-03-10T10:20:02.858332+0000 mon.a (mon.0) 2222 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: audit 2026-03-10T10:20:02.861281+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: audit 2026-03-10T10:20:02.861281+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: audit 2026-03-10T10:20:02.863032+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: audit 2026-03-10T10:20:02.863032+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: audit 2026-03-10T10:20:02.898050+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: audit 2026-03-10T10:20:02.898050+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: audit 2026-03-10T10:20:02.898329+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]: dispatch 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:03 vm04 bash[28289]: audit 2026-03-10T10:20:02.898329+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]: dispatch 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:03 vm04 bash[20742]: cluster 2026-03-10T10:20:02.427513+0000 mgr.y (mgr.24422) 260 : cluster [DBG] pgmap v380: 292 pgs: 292 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:03 vm04 bash[20742]: cluster 2026-03-10T10:20:02.427513+0000 mgr.y (mgr.24422) 260 : cluster [DBG] pgmap v380: 292 pgs: 292 active+clean; 8.3 MiB data, 704 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:03 vm04 bash[20742]: audit 2026-03-10T10:20:02.854551+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]': finished 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:03 vm04 bash[20742]: audit 2026-03-10T10:20:02.854551+0000 mon.a (mon.0) 2221 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ReturnValuePP_vm04-59259-56"}]': finished 2026-03-10T10:20:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:03 vm04 bash[20742]: cluster 2026-03-10T10:20:02.858332+0000 mon.a (mon.0) 2222 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-10T10:20:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:03 vm04 bash[20742]: cluster 2026-03-10T10:20:02.858332+0000 mon.a (mon.0) 2222 : cluster [DBG] osdmap e282: 8 total, 8 up, 8 in 2026-03-10T10:20:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:03 vm04 bash[20742]: audit 2026-03-10T10:20:02.861281+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:20:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:03 vm04 bash[20742]: audit 2026-03-10T10:20:02.861281+0000 mon.c (mon.2) 413 : audit [INF] from='client.? 192.168.123.104:0/330717010' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:20:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:04 vm04 bash[20742]: audit 2026-03-10T10:20:02.863032+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:20:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:04 vm04 bash[20742]: audit 2026-03-10T10:20:02.863032+0000 mon.a (mon.0) 2223 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]: dispatch 2026-03-10T10:20:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:04 vm04 bash[20742]: audit 2026-03-10T10:20:02.898050+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:20:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:04 vm04 bash[20742]: audit 2026-03-10T10:20:02.898050+0000 mon.a (mon.0) 2224 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:20:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:04 vm04 bash[20742]: audit 2026-03-10T10:20:02.898329+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]: dispatch 2026-03-10T10:20:04.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:04 vm04 bash[20742]: audit 2026-03-10T10:20:02.898329+0000 mon.a (mon.0) 2225 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:03.976688+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]': finished 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:03.976688+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]': finished 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:03.976783+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]': finished 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:03.976783+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]': finished 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: cluster 2026-03-10T10:20:03.980628+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: cluster 2026-03-10T10:20:03.980628+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.034149+0000 mon.c (mon.2) 414 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.034149+0000 mon.c (mon.2) 414 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.034806+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.034806+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.035932+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.035932+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.036448+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.036448+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.036938+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.036938+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.037334+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:04 vm07 bash[23367]: audit 2026-03-10T10:20:04.037334+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:03.976688+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]': finished 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:03.976688+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]': finished 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:03.976783+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]': finished 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:03.976783+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]': finished 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: cluster 2026-03-10T10:20:03.980628+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: cluster 2026-03-10T10:20:03.980628+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.034149+0000 mon.c (mon.2) 414 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.034149+0000 mon.c (mon.2) 414 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.034806+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.034806+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.035932+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.035932+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 
192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.036448+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.036448+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.036938+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.036938+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.037334+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:04 vm04 bash[28289]: audit 2026-03-10T10:20:04.037334+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:03.976688+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]': finished 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:03.976688+0000 mon.a (mon.0) 2226 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ReturnValuePP_vm04-59259-56"}]': finished 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:03.976783+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]': finished 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:03.976783+0000 mon.a (mon.0) 2227 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-41"}]': finished 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: cluster 2026-03-10T10:20:03.980628+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: cluster 2026-03-10T10:20:03.980628+0000 mon.a (mon.0) 2228 : cluster [DBG] osdmap e283: 8 total, 8 up, 8 in 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.034149+0000 mon.c (mon.2) 414 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.034149+0000 mon.c (mon.2) 414 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.034806+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.034806+0000 mon.a (mon.0) 2229 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.035932+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.035932+0000 mon.c (mon.2) 415 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.036448+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.036448+0000 mon.a (mon.0) 2230 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.036938+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 
192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.036938+0000 mon.c (mon.2) 416 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.037334+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:05.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:05 vm04 bash[20742]: audit 2026-03-10T10:20:04.037334+0000 mon.a (mon.0) 2231 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:06 vm07 bash[23367]: cluster 2026-03-10T10:20:04.428005+0000 mgr.y (mgr.24422) 261 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:06 vm07 bash[23367]: cluster 2026-03-10T10:20:04.428005+0000 mgr.y (mgr.24422) 261 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:06 vm07 bash[23367]: audit 2026-03-10T10:20:05.007299+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:06 vm07 bash[23367]: audit 2026-03-10T10:20:05.007299+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:06 vm07 bash[23367]: cluster 2026-03-10T10:20:05.010684+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T10:20:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:06 vm07 bash[23367]: cluster 2026-03-10T10:20:05.010684+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T10:20:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:06 vm07 bash[23367]: audit 2026-03-10T10:20:05.017022+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 
192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:06 vm07 bash[23367]: audit 2026-03-10T10:20:05.017022+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:06 vm07 bash[23367]: audit 2026-03-10T10:20:05.021648+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:06 vm07 bash[23367]: audit 2026-03-10T10:20:05.021648+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:06 vm04 bash[28289]: cluster 2026-03-10T10:20:04.428005+0000 mgr.y (mgr.24422) 261 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:06 vm04 bash[28289]: cluster 2026-03-10T10:20:04.428005+0000 mgr.y (mgr.24422) 261 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:06 vm04 bash[28289]: audit 2026-03-10T10:20:05.007299+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:06 vm04 bash[28289]: audit 2026-03-10T10:20:05.007299+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:06 vm04 bash[28289]: cluster 2026-03-10T10:20:05.010684+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:06 vm04 bash[28289]: cluster 2026-03-10T10:20:05.010684+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:06 vm04 bash[28289]: audit 2026-03-10T10:20:05.017022+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 
192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:06 vm04 bash[28289]: audit 2026-03-10T10:20:05.017022+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:06 vm04 bash[28289]: audit 2026-03-10T10:20:05.021648+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:06 vm04 bash[28289]: audit 2026-03-10T10:20:05.021648+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:06 vm04 bash[20742]: cluster 2026-03-10T10:20:04.428005+0000 mgr.y (mgr.24422) 261 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:06 vm04 bash[20742]: cluster 2026-03-10T10:20:04.428005+0000 mgr.y (mgr.24422) 261 : cluster [DBG] pgmap v383: 292 pgs: 292 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:06 vm04 bash[20742]: audit 2026-03-10T10:20:05.007299+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:06 vm04 bash[20742]: audit 2026-03-10T10:20:05.007299+0000 mon.a (mon.0) 2232 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushPP_vm04-59259-57", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:06 vm04 bash[20742]: cluster 2026-03-10T10:20:05.010684+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:06 vm04 bash[20742]: cluster 2026-03-10T10:20:05.010684+0000 mon.a (mon.0) 2233 : cluster [DBG] osdmap e284: 8 total, 8 up, 8 in 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:06 vm04 bash[20742]: audit 2026-03-10T10:20:05.017022+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 
192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:06 vm04 bash[20742]: audit 2026-03-10T10:20:05.017022+0000 mon.c (mon.2) 417 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:06 vm04 bash[20742]: audit 2026-03-10T10:20:05.021648+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:06.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:06 vm04 bash[20742]: audit 2026-03-10T10:20:05.021648+0000 mon.a (mon.0) 2234 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:07 vm04 bash[28289]: cluster 2026-03-10T10:20:06.064157+0000 mon.a (mon.0) 2235 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:07 vm04 bash[28289]: cluster 2026-03-10T10:20:06.064157+0000 mon.a (mon.0) 2235 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:07 vm04 bash[28289]: audit 2026-03-10T10:20:06.066677+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:07 vm04 bash[28289]: audit 2026-03-10T10:20:06.066677+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:07 vm04 bash[28289]: cluster 2026-03-10T10:20:06.451952+0000 mon.a (mon.0) 2237 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:07 vm04 bash[28289]: cluster 2026-03-10T10:20:06.451952+0000 mon.a (mon.0) 2237 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:07 vm04 bash[20742]: cluster 2026-03-10T10:20:06.064157+0000 mon.a (mon.0) 2235 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:07 vm04 bash[20742]: cluster 2026-03-10T10:20:06.064157+0000 mon.a (mon.0) 2235 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:07 vm04 bash[20742]: audit 2026-03-10T10:20:06.066677+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:07 vm04 bash[20742]: audit 2026-03-10T10:20:06.066677+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:07 vm04 bash[20742]: cluster 2026-03-10T10:20:06.451952+0000 mon.a (mon.0) 2237 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:07 vm04 bash[20742]: cluster 2026-03-10T10:20:06.451952+0000 mon.a (mon.0) 2237 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:07 vm07 bash[23367]: cluster 2026-03-10T10:20:06.064157+0000 mon.a (mon.0) 2235 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T10:20:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:07 vm07 bash[23367]: cluster 2026-03-10T10:20:06.064157+0000 mon.a (mon.0) 2235 : cluster [DBG] osdmap e285: 8 total, 8 up, 8 in 2026-03-10T10:20:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:07 vm07 bash[23367]: audit 2026-03-10T10:20:06.066677+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:20:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:07 vm07 bash[23367]: audit 2026-03-10T10:20:06.066677+0000 mon.a (mon.0) 2236 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:20:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:07 vm07 bash[23367]: cluster 2026-03-10T10:20:06.451952+0000 mon.a (mon.0) 2237 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:07 vm07 bash[23367]: cluster 2026-03-10T10:20:06.451952+0000 mon.a (mon.0) 2237 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: cluster 2026-03-10T10:20:06.428455+0000 mgr.y (mgr.24422) 262 : cluster [DBG] pgmap v386: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: cluster 2026-03-10T10:20:06.428455+0000 mgr.y (mgr.24422) 262 : cluster [DBG] pgmap v386: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: audit 2026-03-10T10:20:07.038569+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: audit 2026-03-10T10:20:07.038569+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: audit 2026-03-10T10:20:07.038608+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: audit 2026-03-10T10:20:07.038608+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: cluster 2026-03-10T10:20:07.066668+0000 mon.a (mon.0) 2240 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: cluster 2026-03-10T10:20:07.066668+0000 mon.a (mon.0) 2240 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: audit 2026-03-10T10:20:07.068946+0000 mon.a (mon.0) 2241 : audit [DBG] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm04-59491-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: audit 2026-03-10T10:20:07.068946+0000 mon.a (mon.0) 2241 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm04-59491-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: audit 2026-03-10T10:20:07.069270+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:20:08.393 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:08 vm07 bash[23367]: audit 2026-03-10T10:20:07.069270+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: cluster 2026-03-10T10:20:06.428455+0000 mgr.y (mgr.24422) 262 : cluster [DBG] pgmap v386: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: cluster 2026-03-10T10:20:06.428455+0000 mgr.y (mgr.24422) 262 : cluster [DBG] pgmap v386: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: audit 2026-03-10T10:20:07.038569+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: audit 2026-03-10T10:20:07.038569+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: audit 2026-03-10T10:20:07.038608+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: audit 2026-03-10T10:20:07.038608+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: cluster 2026-03-10T10:20:07.066668+0000 mon.a (mon.0) 2240 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: cluster 2026-03-10T10:20:07.066668+0000 mon.a (mon.0) 2240 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: audit 2026-03-10T10:20:07.068946+0000 mon.a (mon.0) 2241 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm04-59491-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: audit 2026-03-10T10:20:07.068946+0000 mon.a (mon.0) 2241 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm04-59491-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: audit 2026-03-10T10:20:07.069270+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:08 vm04 bash[28289]: audit 2026-03-10T10:20:07.069270+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: cluster 2026-03-10T10:20:06.428455+0000 mgr.y (mgr.24422) 262 : cluster [DBG] pgmap v386: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: cluster 2026-03-10T10:20:06.428455+0000 mgr.y (mgr.24422) 262 : cluster [DBG] pgmap v386: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: audit 2026-03-10T10:20:07.038569+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: audit 2026-03-10T10:20:07.038569+0000 mon.a (mon.0) 2238 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushPP_vm04-59259-57", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: audit 2026-03-10T10:20:07.038608+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: audit 2026-03-10T10:20:07.038608+0000 mon.a (mon.0) 2239 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-43","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: cluster 2026-03-10T10:20:07.066668+0000 mon.a (mon.0) 2240 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T10:20:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: cluster 2026-03-10T10:20:07.066668+0000 mon.a (mon.0) 2240 : cluster [DBG] osdmap e286: 8 total, 8 up, 8 in 2026-03-10T10:20:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: audit 2026-03-10T10:20:07.068946+0000 mon.a (mon.0) 2241 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm04-59491-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T10:20:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: audit 2026-03-10T10:20:07.068946+0000 mon.a (mon.0) 2241 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm04-59491-6","var": "pg_num","format": "json"}]: dispatch 2026-03-10T10:20:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: audit 2026-03-10T10:20:07.069270+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:20:08.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:08 vm04 bash[20742]: audit 2026-03-10T10:20:07.069270+0000 mon.a (mon.0) 2242 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:20:08.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:20:08 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:09 vm04 bash[28289]: audit 2026-03-10T10:20:08.054478+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:09 vm04 bash[28289]: audit 2026-03-10T10:20:08.054478+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:09 vm04 bash[28289]: cluster 2026-03-10T10:20:08.064094+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:09 vm04 bash[28289]: cluster 2026-03-10T10:20:08.064094+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:09 vm04 bash[28289]: audit 2026-03-10T10:20:08.066364+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:09 vm04 bash[28289]: audit 2026-03-10T10:20:08.066364+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:09 vm04 bash[20742]: audit 2026-03-10T10:20:08.054478+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:09 vm04 bash[20742]: audit 2026-03-10T10:20:08.054478+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:09 vm04 bash[20742]: cluster 2026-03-10T10:20:08.064094+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:09 vm04 bash[20742]: cluster 2026-03-10T10:20:08.064094+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:09 vm04 bash[20742]: audit 2026-03-10T10:20:08.066364+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T10:20:09.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:09 vm04 bash[20742]: audit 2026-03-10T10:20:08.066364+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T10:20:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:09 vm07 bash[23367]: audit 2026-03-10T10:20:08.054478+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:09 vm07 bash[23367]: audit 2026-03-10T10:20:08.054478+0000 mon.a (mon.0) 2243 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:09 vm07 bash[23367]: cluster 2026-03-10T10:20:08.064094+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T10:20:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:09 vm07 bash[23367]: cluster 2026-03-10T10:20:08.064094+0000 mon.a (mon.0) 2244 : cluster [DBG] osdmap e287: 8 total, 8 up, 8 in 2026-03-10T10:20:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:09 vm07 bash[23367]: audit 2026-03-10T10:20:08.066364+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T10:20:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:09 vm07 bash[23367]: audit 2026-03-10T10:20:08.066364+0000 mon.a (mon.0) 2245 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]: dispatch 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:08.393545+0000 mgr.y (mgr.24422) 263 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:08.393545+0000 mgr.y (mgr.24422) 263 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: cluster 2026-03-10T10:20:08.429070+0000 mgr.y (mgr.24422) 264 : cluster [DBG] pgmap v389: 300 pgs: 2 creating+activating, 18 unknown, 280 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: cluster 2026-03-10T10:20:08.429070+0000 mgr.y (mgr.24422) 264 : cluster [DBG] pgmap v389: 300 pgs: 2 creating+activating, 18 unknown, 280 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:09.094844+0000 mon.a (mon.0) 2246 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:09.094844+0000 mon.a (mon.0) 2246 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: cluster 2026-03-10T10:20:09.102414+0000 mon.a (mon.0) 2247 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: cluster 2026-03-10T10:20:09.102414+0000 mon.a (mon.0) 2247 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:09.102745+0000 mon.c (mon.2) 418 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:09.102745+0000 mon.c (mon.2) 418 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:09.115445+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:09.115445+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:09.115903+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:09.115903+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:10.097614+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:10.097614+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:10.097739+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:10.097739+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: cluster 2026-03-10T10:20:10.101083+0000 mon.a (mon.0) 2252 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: cluster 2026-03-10T10:20:10.101083+0000 mon.a (mon.0) 2252 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:10.101627+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:10.101627+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:10.102821+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:10.102821+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:10.103057+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:10 vm04 bash[28289]: audit 2026-03-10T10:20:10.103057+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:08.393545+0000 mgr.y (mgr.24422) 263 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:08.393545+0000 mgr.y (mgr.24422) 263 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: cluster 2026-03-10T10:20:08.429070+0000 mgr.y (mgr.24422) 264 : cluster [DBG] pgmap v389: 300 pgs: 2 creating+activating, 18 unknown, 280 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: cluster 2026-03-10T10:20:08.429070+0000 mgr.y (mgr.24422) 264 : cluster [DBG] pgmap v389: 300 pgs: 2 creating+activating, 18 unknown, 280 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:09.094844+0000 mon.a (mon.0) 2246 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:09.094844+0000 mon.a (mon.0) 2246 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: cluster 2026-03-10T10:20:09.102414+0000 mon.a (mon.0) 2247 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: cluster 2026-03-10T10:20:09.102414+0000 mon.a (mon.0) 2247 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:09.102745+0000 mon.c (mon.2) 418 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:09.102745+0000 mon.c (mon.2) 418 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:09.115445+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:09.115445+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:09.115903+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:09.115903+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:10.097614+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:10.097614+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:10.097739+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:10.097739+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: cluster 2026-03-10T10:20:10.101083+0000 mon.a (mon.0) 2252 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: cluster 2026-03-10T10:20:10.101083+0000 mon.a (mon.0) 2252 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:10.101627+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:10.101627+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:10.102821+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:10.102821+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:10.103057+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:10 vm04 bash[20742]: audit 2026-03-10T10:20:10.103057+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:08.393545+0000 mgr.y (mgr.24422) 263 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:08.393545+0000 mgr.y (mgr.24422) 263 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: cluster 2026-03-10T10:20:08.429070+0000 mgr.y (mgr.24422) 264 : cluster [DBG] pgmap v389: 300 pgs: 2 creating+activating, 18 unknown, 280 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: cluster 2026-03-10T10:20:08.429070+0000 mgr.y (mgr.24422) 264 : cluster [DBG] pgmap v389: 300 pgs: 2 creating+activating, 18 unknown, 280 active+clean; 8.3 MiB data, 709 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:09.094844+0000 mon.a (mon.0) 2246 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:09.094844+0000 mon.a (mon.0) 2246 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_count","val": "8"}]': finished 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: cluster 2026-03-10T10:20:09.102414+0000 mon.a (mon.0) 2247 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: cluster 2026-03-10T10:20:09.102414+0000 mon.a (mon.0) 2247 : cluster [DBG] osdmap e288: 8 total, 8 up, 8 in 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:09.102745+0000 mon.c (mon.2) 418 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:09.102745+0000 mon.c (mon.2) 418 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:09.115445+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:09.115445+0000 mon.a (mon.0) 2248 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:09.115903+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:09.115903+0000 mon.a (mon.0) 2249 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:10.097614+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:10.097614+0000 mon.a (mon.0) 2250 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:10.097739+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:10.097739+0000 mon.a (mon.0) 2251 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushPP_vm04-59259-57"}]': finished 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: cluster 2026-03-10T10:20:10.101083+0000 mon.a (mon.0) 2252 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: cluster 2026-03-10T10:20:10.101083+0000 mon.a (mon.0) 2252 : cluster [DBG] osdmap e289: 8 total, 8 up, 8 in 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:10.101627+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:10.101627+0000 mon.a (mon.0) 2253 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_type","val": "explicit_hash"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:10.102821+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:10.102821+0000 mon.c (mon.2) 419 : audit [INF] from='client.? 192.168.123.104:0/531297239' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:10.103057+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:10 vm07 bash[23367]: audit 2026-03-10T10:20:10.103057+0000 mon.a (mon.0) 2254 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]: dispatch 2026-03-10T10:20:11.422 INFO:tasks.workunit.client.0.vm04.stdout:etRead 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: hmm, no HitSet yet 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: ok, hit_set contains 266:602f83fe:::foo:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetRead (9255 ms) 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetWrite 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg_num = 32 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 0 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 1 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 2 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 3 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 4 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 5 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 6 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 7 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 8 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 9 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 10 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 11 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 12 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 13 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 14 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 15 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 16 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 17 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 18 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 19 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 20 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 21 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 22 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 23 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 24 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 25 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 26 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 27 ls 1773138012,0 
2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 28 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 29 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 30 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg 31 ls 1773138012,0 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: pg_num = 32 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:6cac518f:::0:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:02547ec2:::1:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:f905c69b:::2:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:cfc208b3:::3:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:d83876eb:::4:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:b29083e3:::5:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c4fdafeb:::6:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:5c6b0b28:::7:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:bd63b0f1:::8:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:e960b815:::9:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:52ea6a34:::10:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:89d3ae78:::11:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:de5d7c5f:::12:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:566253c9:::13:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:62a1935d:::14:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:863748b0:::15:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:3958e169:::16:head 2026-03-10T10:20:11.423 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:4d4dabf9:::17:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:8391935d:::18:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:28883081:::19:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:69259c59:::20:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:4bdb80b7:::21:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:a11c5d71:::22:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:271af37b:::23:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:95b121be:::24:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:58d1031b:::25:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: 
checking for 269:0a050783:::26:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c709704c:::27:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:cbe56eaf:::28:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:86b4b162:::29:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:70d89383:::30:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:dd450c7c:::31:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:6d5729b1:::32:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c388f3fb:::33:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:56cfea31:::34:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:9dbc1bf7:::35:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:40b74ccd:::36:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:4d5aaf42:::37:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:920f362c:::38:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:6cc53222:::39:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:9cad833f:::40:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:1ea84d41:::41:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c4480ef6:::42:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:a694361e:::43:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:d1bd33e9:::44:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:ddc2cd5d:::45:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:2b782207:::46:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:7b187fca:::47:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:90ecdf6f:::48:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:a5ed95fe:::49:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:ea0eaa55:::50:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:f33ef17b:::51:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:a0d1b2f6:::52:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:60c5229e:::53:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:edcbc575:::54:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:102cf253:::55:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:efb7fb0b:::56:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: 
api_tier_pp: checking for 269:50d0a326:::57:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:d4dc5daf:::58:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:3a130462:::59:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:ec87ed71:::60:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:d5bc9454:::61:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:3ddfe313:::62:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:7c2816b9:::63:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:47e00e4d:::64:head 2026-03-10T10:20:11.424 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c6410c18:::65:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:b48ed237:::66:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:cd63ad31:::67:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:b179e92b:::68:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:0d9f741a:::69:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:6d3352ae:::70:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c6d5c19e:::71:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:bc4729c3:::72:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:77e930b9:::73:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:0abeecfd:::74:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:b7c37e15:::75:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:b6378398:::76:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:02bd68de:::77:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:cc795d2d:::78:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:630d4fea:::79:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:e0d29ef5:::80:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:fd6f13d2:::81:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:606461d5:::82:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:eadbdc43:::83:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:8761d0bb:::84:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:9ef0186f:::85:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:e0d41294:::86:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:961de695:::87:head 2026-03-10T10:20:11.425 
INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:1423148f:::88:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:633a8fa2:::89:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:a8653809:::90:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:3dac8b33:::91:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:35aad435:::92:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:f6dcc343:::93:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:dbbdad87:::94:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:1cb48ce0:::95:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:03cd461c:::96:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:17a4ea99:::97:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:9993c9a7:::98:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:6394211c:::99:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:94c7ae57:::100:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:6fdee5bb:::101:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:9a477fd1:::102:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:eb850916:::103:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:affc56b9:::104:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:b42dc814:::105:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:f319f8f0:::106:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:9a40b9de:::107:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:8b524f28:::108:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:e3de589f:::109:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:90f90a5b:::110:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:a7b4f1d7:::111:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:af51766e:::112:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:b6f90bd1:::113:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:e0261208:::114:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c9569ef7:::115:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:61bebe50:::116:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:fe93412b:::117:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 
269:d3d38bee:::118:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:3100ba0c:::119:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:d0560ada:::120:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:f0ea8b35:::121:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:766f231a:::122:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:a07a2582:::123:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:bd7c6b3a:::124:head 2026-03-10T10:20:11.425 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:fb2ddaff:::125:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:4408e1fe:::126:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:ee1df7a7:::127:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c3002909:::128:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:4f48ffa9:::129:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:edf38733:::130:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c08425c0:::131:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:5f902d98:::132:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:41ea2c93:::133:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:813cee13:::134:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:0131818d:::135:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:26ba5a85:::136:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:381b8a5a:::137:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:28797e47:::138:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:bfca7f22:::139:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:36807075:::140:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:80b03975:::141:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:5c15709b:::142:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:f39ea15e:::143:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:ea992956:::144:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:48887b1c:::145:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:9f24a9dd:::146:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:987f100b:::147:head 2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:d2dd3581:::148:head 2026-03-10T10:20:11.426 
INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:7fed1808:::149:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c80b70e9:::150:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:85ed90f9:::151:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:36428b24:::152:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:d044c34a:::153:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:7c18bf58:::154:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:d1c21232:::155:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:a7a3c575:::156:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:87da0633:::157:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:d5ac3822:::158:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:3f20522d:::159:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:6ca26563:::160:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:532ce135:::161:head
2026-03-10T10:20:11.426 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:c78863e6:::162:head
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: cluster 2026-03-10T10:20:10.429429+0000 mgr.y (mgr.24422) 265 : cluster [DBG] pgmap v392: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.100556+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_type","val": "explicit_hash"}]': finished
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.100683+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]': finished
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: cluster 2026-03-10T10:20:11.104202+0000 mon.a (mon.0) 2257 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.120614+0000 mon.c (mon.2) 420 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.121819+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.123704+0000 mon.c (mon.2) 421 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.123854+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.125025+0000 mon.c (mon.2) 422 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm04-59259-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.125261+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm04-59259-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.423637+0000 mon.a (mon.0) 2261 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm04-59491-43","var": "pg_num","format": "json"}]: dispatch
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.502683+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:12 vm07 bash[23367]: audit 2026-03-10T10:20:11.502913+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43"}]: dispatch
2026-03-10T10:20:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: cluster 2026-03-10T10:20:10.429429+0000 mgr.y (mgr.24422) 265 : cluster [DBG] pgmap v392: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.100556+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_type","val": "explicit_hash"}]': finished
2026-03-10T10:20:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.100683+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]': finished
2026-03-10T10:20:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: cluster 2026-03-10T10:20:11.104202+0000 mon.a (mon.0) 2257 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in
2026-03-10T10:20:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.120614+0000 mon.c (mon.2) 420 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.121819+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.123704+0000 mon.c (mon.2) 421 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.123854+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.125025+0000 mon.c (mon.2) 422 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm04-59259-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.125261+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm04-59259-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.423637+0000 mon.a (mon.0) 2261 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm04-59491-43","var": "pg_num","format": "json"}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.502683+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:12 vm04 bash[28289]: audit 2026-03-10T10:20:11.502913+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43"}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: cluster 2026-03-10T10:20:10.429429+0000 mgr.y (mgr.24422) 265 : cluster [DBG] pgmap v392: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.100556+0000 mon.a (mon.0) 2255 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-43","var": "hit_set_type","val": "explicit_hash"}]': finished
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.100683+0000 mon.a (mon.0) 2256 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushPP_vm04-59259-57"}]': finished
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: cluster 2026-03-10T10:20:11.104202+0000 mon.a (mon.0) 2257 : cluster [DBG] osdmap e290: 8 total, 8 up, 8 in
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.120614+0000 mon.c (mon.2) 420 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.121819+0000 mon.a (mon.0) 2258 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.123704+0000 mon.c (mon.2) 421 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.123854+0000 mon.a (mon.0) 2259 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.125025+0000 mon.c (mon.2) 422 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm04-59259-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.125261+0000 mon.a (mon.0) 2260 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm04-59259-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:12.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.423637+0000 mon.a (mon.0) 2261 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool get","pool":"test-rados-api-vm04-59491-43","var": "pg_num","format": "json"}]: dispatch
2026-03-10T10:20:12.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.502683+0000 mon.a (mon.0) 2262 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:12.705 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:12 vm04 bash[20742]: audit 2026-03-10T10:20:11.502913+0000 mon.a (mon.0) 2263 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43"}]: dispatch
2026-03-10T10:20:13.273 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:20:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:20:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:20:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:13 vm04 bash[28289]: audit 2026-03-10T10:20:12.103988+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm04-59259-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:13 vm04 bash[28289]: audit 2026-03-10T10:20:12.104069+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43"}]': finished
2026-03-10T10:20:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:13 vm04 bash[28289]: cluster 2026-03-10T10:20:12.107864+0000 mon.a (mon.0) 2266 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in
2026-03-10T10:20:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:13 vm04 bash[28289]: audit 2026-03-10T10:20:12.108739+0000 mon.c (mon.2) 423 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm04-59259-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:13 vm04 bash[28289]: audit 2026-03-10T10:20:12.109594+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm04-59259-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:13 vm04 bash[28289]: audit 2026-03-10T10:20:12.865847+0000 mon.a (mon.0) 2268 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:13 vm04 bash[28289]: cluster 2026-03-10T10:20:13.118250+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in
2026-03-10T10:20:13.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:13 vm04 bash[20742]: audit 2026-03-10T10:20:12.103988+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm04-59259-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:13.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:13 vm04 bash[20742]: audit 2026-03-10T10:20:12.104069+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43"}]': finished
2026-03-10T10:20:13.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:13 vm04 bash[20742]: cluster 2026-03-10T10:20:12.107864+0000 mon.a (mon.0) 2266 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in
2026-03-10T10:20:13.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:13 vm04 bash[20742]: audit 2026-03-10T10:20:12.108739+0000 mon.c (mon.2) 423 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm04-59259-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:13.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:13 vm04 bash[20742]: audit 2026-03-10T10:20:12.109594+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm04-59259-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:13.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:13 vm04 bash[20742]: audit 2026-03-10T10:20:12.865847+0000 mon.a (mon.0) 2268 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:13.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:13 vm04 bash[20742]: cluster 2026-03-10T10:20:13.118250+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in
2026-03-10T10:20:13.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:13 vm07 bash[23367]: audit 2026-03-10T10:20:12.103988+0000 mon.a (mon.0) 2264 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-FlushAsyncPP_vm04-59259-58", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:13.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:13 vm07 bash[23367]: audit 2026-03-10T10:20:12.104069+0000 mon.a (mon.0) 2265 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-43"}]': finished
2026-03-10T10:20:13.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:13 vm07 bash[23367]: cluster 2026-03-10T10:20:12.107864+0000 mon.a (mon.0) 2266 : cluster [DBG] osdmap e291: 8 total, 8 up, 8 in
2026-03-10T10:20:13.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:13 vm07 bash[23367]: audit 2026-03-10T10:20:12.108739+0000 mon.c (mon.2) 423 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm04-59259-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:13.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:13 vm07 bash[23367]: audit 2026-03-10T10:20:12.109594+0000 mon.a (mon.0) 2267 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm04-59259-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:13.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:13 vm07 bash[23367]: audit 2026-03-10T10:20:12.865847+0000 mon.a (mon.0) 2268 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:13.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:13 vm07 bash[23367]: cluster 2026-03-10T10:20:13.118250+0000 mon.a (mon.0) 2269 : cluster [DBG] osdmap e292: 8 total, 8 up, 8 in
2026-03-10T10:20:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:14 vm04 bash[28289]: cluster 2026-03-10T10:20:12.429825+0000 mgr.y (mgr.24422) 266 : cluster [DBG] pgmap v395: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:14 vm04 bash[28289]: audit 2026-03-10T10:20:14.118406+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm04-59259-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm04-59259-58"}]': finished
2026-03-10T10:20:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:14 vm04 bash[28289]: cluster 2026-03-10T10:20:14.123104+0000 mon.a (mon.0) 2271 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in
2026-03-10T10:20:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:14 vm04 bash[28289]: audit 2026-03-10T10:20:14.125349+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-45","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:14 vm04 bash[20742]: cluster 2026-03-10T10:20:12.429825+0000 mgr.y (mgr.24422) 266 : cluster [DBG] pgmap v395: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:14 vm04 bash[20742]: audit 2026-03-10T10:20:14.118406+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm04-59259-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm04-59259-58"}]': finished
2026-03-10T10:20:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:14 vm04 bash[20742]: cluster 2026-03-10T10:20:14.123104+0000 mon.a (mon.0) 2271 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in
2026-03-10T10:20:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:14 vm04 bash[20742]: audit 2026-03-10T10:20:14.125349+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-45","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:14 vm07 bash[23367]: cluster 2026-03-10T10:20:12.429825+0000 mgr.y (mgr.24422) 266 : cluster [DBG] pgmap v395: 292 pgs: 292 active+clean; 8.3 MiB data, 710 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:14 vm07 bash[23367]: audit 2026-03-10T10:20:14.118406+0000 mon.a (mon.0) 2270 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "FlushAsyncPP_vm04-59259-58", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-FlushAsyncPP_vm04-59259-58"}]': finished
2026-03-10T10:20:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:14 vm07 bash[23367]: cluster 2026-03-10T10:20:14.123104+0000 mon.a (mon.0) 2271 : cluster [DBG] osdmap e293: 8 total, 8 up, 8 in
2026-03-10T10:20:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:14 vm07 bash[23367]: audit 2026-03-10T10:20:14.125349+0000 mon.a (mon.0) 2272 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-45","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:16 vm04 bash[28289]: cluster 2026-03-10T10:20:14.430308+0000 mgr.y (mgr.24422) 267 : cluster [DBG] pgmap v398: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:16 vm04 bash[28289]: audit 2026-03-10T10:20:15.122151+0000 mon.a (mon.0) 2273 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-45","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:16 vm04 bash[28289]: cluster 2026-03-10T10:20:15.126003+0000 mon.a (mon.0) 2274 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in
2026-03-10T10:20:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:16 vm04 bash[28289]: audit 2026-03-10T10:20:15.130085+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:20:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:16 vm04 bash[28289]: cluster 2026-03-10T10:20:15.276810+0000 mon.a (mon.0) 2276 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:16 vm04 bash[20742]: cluster 2026-03-10T10:20:14.430308+0000 mgr.y (mgr.24422) 267 : cluster [DBG] pgmap v398: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:16 vm04 bash[20742]: audit 2026-03-10T10:20:15.122151+0000 mon.a (mon.0) 2273 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-45","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:16 vm04 bash[20742]: cluster 2026-03-10T10:20:15.126003+0000 mon.a (mon.0) 2274 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in
2026-03-10T10:20:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:16 vm04 bash[20742]: audit 2026-03-10T10:20:15.130085+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:20:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:16 vm04 bash[20742]: cluster 2026-03-10T10:20:15.276810+0000 mon.a (mon.0) 2276 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:16 vm07 bash[23367]: cluster 2026-03-10T10:20:14.430308+0000 mgr.y (mgr.24422) 267 : cluster [DBG] pgmap v398: 300 pgs: 40 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:16.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:16 vm07 bash[23367]: audit 2026-03-10T10:20:15.122151+0000 mon.a (mon.0) 2273 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-45","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:16.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:16 vm07 bash[23367]: cluster 2026-03-10T10:20:15.126003+0000 mon.a (mon.0) 2274 : cluster [DBG] osdmap e294: 8 total, 8 up, 8 in
2026-03-10T10:20:16.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:16 vm07 bash[23367]: audit 2026-03-10T10:20:15.130085+0000 mon.a (mon.0) 2275 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:20:16.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:16 vm07 bash[23367]: cluster 2026-03-10T10:20:15.276810+0000 mon.a (mon.0) 2276 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:17.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:17 vm07 bash[23367]: audit 2026-03-10T10:20:16.144460+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:20:17.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:17 vm07 bash[23367]: cluster 2026-03-10T10:20:16.158360+0000 mon.a (mon.0) 2278 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in
2026-03-10T10:20:17.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:17 vm07 bash[23367]: audit 2026-03-10T10:20:16.163951+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T10:20:17.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:17 vm07 bash[23367]: audit 2026-03-10T10:20:16.167726+0000 mon.c (mon.2) 424 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:17.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:17 vm07 bash[23367]: audit 2026-03-10T10:20:16.201984+0000 mon.a (mon.0) 2280 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:17 vm04 bash[28289]: audit 2026-03-10T10:20:16.144460+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:20:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:17 vm04 bash[28289]: cluster 2026-03-10T10:20:16.158360+0000 mon.a (mon.0) 2278 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in
2026-03-10T10:20:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:17 vm04 bash[28289]: audit 2026-03-10T10:20:16.163951+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T10:20:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:17 vm04 bash[28289]: audit 2026-03-10T10:20:16.167726+0000 mon.c (mon.2) 424 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:17 vm04 bash[28289]: audit 2026-03-10T10:20:16.201984+0000 mon.a (mon.0) 2280 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch
2026-03-10T10:20:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:17 vm04 bash[20742]: audit 2026-03-10T10:20:16.144460+0000 mon.a (mon.0) 2277 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:20:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:17 vm04 bash[20742]: cluster 2026-03-10T10:20:16.158360+0000 mon.a (mon.0) 2278 : cluster [DBG] osdmap e295: 8 total, 8 up, 8 in
2026-03-10T10:20:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:17 vm04 bash[20742]: audit 2026-03-10T10:20:16.163951+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]: dispatch
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-10T10:20:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:17 vm04 bash[20742]: audit 2026-03-10T10:20:16.163951+0000 mon.a (mon.0) 2279 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]: dispatch 2026-03-10T10:20:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:17 vm04 bash[20742]: audit 2026-03-10T10:20:16.167726+0000 mon.c (mon.2) 424 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:17 vm04 bash[20742]: audit 2026-03-10T10:20:16.167726+0000 mon.c (mon.2) 424 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:17 vm04 bash[20742]: audit 2026-03-10T10:20:16.201984+0000 mon.a (mon.0) 2280 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:17.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:17 vm04 bash[20742]: audit 2026-03-10T10:20:16.201984+0000 mon.a (mon.0) 2280 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: cluster 2026-03-10T10:20:16.430793+0000 mgr.y (mgr.24422) 268 : cluster [DBG] pgmap v401: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: cluster 2026-03-10T10:20:16.430793+0000 mgr.y (mgr.24422) 268 : cluster [DBG] pgmap v401: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:17.177591+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:17.177591+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:17.177691+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:17.177691+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: cluster 2026-03-10T10:20:17.199302+0000 mon.a (mon.0) 2283 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: cluster 2026-03-10T10:20:17.199302+0000 mon.a (mon.0) 2283 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:17.201518+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:17.201518+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:17.205187+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:17.205187+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:17.205262+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:17.205262+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:18.200570+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:18.200570+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T10:20:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:18.200754+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:18.200754+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: cluster 2026-03-10T10:20:18.205540+0000 mon.a (mon.0) 2288 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: cluster 2026-03-10T10:20:18.205540+0000 mon.a (mon.0) 2288 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:18.206227+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:18 vm04 bash[28289]: audit 2026-03-10T10:20:18.206227+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: cluster 2026-03-10T10:20:16.430793+0000 mgr.y (mgr.24422) 268 : cluster [DBG] pgmap v401: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: cluster 2026-03-10T10:20:16.430793+0000 mgr.y (mgr.24422) 268 : cluster [DBG] pgmap v401: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:17.177591+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:17.177591+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:17.177691+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:17.177691+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: cluster 2026-03-10T10:20:17.199302+0000 mon.a (mon.0) 2283 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: cluster 2026-03-10T10:20:17.199302+0000 mon.a (mon.0) 2283 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:17.201518+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:17.201518+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:17.205187+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:17.205187+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:17.205262+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:17.205262+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:18.200570+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:18.200570+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:18.200754+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:18.200754+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: cluster 2026-03-10T10:20:18.205540+0000 mon.a (mon.0) 2288 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: cluster 2026-03-10T10:20:18.205540+0000 mon.a (mon.0) 2288 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:18.206227+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:18.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:18 vm04 bash[20742]: audit 2026-03-10T10:20:18.206227+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:18.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:20:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: cluster 2026-03-10T10:20:16.430793+0000 mgr.y (mgr.24422) 268 : cluster [DBG] pgmap v401: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: cluster 2026-03-10T10:20:16.430793+0000 mgr.y (mgr.24422) 268 : cluster [DBG] pgmap v401: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:17.177591+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:17.177591+0000 mon.a (mon.0) 2281 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_count","val": "3"}]': finished 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:17.177691+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:17.177691+0000 mon.a (mon.0) 2282 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: cluster 2026-03-10T10:20:17.199302+0000 mon.a (mon.0) 2283 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: cluster 2026-03-10T10:20:17.199302+0000 mon.a (mon.0) 2283 : cluster [DBG] osdmap e296: 8 total, 8 up, 8 in 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:17.201518+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:17.201518+0000 mon.c (mon.2) 425 : audit [INF] from='client.? 192.168.123.104:0/1528756201' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:17.205187+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:17.205187+0000 mon.a (mon.0) 2284 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]: dispatch 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:17.205262+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:17.205262+0000 mon.a (mon.0) 2285 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]: dispatch 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:18.200570+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:18.200570+0000 mon.a (mon.0) 2286 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_period","val": "3"}]': finished 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:18.200754+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:18.200754+0000 mon.a (mon.0) 2287 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"FlushAsyncPP_vm04-59259-58"}]': finished 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: cluster 2026-03-10T10:20:18.205540+0000 mon.a (mon.0) 2288 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: cluster 2026-03-10T10:20:18.205540+0000 mon.a (mon.0) 2288 : cluster [DBG] osdmap e297: 8 total, 8 up, 8 in 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:18.206227+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:18 vm07 bash[23367]: audit 2026-03-10T10:20:18.206227+0000 mon.a (mon.0) 2289 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:18.229349+0000 mon.a (mon.0) 2290 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:18.229349+0000 mon.a (mon.0) 2290 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:18.230385+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:18.230385+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:18.230623+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 
192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:18.230623+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:19.234297+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:19.234297+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:19.234510+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:19.234510+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: cluster 2026-03-10T10:20:19.239172+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: cluster 2026-03-10T10:20:19.239172+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:19.239848+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:19.239848+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:19.239957+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? 
192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:19 vm04 bash[28289]: audit 2026-03-10T10:20:19.239957+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:18.229349+0000 mon.a (mon.0) 2290 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:18.229349+0000 mon.a (mon.0) 2290 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:18.230385+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:18.230385+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:18.230623+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:18.230623+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:19.234297+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:19.234297+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:19.234510+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:19.234510+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: cluster 2026-03-10T10:20:19.239172+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: cluster 2026-03-10T10:20:19.239172+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:19.239848+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:19.239848+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:19.239957+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:19 vm04 bash[20742]: audit 2026-03-10T10:20:19.239957+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:18.229349+0000 mon.a (mon.0) 2290 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:18.229349+0000 mon.a (mon.0) 2290 : audit [INF] from='client.? 
192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:18.230385+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:18.230385+0000 mon.a (mon.0) 2291 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:18.230623+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:18.230623+0000 mon.a (mon.0) 2292 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:19.234297+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:19.234297+0000 mon.a (mon.0) 2293 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:19.234510+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:19.234510+0000 mon.a (mon.0) 2294 : audit [INF] from='client.? 
192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: cluster 2026-03-10T10:20:19.239172+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: cluster 2026-03-10T10:20:19.239172+0000 mon.a (mon.0) 2295 : cluster [DBG] osdmap e298: 8 total, 8 up, 8 in 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:19.239848+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:19.239848+0000 mon.a (mon.0) 2296 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:19.239957+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:19.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:19 vm07 bash[23367]: audit 2026-03-10T10:20:19.239957+0000 mon.a (mon.0) 2297 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:20 vm04 bash[28289]: audit 2026-03-10T10:20:18.398432+0000 mgr.y (mgr.24422) 269 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:20 vm04 bash[28289]: audit 2026-03-10T10:20:18.398432+0000 mgr.y (mgr.24422) 269 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:20 vm04 bash[28289]: cluster 2026-03-10T10:20:18.431396+0000 mgr.y (mgr.24422) 270 : cluster [DBG] pgmap v404: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:20 vm04 bash[28289]: cluster 2026-03-10T10:20:18.431396+0000 mgr.y (mgr.24422) 270 : cluster [DBG] pgmap v404: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:20 vm04 bash[28289]: audit 2026-03-10T10:20:20.238237+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:20 vm04 bash[28289]: audit 2026-03-10T10:20:20.238237+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:20 vm04 bash[28289]: cluster 2026-03-10T10:20:20.253576+0000 mon.a (mon.0) 2299 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:20 vm04 bash[28289]: cluster 2026-03-10T10:20:20.253576+0000 mon.a (mon.0) 2299 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:20 vm04 bash[20742]: audit 2026-03-10T10:20:18.398432+0000 mgr.y (mgr.24422) 269 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:20 vm04 bash[20742]: audit 2026-03-10T10:20:18.398432+0000 mgr.y (mgr.24422) 269 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:20 vm04 bash[20742]: cluster 2026-03-10T10:20:18.431396+0000 mgr.y (mgr.24422) 270 : cluster [DBG] pgmap v404: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:20 vm04 bash[20742]: cluster 2026-03-10T10:20:18.431396+0000 mgr.y (mgr.24422) 270 : cluster [DBG] pgmap v404: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:20 vm04 bash[20742]: audit 2026-03-10T10:20:20.238237+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:20:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:20 vm04 bash[20742]: audit 2026-03-10T10:20:20.238237+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:20:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:20 vm04 bash[20742]: cluster 2026-03-10T10:20:20.253576+0000 mon.a (mon.0) 2299 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T10:20:20.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:20 vm04 bash[20742]: cluster 2026-03-10T10:20:20.253576+0000 mon.a (mon.0) 2299 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T10:20:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:20 vm07 bash[23367]: audit 2026-03-10T10:20:18.398432+0000 mgr.y (mgr.24422) 269 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:20 vm07 bash[23367]: audit 2026-03-10T10:20:18.398432+0000 mgr.y (mgr.24422) 269 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:20 vm07 bash[23367]: cluster 2026-03-10T10:20:18.431396+0000 mgr.y (mgr.24422) 270 : cluster [DBG] pgmap v404: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:20 vm07 bash[23367]: cluster 2026-03-10T10:20:18.431396+0000 mgr.y (mgr.24422) 270 : cluster [DBG] pgmap v404: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:20 vm07 bash[23367]: audit 2026-03-10T10:20:20.238237+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:20:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:20 vm07 bash[23367]: audit 2026-03-10T10:20:20.238237+0000 mon.a (mon.0) 2298 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-45","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:20:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:20 vm07 bash[23367]: cluster 2026-03-10T10:20:20.253576+0000 mon.a (mon.0) 2299 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T10:20:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:20 vm07 bash[23367]: cluster 2026-03-10T10:20:20.253576+0000 mon.a (mon.0) 2299 : cluster [DBG] osdmap e299: 8 total, 8 up, 8 in 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:22 vm04 bash[28289]: cluster 2026-03-10T10:20:20.431786+0000 mgr.y (mgr.24422) 271 : cluster [DBG] pgmap v407: 292 pgs: 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:22 vm04 bash[28289]: cluster 2026-03-10T10:20:20.431786+0000 mgr.y (mgr.24422) 271 : cluster [DBG] pgmap v407: 292 pgs: 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:22 vm04 bash[28289]: audit 2026-03-10T10:20:21.271086+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]': finished 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:22 vm04 bash[28289]: audit 2026-03-10T10:20:21.271086+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? 
192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]': finished 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:22 vm04 bash[28289]: cluster 2026-03-10T10:20:21.285917+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:22 vm04 bash[28289]: cluster 2026-03-10T10:20:21.285917+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:22 vm04 bash[28289]: cluster 2026-03-10T10:20:21.455464+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:22 vm04 bash[28289]: cluster 2026-03-10T10:20:21.455464+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:22 vm04 bash[20742]: cluster 2026-03-10T10:20:20.431786+0000 mgr.y (mgr.24422) 271 : cluster [DBG] pgmap v407: 292 pgs: 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:22 vm04 bash[20742]: cluster 2026-03-10T10:20:20.431786+0000 mgr.y (mgr.24422) 271 : cluster [DBG] pgmap v407: 292 pgs: 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:22 vm04 bash[20742]: audit 2026-03-10T10:20:21.271086+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]': finished 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:22 vm04 bash[20742]: audit 2026-03-10T10:20:21.271086+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? 
192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]': finished 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:22 vm04 bash[20742]: cluster 2026-03-10T10:20:21.285917+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:22 vm04 bash[20742]: cluster 2026-03-10T10:20:21.285917+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:22 vm04 bash[20742]: cluster 2026-03-10T10:20:21.455464+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:22 vm04 bash[20742]: cluster 2026-03-10T10:20:21.455464+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:22 vm07 bash[23367]: cluster 2026-03-10T10:20:20.431786+0000 mgr.y (mgr.24422) 271 : cluster [DBG] pgmap v407: 292 pgs: 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:22 vm07 bash[23367]: cluster 2026-03-10T10:20:20.431786+0000 mgr.y (mgr.24422) 271 : cluster [DBG] pgmap v407: 292 pgs: 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:22 vm07 bash[23367]: audit 2026-03-10T10:20:21.271086+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]': finished 2026-03-10T10:20:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:22 vm07 bash[23367]: audit 2026-03-10T10:20:21.271086+0000 mon.a (mon.0) 2300 : audit [INF] from='client.? 
192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "RoundTripWriteFullPP_vm04-59259-59", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-RoundTripWriteFullPP_vm04-59259-59"}]': finished 2026-03-10T10:20:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:22 vm07 bash[23367]: cluster 2026-03-10T10:20:21.285917+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T10:20:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:22 vm07 bash[23367]: cluster 2026-03-10T10:20:21.285917+0000 mon.a (mon.0) 2301 : cluster [DBG] osdmap e300: 8 total, 8 up, 8 in 2026-03-10T10:20:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:22 vm07 bash[23367]: cluster 2026-03-10T10:20:21.455464+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:22 vm07 bash[23367]: cluster 2026-03-10T10:20:21.455464+0000 mon.a (mon.0) 2302 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:23.310 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:20:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:20:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:20:23.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:23 vm04 bash[28289]: cluster 2026-03-10T10:20:22.315688+0000 mon.a (mon.0) 2303 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T10:20:23.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:23 vm04 bash[28289]: cluster 2026-03-10T10:20:22.315688+0000 mon.a (mon.0) 2303 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T10:20:23.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:23 vm04 bash[20742]: cluster 2026-03-10T10:20:22.315688+0000 mon.a (mon.0) 2303 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T10:20:23.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:23 vm04 bash[20742]: cluster 2026-03-10T10:20:22.315688+0000 mon.a (mon.0) 2303 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T10:20:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:23 vm07 bash[23367]: cluster 2026-03-10T10:20:22.315688+0000 mon.a (mon.0) 2303 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T10:20:23.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:23 vm07 bash[23367]: cluster 2026-03-10T10:20:22.315688+0000 mon.a (mon.0) 2303 : cluster [DBG] osdmap e301: 8 total, 8 up, 8 in 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:24 vm04 bash[28289]: cluster 2026-03-10T10:20:22.432131+0000 mgr.y (mgr.24422) 272 : cluster [DBG] pgmap v410: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:24 vm04 bash[28289]: cluster 2026-03-10T10:20:22.432131+0000 mgr.y (mgr.24422) 272 : cluster [DBG] pgmap v410: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:24 vm04 bash[28289]: cluster 2026-03-10T10:20:23.321880+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:24 vm04 bash[28289]: cluster 
2026-03-10T10:20:23.321880+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:24 vm04 bash[28289]: audit 2026-03-10T10:20:23.322769+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:24 vm04 bash[28289]: audit 2026-03-10T10:20:23.322769+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:24 vm04 bash[20742]: cluster 2026-03-10T10:20:22.432131+0000 mgr.y (mgr.24422) 272 : cluster [DBG] pgmap v410: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:24 vm04 bash[20742]: cluster 2026-03-10T10:20:22.432131+0000 mgr.y (mgr.24422) 272 : cluster [DBG] pgmap v410: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:24 vm04 bash[20742]: cluster 2026-03-10T10:20:23.321880+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:24 vm04 bash[20742]: cluster 2026-03-10T10:20:23.321880+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:24 vm04 bash[20742]: audit 2026-03-10T10:20:23.322769+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch 2026-03-10T10:20:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:24 vm04 bash[20742]: audit 2026-03-10T10:20:23.322769+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? 
2026-03-10T10:20:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:24 vm07 bash[23367]: cluster 2026-03-10T10:20:22.432131+0000 mgr.y (mgr.24422) 272 : cluster [DBG] pgmap v410: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 678 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:24 vm07 bash[23367]: cluster 2026-03-10T10:20:23.321880+0000 mon.a (mon.0) 2304 : cluster [DBG] osdmap e302: 8 total, 8 up, 8 in
2026-03-10T10:20:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:24 vm07 bash[23367]: audit 2026-03-10T10:20:23.322769+0000 mon.a (mon.0) 2305 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]: dispatch
2026-03-10T10:20:25.346 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:7462ddf6::.RoundTripAppendPP (3182 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RacingRemovePP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RacingRemovePP (3037 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP (2993 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.RoundTripCmpExtPP2
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.RoundTripCmpExtPP2 (3117 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.PoolEIOFlag
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: setting pool EIO
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: max_success 100, min_failed 101
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.PoolEIOFlag (4202 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAio.MultiReads
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAio.MultiReads (3074 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [----------] 32 tests from LibRadosAio (113607 ms total)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp:
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.ReadIntoBufferlist
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioPP.ReadIntoBufferlist (3028 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.XattrsRoundTripPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioPP.XattrsRoundTripPP (9646 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RmXattrPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RmXattrPP (15466 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioPP.RemoveTestPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioPP.RemoveTestPP (3038 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [----------] 4 tests from LibRadosAioPP (31178 ms total)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp:
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosIoPP.XattrListPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosIoPP.XattrListPP (3564 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [----------] 1 test from LibRadosIoPP (3564 ms total)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp:
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleWritePP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleWritePP (14950 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.WaitForSafePP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.WaitForSafePP (7528 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP (6771 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP2
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP2 (8195 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripPP3
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripPP3 (3274 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripSparseReadPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripSparseReadPP (6466 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripAppendPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripAppendPP (6796 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsCompletePP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsCompletePP (7235 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.IsSafePP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.IsSafePP (7085 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ReturnValuePP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ReturnValuePP (7214 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushPP
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushPP (7088 ms)
2026-03-10T10:20:25.347 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.FlushAsyncPP
2026-03-10T10:20:25.348 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.FlushAsyncPP (7112 ms)
2026-03-10T10:20:25.348 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP
2026-03-10T10:20:25.348 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP (7124 ms)
2026-03-10T10:20:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:25 vm04 bash[28289]: audit 2026-03-10T10:20:24.331167+0000 mon.a (mon.0) 2306 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]': finished
2026-03-10T10:20:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:25 vm04 bash[28289]: cluster 2026-03-10T10:20:24.336524+0000 mon.a (mon.0) 2307 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in
2026-03-10T10:20:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:25 vm04 bash[28289]: audit 2026-03-10T10:20:24.338161+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]: dispatch
2026-03-10T10:20:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:25 vm04 bash[28289]: cluster 2026-03-10T10:20:24.432482+0000 mgr.y (mgr.24422) 273 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:20:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:25 vm04 bash[28289]: audit 2026-03-10T10:20:24.767609+0000 mon.a (mon.0) 2309 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:20:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:25 vm04 bash[28289]: audit 2026-03-10T10:20:25.124148+0000 mon.a (mon.0) 2310 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:20:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:25 vm04 bash[28289]: audit 2026-03-10T10:20:25.124829+0000 mon.a (mon.0) 2311 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:20:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:25 vm04 bash[28289]: audit 2026-03-10T10:20:25.132120+0000 mon.a (mon.0) 2312 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:20:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:25 vm04 bash[28289]: audit 2026-03-10T10:20:25.335010+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]': finished
2026-03-10T10:20:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:25 vm04 bash[28289]: cluster 2026-03-10T10:20:25.339519+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in
2026-03-10T10:20:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:25 vm04 bash[20742]: audit 2026-03-10T10:20:24.331167+0000 mon.a (mon.0) 2306 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]': finished
2026-03-10T10:20:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:25 vm04 bash[20742]: cluster 2026-03-10T10:20:24.336524+0000 mon.a (mon.0) 2307 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in
2026-03-10T10:20:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:25 vm04 bash[20742]: audit 2026-03-10T10:20:24.338161+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]: dispatch
2026-03-10T10:20:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:25 vm04 bash[20742]: cluster 2026-03-10T10:20:24.432482+0000 mgr.y (mgr.24422) 273 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:20:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:25 vm04 bash[20742]: audit 2026-03-10T10:20:24.767609+0000 mon.a (mon.0) 2309 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:20:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:25 vm04 bash[20742]: audit 2026-03-10T10:20:25.124148+0000 mon.a (mon.0) 2310 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:20:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:25 vm04 bash[20742]: audit 2026-03-10T10:20:25.124829+0000 mon.a (mon.0) 2311 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:20:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:25 vm04 bash[20742]: audit 2026-03-10T10:20:25.132120+0000 mon.a (mon.0) 2312 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:20:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:25 vm04 bash[20742]: audit 2026-03-10T10:20:25.335010+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]': finished
2026-03-10T10:20:25.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:25 vm04 bash[20742]: cluster 2026-03-10T10:20:25.339519+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in
2026-03-10T10:20:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:25 vm07 bash[23367]: audit 2026-03-10T10:20:24.331167+0000 mon.a (mon.0) 2306 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-RoundTripWriteFullPP_vm04-59259-59"}]': finished
2026-03-10T10:20:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:25 vm07 bash[23367]: cluster 2026-03-10T10:20:24.336524+0000 mon.a (mon.0) 2307 : cluster [DBG] osdmap e303: 8 total, 8 up, 8 in
2026-03-10T10:20:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:25 vm07 bash[23367]: audit 2026-03-10T10:20:24.338161+0000 mon.a (mon.0) 2308 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]: dispatch
2026-03-10T10:20:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:25 vm07 bash[23367]: cluster 2026-03-10T10:20:24.432482+0000 mgr.y (mgr.24422) 273 : cluster [DBG] pgmap v413: 292 pgs: 292 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:20:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:25 vm07 bash[23367]: audit 2026-03-10T10:20:24.767609+0000 mon.a (mon.0) 2309 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:20:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:25 vm07 bash[23367]: audit 2026-03-10T10:20:25.124148+0000 mon.a (mon.0) 2310 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:20:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:25 vm07 bash[23367]: audit 2026-03-10T10:20:25.124829+0000 mon.a (mon.0) 2311 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:20:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:25 vm07 bash[23367]: audit 2026-03-10T10:20:25.132120+0000 mon.a (mon.0) 2312 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:20:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:25 vm07 bash[23367]: audit 2026-03-10T10:20:25.335010+0000 mon.a (mon.0) 2313 : audit [INF] from='client.? 192.168.123.104:0/713540606' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"RoundTripWriteFullPP_vm04-59259-59"}]': finished
2026-03-10T10:20:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:25 vm07 bash[23367]: cluster 2026-03-10T10:20:25.339519+0000 mon.a (mon.0) 2314 : cluster [DBG] osdmap e304: 8 total, 8 up, 8 in
2026-03-10T10:20:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:27 vm04 bash[28289]: cluster 2026-03-10T10:20:26.386173+0000 mon.a (mon.0) 2315 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in
2026-03-10T10:20:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:27 vm04 bash[28289]: audit 2026-03-10T10:20:26.386913+0000 mon.c (mon.2) 426 : audit [INF] from='client.? 192.168.123.104:0/137211600' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:27 vm04 bash[28289]: audit 2026-03-10T10:20:26.389079+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:27 vm04 bash[28289]: cluster 2026-03-10T10:20:26.432848+0000 mgr.y (mgr.24422) 274 : cluster [DBG] pgmap v416: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:20:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:27 vm04 bash[20742]: cluster 2026-03-10T10:20:26.386173+0000 mon.a (mon.0) 2315 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in
2026-03-10T10:20:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:27 vm04 bash[20742]: audit 2026-03-10T10:20:26.386913+0000 mon.c (mon.2) 426 : audit [INF] from='client.? 192.168.123.104:0/137211600' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:27 vm04 bash[20742]: audit 2026-03-10T10:20:26.389079+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:27 vm04 bash[20742]: cluster 2026-03-10T10:20:26.432848+0000 mgr.y (mgr.24422) 274 : cluster [DBG] pgmap v416: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:20:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:27 vm07 bash[23367]: cluster 2026-03-10T10:20:26.386173+0000 mon.a (mon.0) 2315 : cluster [DBG] osdmap e305: 8 total, 8 up, 8 in
2026-03-10T10:20:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:27 vm07 bash[23367]: audit 2026-03-10T10:20:26.386913+0000 mon.c (mon.2) 426 : audit [INF] from='client.? 192.168.123.104:0/137211600' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:27 vm07 bash[23367]: audit 2026-03-10T10:20:26.389079+0000 mon.a (mon.0) 2316 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-60","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:27 vm07 bash[23367]: cluster 2026-03-10T10:20:26.432848+0000 mgr.y (mgr.24422) 274 : cluster [DBG] pgmap v416: 324 pgs: 32 unknown, 292 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:20:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:28 vm04 bash[28289]: cluster 2026-03-10T10:20:27.365850+0000 mon.a (mon.0) 2317 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:28 vm04 bash[28289]: audit 2026-03-10T10:20:27.373748+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-60","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:28 vm04 bash[28289]: cluster 2026-03-10T10:20:27.378337+0000 mon.a (mon.0) 2319 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in
2026-03-10T10:20:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:28 vm04 bash[28289]: audit 2026-03-10T10:20:27.872654+0000 mon.a (mon.0) 2320 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:28 vm04 bash[20742]: cluster 2026-03-10T10:20:27.365850+0000 mon.a (mon.0) 2317 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:28 vm04 bash[20742]: audit 2026-03-10T10:20:27.373748+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-60","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:28 vm04 bash[20742]: cluster 2026-03-10T10:20:27.378337+0000 mon.a (mon.0) 2319 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in
2026-03-10T10:20:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:28 vm04 bash[20742]: audit 2026-03-10T10:20:27.872654+0000 mon.a (mon.0) 2320 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:28.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:20:28 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:20:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:28 vm07 bash[23367]: cluster 2026-03-10T10:20:27.365850+0000 mon.a (mon.0) 2317 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:28 vm07 bash[23367]: audit 2026-03-10T10:20:27.373748+0000 mon.a (mon.0) 2318 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "RoundTripWriteFullPP2_vm04-59259-60","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:28 vm07 bash[23367]: cluster 2026-03-10T10:20:27.378337+0000 mon.a (mon.0) 2319 : cluster [DBG] osdmap e306: 8 total, 8 up, 8 in
2026-03-10T10:20:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:28 vm07 bash[23367]: audit 2026-03-10T10:20:27.872654+0000 mon.a (mon.0) 2320 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:29 vm04 bash[28289]: cluster 2026-03-10T10:20:28.402110+0000 mon.a (mon.0) 2321 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:29 vm04 bash[28289]: audit 2026-03-10T10:20:28.411074+0000 mgr.y (mgr.24422) 275 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:29 vm04 bash[28289]: audit 2026-03-10T10:20:28.429089+0000 mon.a (mon.0) 2322 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:29 vm04 bash[28289]: audit 2026-03-10T10:20:28.429693+0000 mon.a (mon.0) 2323 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:29 vm04 bash[28289]: audit 2026-03-10T10:20:28.429929+0000 mon.a (mon.0) 2324 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm04-59259-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:29 vm04 bash[28289]: cluster 2026-03-10T10:20:28.433217+0000 mgr.y (mgr.24422) 276 : cluster [DBG] pgmap v419: 292 pgs: 292 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:29 vm04 bash[20742]: cluster 2026-03-10T10:20:28.402110+0000 mon.a (mon.0) 2321 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:29 vm04 bash[20742]: audit 2026-03-10T10:20:28.411074+0000 mgr.y (mgr.24422) 275 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:29 vm04 bash[20742]: audit 2026-03-10T10:20:28.429089+0000 mon.a (mon.0) 2322 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:29 vm04 bash[20742]: audit 2026-03-10T10:20:28.429693+0000 mon.a (mon.0) 2323 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:29 vm04 bash[20742]: audit 2026-03-10T10:20:28.429929+0000 mon.a (mon.0) 2324 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm04-59259-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:29.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:29 vm04 bash[20742]: cluster 2026-03-10T10:20:28.433217+0000 mgr.y (mgr.24422) 276 : cluster [DBG] pgmap v419: 292 pgs: 292 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:20:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:29 vm07 bash[23367]: cluster 2026-03-10T10:20:28.402110+0000 mon.a (mon.0) 2321 : cluster [DBG] osdmap e307: 8 total, 8 up, 8 in
2026-03-10T10:20:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:29 vm07 bash[23367]: audit 2026-03-10T10:20:28.411074+0000 mgr.y (mgr.24422) 275 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:20:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:29 vm07 bash[23367]: audit 2026-03-10T10:20:28.429089+0000 mon.a (mon.0) 2322 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:30.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:29 vm07 bash[23367]: audit 2026-03-10T10:20:28.429693+0000 mon.a (mon.0) 2323 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:30.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:29 vm07 bash[23367]: audit 2026-03-10T10:20:28.429929+0000 mon.a (mon.0) 2324 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm04-59259-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:30.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:29 vm07 bash[23367]: cluster 2026-03-10T10:20:28.433217+0000 mgr.y (mgr.24422) 276 : cluster [DBG] pgmap v419: 292 pgs: 292 active+clean; 8.3 MiB data, 679 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:20:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:30 vm04 bash[28289]: audit 2026-03-10T10:20:29.548786+0000 mon.a (mon.0) 2325 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm04-59259-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:30 vm04 bash[28289]: cluster 2026-03-10T10:20:29.560970+0000 mon.a (mon.0) 2326 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in
2026-03-10T10:20:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:30 vm04 bash[28289]: audit 2026-03-10T10:20:29.561479+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm04-59259-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:30 vm04 bash[20742]: audit 2026-03-10T10:20:29.548786+0000 mon.a (mon.0) 2325 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm04-59259-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:30 vm04 bash[20742]: cluster 2026-03-10T10:20:29.560970+0000 mon.a (mon.0) 2326 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in
2026-03-10T10:20:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:30 vm04 bash[20742]: audit 2026-03-10T10:20:29.561479+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm04-59259-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:30 vm07 bash[23367]: audit 2026-03-10T10:20:29.548786+0000 mon.a (mon.0) 2325 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPP_vm04-59259-61", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:30 vm07 bash[23367]: cluster 2026-03-10T10:20:29.560970+0000 mon.a (mon.0) 2326 : cluster [DBG] osdmap e308: 8 total, 8 up, 8 in
2026-03-10T10:20:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:30 vm07 bash[23367]: audit 2026-03-10T10:20:29.561479+0000 mon.a (mon.0) 2327 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm04-59259-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:31 vm04 bash[28289]: cluster 2026-03-10T10:20:30.433524+0000 mgr.y (mgr.24422) 277 : cluster [DBG] pgmap v421: 292 pgs: 292 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:20:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:31 vm04 bash[28289]: cluster 2026-03-10T10:20:30.574887+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in
2026-03-10T10:20:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:31 vm04 bash[20742]: cluster 2026-03-10T10:20:30.433524+0000 mgr.y (mgr.24422) 277 : cluster [DBG] pgmap v421: 292 pgs: 292 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:20:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:31 vm04 bash[20742]: cluster 2026-03-10T10:20:30.574887+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in
2026-03-10T10:20:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:31 vm07 bash[23367]: cluster 2026-03-10T10:20:30.433524+0000 mgr.y (mgr.24422) 277 : cluster [DBG] pgmap v421: 292 pgs: 292 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:20:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:31 vm07 bash[23367]: cluster 2026-03-10T10:20:30.574887+0000 mon.a (mon.0) 2328 : cluster [DBG] osdmap e309: 8 total, 8 up, 8 in
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:32 vm04 bash[28289]: audit 2026-03-10T10:20:31.562150+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm04-59259-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm04-59259-61"}]': finished
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:32 vm04 bash[28289]: cluster 2026-03-10T10:20:31.575776+0000 mon.a (mon.0) 2330 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:32 vm04 bash[28289]: audit 2026-03-10T10:20:32.311694+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:32 vm04 bash[28289]: audit 2026-03-10T10:20:32.311982+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45"}]: dispatch
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:32 vm04 bash[28289]: audit 2026-03-10T10:20:32.566784+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45"}]': finished
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:32 vm04 bash[28289]: cluster 2026-03-10T10:20:32.573445+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:32 vm04 bash[20742]: audit 2026-03-10T10:20:31.562150+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm04-59259-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm04-59259-61"}]': finished
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:32 vm04 bash[20742]: cluster 2026-03-10T10:20:31.575776+0000 mon.a (mon.0) 2330 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:32 vm04 bash[20742]: audit 2026-03-10T10:20:32.311694+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:32 vm04 bash[20742]: audit 2026-03-10T10:20:32.311982+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45"}]: dispatch
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:32 vm04 bash[20742]: audit 2026-03-10T10:20:32.566784+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45"}]': finished
2026-03-10T10:20:32.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:32 vm04 bash[20742]: cluster 2026-03-10T10:20:32.573445+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in
2026-03-10T10:20:33.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:32 vm07 bash[23367]: audit 2026-03-10T10:20:31.562150+0000 mon.a (mon.0) 2329 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPP_vm04-59259-61", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPP_vm04-59259-61"}]': finished
2026-03-10T10:20:33.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:32 vm07 bash[23367]: cluster 2026-03-10T10:20:31.575776+0000 mon.a (mon.0) 2330 : cluster [DBG] osdmap e310: 8 total, 8 up, 8 in
2026-03-10T10:20:33.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:32 vm07 bash[23367]: audit 2026-03-10T10:20:32.311694+0000 mon.a (mon.0) 2331 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:33.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:32 vm07 bash[23367]: audit 2026-03-10T10:20:32.311982+0000 mon.a (mon.0) 2332 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45"}]: dispatch
2026-03-10T10:20:33.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:32 vm07 bash[23367]: audit 2026-03-10T10:20:32.566784+0000 mon.a (mon.0) 2333 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-45"}]': finished
2026-03-10T10:20:33.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:32 vm07 bash[23367]: cluster 2026-03-10T10:20:32.573445+0000 mon.a (mon.0) 2334 : cluster [DBG] osdmap e311: 8 total, 8 up, 8 in
2026-03-10T10:20:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:20:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:20:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:20:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:33 vm04 bash[28289]: cluster 2026-03-10T10:20:32.433827+0000 mgr.y (mgr.24422) 278 : cluster [DBG] pgmap v424: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:20:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:33 vm04 bash[28289]: cluster 2026-03-10T10:20:32.584112+0000 mon.a (mon.0) 2335 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
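Audit entries 2331-2333 are the RADOS API test tearing down a cache tier on test-rados-api-vm04-59491-6: the overlay is dropped first so clients stop being redirected to the cache pool, and only then is the tier detached. Roughly the equivalent CLI, with the pool names taken from the log:

    # detach the cache tier: drop the overlay first, then remove the tier
    ceph osd tier remove-overlay test-rados-api-vm04-59491-6
    ceph osd tier remove test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-45

The ordering matters: a pool that is still the overlay for its base pool cannot be removed as a tier.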
2026-03-10T10:20:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:33 vm04 bash[28289]: cluster 2026-03-10T10:20:33.598103+0000 mon.a (mon.0) 2336 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in
2026-03-10T10:20:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:33 vm04 bash[28289]: audit 2026-03-10T10:20:33.598748+0000 mon.a (mon.0) 2337 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:33 vm04 bash[20742]: cluster 2026-03-10T10:20:32.433827+0000 mgr.y (mgr.24422) 278 : cluster [DBG] pgmap v424: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:20:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:33 vm04 bash[20742]: cluster 2026-03-10T10:20:32.584112+0000 mon.a (mon.0) 2335 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:33 vm04 bash[20742]: cluster 2026-03-10T10:20:33.598103+0000 mon.a (mon.0) 2336 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in
2026-03-10T10:20:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:33 vm04 bash[20742]: audit 2026-03-10T10:20:33.598748+0000 mon.a (mon.0) 2337 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:33 vm07 bash[23367]: cluster 2026-03-10T10:20:32.433827+0000 mgr.y (mgr.24422) 278 : cluster [DBG] pgmap v424: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:20:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:33 vm07 bash[23367]: cluster 2026-03-10T10:20:32.584112+0000 mon.a (mon.0) 2335 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:33 vm07 bash[23367]: cluster 2026-03-10T10:20:33.598103+0000 mon.a (mon.0) 2336 : cluster [DBG] osdmap e312: 8 total, 8 up, 8 in
2026-03-10T10:20:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:33 vm07 bash[23367]: audit 2026-03-10T10:20:33.598748+0000 mon.a (mon.0) 2337 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:35 vm04 bash[28289]: cluster 2026-03-10T10:20:34.434424+0000 mgr.y (mgr.24422) 279 : cluster [DBG] pgmap v427: 260 pgs: 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:35 vm04 bash[28289]: audit 2026-03-10T10:20:34.599487+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm04-59259-61"}]': finished
2026-03-10T10:20:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:35 vm04 bash[28289]: cluster 2026-03-10T10:20:34.619684+0000 mon.a (mon.0) 2339 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in
2026-03-10T10:20:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:35 vm04 bash[28289]: audit 2026-03-10T10:20:34.620292+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:35 vm04 bash[28289]: audit 2026-03-10T10:20:34.620908+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:35 vm04 bash[20742]: cluster 2026-03-10T10:20:34.434424+0000 mgr.y (mgr.24422) 279 : cluster [DBG] pgmap v427: 260 pgs: 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:35 vm04 bash[20742]: audit 2026-03-10T10:20:34.599487+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm04-59259-61"}]': finished
2026-03-10T10:20:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:35 vm04 bash[20742]: cluster 2026-03-10T10:20:34.619684+0000 mon.a (mon.0) 2339 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in
2026-03-10T10:20:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:35 vm04 bash[20742]: audit 2026-03-10T10:20:34.620292+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:35.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:35 vm04 bash[20742]: audit 2026-03-10T10:20:34.620908+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:35 vm07 bash[23367]: cluster 2026-03-10T10:20:34.434424+0000 mgr.y (mgr.24422) 279 : cluster [DBG] pgmap v427: 260 pgs: 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:36.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:35 vm07 bash[23367]: audit 2026-03-10T10:20:34.599487+0000 mon.a (mon.0) 2338 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPP_vm04-59259-61"}]': finished
2026-03-10T10:20:36.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:35 vm07 bash[23367]: cluster 2026-03-10T10:20:34.619684+0000 mon.a (mon.0) 2339 : cluster [DBG] osdmap e313: 8 total, 8 up, 8 in
2026-03-10T10:20:36.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:35 vm07 bash[23367]: audit 2026-03-10T10:20:34.620292+0000 mon.a (mon.0) 2340 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm04-59259-61"}]: dispatch
2026-03-10T10:20:36.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:35 vm07 bash[23367]: audit 2026-03-10T10:20:34.620908+0000 mon.a (mon.0) 2341 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-47","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:36 vm04 bash[28289]: audit 2026-03-10T10:20:35.603911+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm04-59259-61"}]': finished
2026-03-10T10:20:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:36 vm04 bash[28289]: audit 2026-03-10T10:20:35.604007+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-47","app": "rados","yes_i_really_mean_it": true}]': finished
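Entries 2337-2343 are cleanup plus bookkeeping: the temporary EC profile and the CRUSH rule that pool creation generated from it are removed, while another test client tags its pool with the 'rados' application, which is what clears a pool out of the POOL_APP_NOT_ENABLED warning seen above. A rough CLI equivalent of these three mon commands:

    # drop the test's EC profile and its generated CRUSH rule
    ceph osd erasure-code-profile rm testprofile-SimpleStatPP_vm04-59259-61
    ceph osd crush rule rm SimpleStatPP_vm04-59259-61
    # tag the pool with an application so the health warning clears
    ceph osd pool application enable test-rados-api-vm04-59491-47 rados \
        --yes-i-really-mean-it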
2026-03-10T10:20:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:36 vm04 bash[28289]: cluster 2026-03-10T10:20:35.609135+0000 mon.a (mon.0) 2344 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in
2026-03-10T10:20:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:36 vm04 bash[28289]: audit 2026-03-10T10:20:35.636647+0000 mon.c (mon.2) 427 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:36 vm04 bash[28289]: audit 2026-03-10T10:20:35.658066+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:36 vm04 bash[28289]: audit 2026-03-10T10:20:35.662146+0000 mon.c (mon.2) 428 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:36 vm04 bash[28289]: audit 2026-03-10T10:20:35.665391+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:36 vm04 bash[28289]: audit 2026-03-10T10:20:35.669840+0000 mon.c (mon.2) 429 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm04-59259-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:36 vm04 bash[28289]: audit 2026-03-10T10:20:35.670098+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm04-59259-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:36 vm04 bash[28289]: audit 2026-03-10T10:20:35.694806+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:20:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:36 vm04 bash[20742]: audit 2026-03-10T10:20:35.603911+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm04-59259-61"}]': finished
2026-03-10T10:20:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:36 vm04 bash[20742]: audit 2026-03-10T10:20:35.604007+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-47","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:36 vm04 bash[20742]: cluster 2026-03-10T10:20:35.609135+0000 mon.a (mon.0) 2344 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in
2026-03-10T10:20:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:36 vm04 bash[20742]: audit 2026-03-10T10:20:35.636647+0000 mon.c (mon.2) 427 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:36 vm04 bash[20742]: audit 2026-03-10T10:20:35.658066+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:36 vm04 bash[20742]: audit 2026-03-10T10:20:35.662146+0000 mon.c (mon.2) 428 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:36 vm04 bash[20742]: audit 2026-03-10T10:20:35.665391+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:36 vm04 bash[20742]: audit 2026-03-10T10:20:35.669840+0000 mon.c (mon.2) 429 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm04-59259-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:36 vm04 bash[20742]: audit 2026-03-10T10:20:35.670098+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm04-59259-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:37.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:36 vm04 bash[20742]: audit 2026-03-10T10:20:35.694806+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:20:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:36 vm07 bash[23367]: audit 2026-03-10T10:20:35.603911+0000 mon.a (mon.0) 2342 : audit [INF] from='client.? 192.168.123.104:0/3978980185' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPP_vm04-59259-61"}]': finished
2026-03-10T10:20:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:36 vm07 bash[23367]: audit 2026-03-10T10:20:35.604007+0000 mon.a (mon.0) 2343 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-47","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:36 vm07 bash[23367]: cluster 2026-03-10T10:20:35.609135+0000 mon.a (mon.0) 2344 : cluster [DBG] osdmap e314: 8 total, 8 up, 8 in
2026-03-10T10:20:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:36 vm07 bash[23367]: audit 2026-03-10T10:20:35.636647+0000 mon.c (mon.2) 427 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:36 vm07 bash[23367]: audit 2026-03-10T10:20:35.658066+0000 mon.a (mon.0) 2345 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:36 vm07 bash[23367]: audit 2026-03-10T10:20:35.662146+0000 mon.c (mon.2) 428 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:36 vm07 bash[23367]: audit 2026-03-10T10:20:35.665391+0000 mon.a (mon.0) 2346 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:36 vm07 bash[23367]: audit 2026-03-10T10:20:35.669840+0000 mon.c (mon.2) 429 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm04-59259-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:36 vm07 bash[23367]: audit 2026-03-10T10:20:35.670098+0000 mon.a (mon.0) 2347 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm04-59259-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:36 vm07 bash[23367]: audit 2026-03-10T10:20:35.694806+0000 mon.a (mon.0) 2348 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:20:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:37 vm04 bash[28289]: cluster 2026-03-10T10:20:36.434805+0000 mgr.y (mgr.24422) 280 : cluster [DBG] pgmap v430: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:37 vm04 bash[28289]: audit 2026-03-10T10:20:36.783717+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm04-59259-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
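Audit entry 2348 attaches a new cache tier; --force-nonempty is required because the prospective cache pool already holds objects, which Ceph would otherwise refuse since stale contents could shadow the base pool. The paired entries such as 427/2345 also show the mon forwarding pattern: a command sent to peon mon.c is logged there with the client address, then logged again by leader mon.a with an empty from='client.? ' when it executes the forwarded request. Roughly the CLI equivalent of the tier attach:

    # attach a (non-empty) cache pool as a tier of the base pool
    ceph osd tier add test-rados-api-vm04-59491-6 \
        test-rados-api-vm04-59491-47 --force-nonempty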
2026-03-10T10:20:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:37 vm04 bash[28289]: audit 2026-03-10T10:20:36.783843+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:20:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:37 vm04 bash[28289]: cluster 2026-03-10T10:20:36.800787+0000 mon.a (mon.0) 2351 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in
2026-03-10T10:20:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:37 vm04 bash[28289]: audit 2026-03-10T10:20:36.801895+0000 mon.c (mon.2) 430 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:37 vm04 bash[28289]: audit 2026-03-10T10:20:36.826419+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-47"}]: dispatch
2026-03-10T10:20:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:37 vm04 bash[28289]: audit 2026-03-10T10:20:36.826509+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:37 vm04 bash[28289]: audit 2026-03-10T10:20:37.787217+0000 mon.a (mon.0) 2354 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-47"}]': finished
2026-03-10T10:20:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:37 vm04 bash[28289]: cluster 2026-03-10T10:20:37.799583+0000 mon.a (mon.0) 2355 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:37 vm04 bash[28289]: audit 2026-03-10T10:20:37.800024+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]: dispatch
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:37 vm04 bash[20742]: cluster 2026-03-10T10:20:36.434805+0000 mgr.y (mgr.24422) 280 : cluster [DBG] pgmap v430: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:37 vm04 bash[20742]: audit 2026-03-10T10:20:36.783717+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm04-59259-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:37 vm04 bash[20742]: audit 2026-03-10T10:20:36.783843+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:37 vm04 bash[20742]: cluster 2026-03-10T10:20:36.800787+0000 mon.a (mon.0) 2351 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:37 vm04 bash[20742]: audit 2026-03-10T10:20:36.801895+0000 mon.c (mon.2) 430 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:37 vm04 bash[20742]: audit 2026-03-10T10:20:36.826419+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-47"}]: dispatch
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:37 vm04 bash[20742]: audit 2026-03-10T10:20:36.826509+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:37 vm04 bash[20742]: audit 2026-03-10T10:20:37.787217+0000 mon.a (mon.0) 2354 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-47"}]': finished
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:37 vm04 bash[20742]: cluster 2026-03-10T10:20:37.799583+0000 mon.a (mon.0) 2355 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in
2026-03-10T10:20:38.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:37 vm04 bash[20742]: audit 2026-03-10T10:20:37.800024+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]: dispatch
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]: dispatch 2026-03-10T10:20:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: cluster 2026-03-10T10:20:36.434805+0000 mgr.y (mgr.24422) 280 : cluster [DBG] pgmap v430: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: cluster 2026-03-10T10:20:36.434805+0000 mgr.y (mgr.24422) 280 : cluster [DBG] pgmap v430: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 680 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:36.783717+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm04-59259-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:36.783717+0000 mon.a (mon.0) 2349 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-SimpleStatPPNS_vm04-59259-62", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:36.783843+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:36.783843+0000 mon.a (mon.0) 2350 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: cluster 2026-03-10T10:20:36.800787+0000 mon.a (mon.0) 2351 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: cluster 2026-03-10T10:20:36.800787+0000 mon.a (mon.0) 2351 : cluster [DBG] osdmap e315: 8 total, 8 up, 8 in 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:36.801895+0000 mon.c (mon.2) 430 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:36.801895+0000 mon.c (mon.2) 430 : audit [INF] from='client.? 
192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:36.826419+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-47"}]: dispatch 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:36.826419+0000 mon.a (mon.0) 2352 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-47"}]: dispatch 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:36.826509+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:36.826509+0000 mon.a (mon.0) 2353 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:37.787217+0000 mon.a (mon.0) 2354 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-47"}]': finished 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:37.787217+0000 mon.a (mon.0) 2354 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-47"}]': finished 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: cluster 2026-03-10T10:20:37.799583+0000 mon.a (mon.0) 2355 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: cluster 2026-03-10T10:20:37.799583+0000 mon.a (mon.0) 2355 : cluster [DBG] osdmap e316: 8 total, 8 up, 8 in 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:37.800024+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]: dispatch 2026-03-10T10:20:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:37 vm07 bash[23367]: audit 2026-03-10T10:20:37.800024+0000 mon.a (mon.0) 2356 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]: dispatch 2026-03-10T10:20:38.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:20:38 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:20:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: cluster 2026-03-10T10:20:38.787250+0000 mon.a (mon.0) 2357 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: cluster 2026-03-10T10:20:38.787250+0000 mon.a (mon.0) 2357 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:38.792076+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:38.792076+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:38.792223+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]': finished 2026-03-10T10:20:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:38.792223+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]': finished 2026-03-10T10:20:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: cluster 2026-03-10T10:20:38.802237+0000 mon.a (mon.0) 2360 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-10T10:20:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: cluster 2026-03-10T10:20:38.802237+0000 mon.a (mon.0) 2360 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-10T10:20:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:38.812904+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:20:39.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:38.812904+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: cluster 2026-03-10T10:20:38.787250+0000 mon.a (mon.0) 2357 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: cluster 2026-03-10T10:20:38.787250+0000 mon.a (mon.0) 2357 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:38.792076+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:38.792076+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:38.792223+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]': finished 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:38.792223+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]': finished 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: cluster 2026-03-10T10:20:38.802237+0000 mon.a (mon.0) 2360 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: cluster 2026-03-10T10:20:38.802237+0000 mon.a (mon.0) 2360 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:38.812904+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:38.812904+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:38 vm04 bash[20742]: cluster 2026-03-10T10:20:38.787250+0000 mon.a (mon.0) 2357 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:38 vm04 bash[20742]: cluster 2026-03-10T10:20:38.787250+0000 mon.a (mon.0) 2357 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:38 vm04 bash[20742]: audit 2026-03-10T10:20:38.792076+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:38 vm04 bash[20742]: audit 2026-03-10T10:20:38.792076+0000 mon.a (mon.0) 2358 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "SimpleStatPPNS_vm04-59259-62", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:38 vm04 bash[20742]: audit 2026-03-10T10:20:38.792223+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]': finished 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:38 vm04 bash[20742]: audit 2026-03-10T10:20:38.792223+0000 mon.a (mon.0) 2359 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-47", "mode": "writeback"}]': finished 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:38 vm04 bash[20742]: cluster 2026-03-10T10:20:38.802237+0000 mon.a (mon.0) 2360 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:38 vm04 bash[20742]: cluster 2026-03-10T10:20:38.802237+0000 mon.a (mon.0) 2360 : cluster [DBG] osdmap e317: 8 total, 8 up, 8 in 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:38 vm04 bash[20742]: audit 2026-03-10T10:20:38.812904+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:20:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:38 vm04 bash[20742]: audit 2026-03-10T10:20:38.812904+0000 mon.a (mon.0) 2361 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:20:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:38.421149+0000 mgr.y (mgr.24422) 281 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:38.421149+0000 mgr.y (mgr.24422) 281 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: cluster 2026-03-10T10:20:38.435365+0000 mgr.y (mgr.24422) 282 : cluster [DBG] pgmap v433: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-10T10:20:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: cluster 2026-03-10T10:20:38.435365+0000 mgr.y (mgr.24422) 282 : cluster [DBG] pgmap v433: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-10T10:20:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:39.794904+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:20:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:39.794904+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:20:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: cluster 2026-03-10T10:20:39.799077+0000 mon.a (mon.0) 2363 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-10T10:20:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: cluster 2026-03-10T10:20:39.799077+0000 mon.a (mon.0) 2363 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-10T10:20:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:39.799892+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:39 vm07 bash[23367]: audit 2026-03-10T10:20:39.799892+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:38.421149+0000 mgr.y (mgr.24422) 281 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:38.421149+0000 mgr.y (mgr.24422) 281 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: cluster 2026-03-10T10:20:38.435365+0000 mgr.y (mgr.24422) 282 : cluster [DBG] pgmap v433: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: cluster 2026-03-10T10:20:38.435365+0000 mgr.y (mgr.24422) 282 : cluster [DBG] pgmap v433: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:39.794904+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:39.794904+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: cluster 2026-03-10T10:20:39.799077+0000 mon.a (mon.0) 2363 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: cluster 2026-03-10T10:20:39.799077+0000 mon.a (mon.0) 2363 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:39.799892+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:39 vm04 bash[28289]: audit 2026-03-10T10:20:39.799892+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:39 vm04 bash[20742]: audit 2026-03-10T10:20:38.421149+0000 mgr.y (mgr.24422) 281 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:39 vm04 bash[20742]: audit 2026-03-10T10:20:38.421149+0000 mgr.y (mgr.24422) 281 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:39 vm04 bash[20742]: cluster 2026-03-10T10:20:38.435365+0000 mgr.y (mgr.24422) 282 : cluster [DBG] pgmap v433: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:39 vm04 bash[20742]: cluster 2026-03-10T10:20:38.435365+0000 mgr.y (mgr.24422) 282 : cluster [DBG] pgmap v433: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s wr, 1 op/s 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:39 vm04 bash[20742]: audit 2026-03-10T10:20:39.794904+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:39 vm04 bash[20742]: audit 2026-03-10T10:20:39.794904+0000 mon.a (mon.0) 2362 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:39 vm04 bash[20742]: cluster 2026-03-10T10:20:39.799077+0000 mon.a (mon.0) 2363 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:39 vm04 bash[20742]: cluster 2026-03-10T10:20:39.799077+0000 mon.a (mon.0) 2363 : cluster [DBG] osdmap e318: 8 total, 8 up, 8 in 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:39 vm04 bash[20742]: audit 2026-03-10T10:20:39.799892+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:39 vm04 bash[20742]: audit 2026-03-10T10:20:39.799892+0000 mon.a (mon.0) 2364 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: cluster 2026-03-10T10:20:40.435655+0000 mgr.y (mgr.24422) 283 : cluster [DBG] pgmap v436: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: cluster 2026-03-10T10:20:40.435655+0000 mgr.y (mgr.24422) 283 : cluster [DBG] pgmap v436: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: audit 2026-03-10T10:20:40.797165+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: audit 2026-03-10T10:20:40.797165+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: cluster 2026-03-10T10:20:40.799985+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: cluster 2026-03-10T10:20:40.799985+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: audit 2026-03-10T10:20:40.801603+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: audit 2026-03-10T10:20:40.801603+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: audit 2026-03-10T10:20:40.807186+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: audit 2026-03-10T10:20:40.807186+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: audit 2026-03-10T10:20:40.807386+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? 
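The audit entries above (mon.a 2350 through 2367) record the rados_api_tests client layering a writeback cache tier over its base pool and then configuring hit sets on the cache pool. The commands appear in the log in their JSON mon-command form; rendered as the equivalent ceph CLI, the sequence is approximately:

    ceph osd tier add test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-47 --force-nonempty
    ceph osd tier set-overlay test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-47
    ceph osd tier cache-mode test-rados-api-vm04-59491-47 writeback
    ceph osd pool set test-rados-api-vm04-59491-47 hit_set_count 2
    ceph osd pool set test-rados-api-vm04-59491-47 hit_set_period 600
    ceph osd pool set test-rados-api-vm04-59491-47 hit_set_type bloom

The CACHE_POOL_NO_HIT_SET warning raised at mon.a 2357 is a side effect of this ordering: writeback mode is enabled before any hit_set parameters exist, and the check clears (mon.a 2370 below) once hit_set_type is applied. The remaining cache-pool settings, min_read_recency_for_promote and hit_set_grade_decay_rate, follow in entries 2374 through 2384.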
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: audit 2026-03-10T10:20:40.807386+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: cluster 2026-03-10T10:20:40.982460+0000 mon.a (mon.0) 2369 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:41 vm04 bash[28289]: cluster 2026-03-10T10:20:40.982460+0000 mon.a (mon.0) 2369 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: cluster 2026-03-10T10:20:40.435655+0000 mgr.y (mgr.24422) 283 : cluster [DBG] pgmap v436: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: cluster 2026-03-10T10:20:40.435655+0000 mgr.y (mgr.24422) 283 : cluster [DBG] pgmap v436: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: audit 2026-03-10T10:20:40.797165+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: audit 2026-03-10T10:20:40.797165+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: cluster 2026-03-10T10:20:40.799985+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: cluster 2026-03-10T10:20:40.799985+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: audit 2026-03-10T10:20:40.801603+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: audit 2026-03-10T10:20:40.801603+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: audit 2026-03-10T10:20:40.807186+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: audit 2026-03-10T10:20:40.807186+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: audit 2026-03-10T10:20:40.807386+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: audit 2026-03-10T10:20:40.807386+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: cluster 2026-03-10T10:20:40.982460+0000 mon.a (mon.0) 2369 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:41 vm04 bash[20742]: cluster 2026-03-10T10:20:40.982460+0000 mon.a (mon.0) 2369 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: cluster 2026-03-10T10:20:40.435655+0000 mgr.y (mgr.24422) 283 : cluster [DBG] pgmap v436: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T10:20:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: cluster 2026-03-10T10:20:40.435655+0000 mgr.y (mgr.24422) 283 : cluster [DBG] pgmap v436: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.0 KiB/s wr, 6 op/s 2026-03-10T10:20:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: audit 2026-03-10T10:20:40.797165+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: audit 2026-03-10T10:20:40.797165+0000 mon.a (mon.0) 2365 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:20:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: cluster 2026-03-10T10:20:40.799985+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-10T10:20:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: cluster 2026-03-10T10:20:40.799985+0000 mon.a (mon.0) 2366 : cluster [DBG] osdmap e319: 8 total, 8 up, 8 in 2026-03-10T10:20:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: audit 2026-03-10T10:20:40.801603+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: audit 2026-03-10T10:20:40.801603+0000 mon.a (mon.0) 2367 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:20:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: audit 2026-03-10T10:20:40.807186+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: audit 2026-03-10T10:20:40.807186+0000 mon.c (mon.2) 431 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: audit 2026-03-10T10:20:40.807386+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: audit 2026-03-10T10:20:40.807386+0000 mon.a (mon.0) 2368 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: cluster 2026-03-10T10:20:40.982460+0000 mon.a (mon.0) 2369 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:41 vm07 bash[23367]: cluster 2026-03-10T10:20:40.982460+0000 mon.a (mon.0) 2369 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:43.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:20:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:20:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: cluster 2026-03-10T10:20:41.797305+0000 mon.a (mon.0) 2370 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: cluster 2026-03-10T10:20:41.797305+0000 mon.a (mon.0) 2370 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:41.799803+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:41.799803+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:41.800138+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:41.800138+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: cluster 2026-03-10T10:20:41.803117+0000 mon.a (mon.0) 2373 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: cluster 2026-03-10T10:20:41.803117+0000 mon.a (mon.0) 2373 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:41.806596+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 
192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:41.806596+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:41.806919+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:41.806919+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:41.806979+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:41.806979+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:42.804417+0000 mon.a (mon.0) 2376 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:42 vm04 bash[28289]: audit 2026-03-10T10:20:42.804417+0000 mon.a (mon.0) 2376 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: cluster 2026-03-10T10:20:41.797305+0000 mon.a (mon.0) 2370 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: cluster 2026-03-10T10:20:41.797305+0000 mon.a (mon.0) 2370 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:41.799803+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:41.799803+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:41.800138+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:43.211 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:41.800138+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:43.212 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: cluster 2026-03-10T10:20:41.803117+0000 mon.a (mon.0) 2373 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T10:20:43.212 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: cluster 2026-03-10T10:20:41.803117+0000 mon.a (mon.0) 2373 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T10:20:43.212 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:41.806596+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.212 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:41.806596+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.212 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:41.806919+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:20:43.212 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:41.806919+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:20:43.212 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:41.806979+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.212 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:41.806979+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.212 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:42.804417+0000 mon.a (mon.0) 2376 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:20:43.212 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:42 vm04 bash[20742]: audit 2026-03-10T10:20:42.804417+0000 mon.a (mon.0) 2376 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:20:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: cluster 2026-03-10T10:20:41.797305+0000 mon.a (mon.0) 2370 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:20:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: cluster 2026-03-10T10:20:41.797305+0000 mon.a (mon.0) 2370 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:20:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:41.799803+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:41.799803+0000 mon.a (mon.0) 2371 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:41.800138+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:41.800138+0000 mon.a (mon.0) 2372 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: cluster 2026-03-10T10:20:41.803117+0000 mon.a (mon.0) 2373 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: cluster 2026-03-10T10:20:41.803117+0000 mon.a (mon.0) 2373 : cluster [DBG] osdmap e320: 8 total, 8 up, 8 in 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:41.806596+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 
192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:41.806596+0000 mon.c (mon.2) 432 : audit [INF] from='client.? 192.168.123.104:0/3089813681' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:41.806919+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:41.806919+0000 mon.a (mon.0) 2374 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:41.806979+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:41.806979+0000 mon.a (mon.0) 2375 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]: dispatch 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:42.804417+0000 mon.a (mon.0) 2376 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:20:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:42 vm07 bash[23367]: audit 2026-03-10T10:20:42.804417+0000 mon.a (mon.0) 2376 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: cluster 2026-03-10T10:20:42.435917+0000 mgr.y (mgr.24422) 284 : cluster [DBG] pgmap v439: 292 pgs: 292 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 4 op/s 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: cluster 2026-03-10T10:20:42.435917+0000 mgr.y (mgr.24422) 284 : cluster [DBG] pgmap v439: 292 pgs: 292 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 4 op/s 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.805067+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.805067+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]': finished 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: cluster 2026-03-10T10:20:42.810791+0000 mon.a (mon.0) 2378 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: cluster 2026-03-10T10:20:42.810791+0000 mon.a (mon.0) 2378 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.818612+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.818612+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.821310+0000 mon.c (mon.2) 433 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.821310+0000 mon.c (mon.2) 433 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.826079+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.826079+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.827167+0000 mon.c (mon.2) 434 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.827167+0000 mon.c (mon.2) 434 : audit [INF] from='client.? 
192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.834250+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.834250+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.835133+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.835133+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.835353+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.835353+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.878630+0000 mon.a (mon.0) 2383 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:42.878630+0000 mon.a (mon.0) 2383 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:43.806996+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:43.806996+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:43.807107+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:43.807107+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:43.811699+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm04-59259-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:43.811699+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm04-59259-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: cluster 2026-03-10T10:20:43.813984+0000 mon.a (mon.0) 2386 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: cluster 2026-03-10T10:20:43.813984+0000 mon.a (mon.0) 2386 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:43.814903+0000 mon.a (mon.0) 2387 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:43.814903+0000 mon.a (mon.0) 2387 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:43.815034+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm04-59259-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:43 vm04 bash[28289]: audit 2026-03-10T10:20:43.815034+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? 
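The audit sequence above is the erasure-pool fixture the rados API test rebuilds around each case: remove any stale erasure-code profile and CRUSH rule, set a fresh profile (k=2, m=1, crush-failure-domain=osd), then create an 8-PG erasure pool from it. A minimal sketch of issuing the same mon commands through the librados Python binding; the conffile path, profile name, and pool name below are hypothetical placeholders, not the test's generated `*_vm04-59259-63` names.

```python
import json
import rados

# Sketch only: assumes a reachable cluster and a client.admin keyring.
# Each dict serializes to the same JSON the mon records under cmd=[...].
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

def mon_cmd(**kwargs):
    # mon_command() takes a JSON-formatted command string plus an input buffer
    # and returns (retcode, output buffer, status string).
    ret, outbuf, outs = cluster.mon_command(json.dumps(kwargs), b'')
    if ret != 0:
        raise RuntimeError(outs)
    return outbuf

mon_cmd(prefix='osd erasure-code-profile set',
        name='testprofile-example',                # placeholder name
        profile=['k=2', 'm=1', 'crush-failure-domain=osd'])
mon_cmd(prefix='osd pool create', pool='example-ec-pool',  # placeholder pool
        pool_type='erasure', pg_num=8, pgp_num=8,
        erasure_code_profile='testprofile-example')
cluster.shutdown()
```

Each such command shows up in the mon audit log twice, as the `dispatch` entry when it is received and as the `finished` entry once the map update commits, which is exactly the pairing visible above.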
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: cluster 2026-03-10T10:20:42.435917+0000 mgr.y (mgr.24422) 284 : cluster [DBG] pgmap v439: 292 pgs: 292 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 4 op/s
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:42.805067+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]': finished
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: cluster 2026-03-10T10:20:42.810791+0000 mon.a (mon.0) 2378 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:42.818612+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:42.821310+0000 mon.c (mon.2) 433 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:42.826079+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:42.827167+0000 mon.c (mon.2) 434 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:42.834250+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:42.835133+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:42.835353+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:42.878630+0000 mon.a (mon.0) 2383 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:43.806996+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:43.807107+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:43.811699+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm04-59259-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: cluster 2026-03-10T10:20:43.813984+0000 mon.a (mon.0) 2386 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in
2026-03-10T10:20:44.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:43.814903+0000 mon.a (mon.0) 2387 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T10:20:44.205 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:43 vm04 bash[20742]: audit 2026-03-10T10:20:43.815034+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm04-59259-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: cluster 2026-03-10T10:20:42.435917+0000 mgr.y (mgr.24422) 284 : cluster [DBG] pgmap v439: 292 pgs: 292 active+clean; 8.3 MiB data, 681 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.0 KiB/s wr, 4 op/s
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:42.805067+0000 mon.a (mon.0) 2377 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"SimpleStatPPNS_vm04-59259-62"}]': finished
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: cluster 2026-03-10T10:20:42.810791+0000 mon.a (mon.0) 2378 : cluster [DBG] osdmap e321: 8 total, 8 up, 8 in
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:42.818612+0000 mon.a (mon.0) 2379 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:42.821310+0000 mon.c (mon.2) 433 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:42.826079+0000 mon.a (mon.0) 2380 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:42.827167+0000 mon.c (mon.2) 434 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:42.834250+0000 mon.a (mon.0) 2381 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:42.835133+0000 mon.c (mon.2) 435 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:42.835353+0000 mon.a (mon.0) 2382 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:42.878630+0000 mon.a (mon.0) 2383 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:43.806996+0000 mon.a (mon.0) 2384 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_grade_decay_rate","val": "20"}]': finished
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:43.807107+0000 mon.a (mon.0) 2385 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-StatRemovePP_vm04-59259-63", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:43.811699+0000 mon.c (mon.2) 436 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm04-59259-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: cluster 2026-03-10T10:20:43.813984+0000 mon.a (mon.0) 2386 : cluster [DBG] osdmap e322: 8 total, 8 up, 8 in
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:43.814903+0000 mon.a (mon.0) 2387 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_search_last_n","val": "1"}]: dispatch
2026-03-10T10:20:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:43 vm07 bash[23367]: audit 2026-03-10T10:20:43.815034+0000 mon.a (mon.0) 2388 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "StatRemovePP_vm04-59259-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:45 vm04 bash[28289]: cluster 2026-03-10T10:20:44.436242+0000 mgr.y (mgr.24422) 285 : cluster [DBG] pgmap v442: 292 pgs: 292 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:45 vm04 bash[28289]: audit 2026-03-10T10:20:44.810453+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_search_last_n","val": "1"}]': finished
2026-03-10T10:20:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:45 vm04 bash[28289]: cluster 2026-03-10T10:20:44.819848+0000 mon.a (mon.0) 2390 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in
2026-03-10T10:20:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:45 vm04 bash[28289]: audit 2026-03-10T10:20:44.864930+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:45 vm04 bash[20742]: cluster 2026-03-10T10:20:44.436242+0000 mgr.y (mgr.24422) 285 : cluster [DBG] pgmap v442: 292 pgs: 292 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:45 vm04 bash[20742]: audit 2026-03-10T10:20:44.810453+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_search_last_n","val": "1"}]': finished
2026-03-10T10:20:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:45 vm04 bash[20742]: cluster 2026-03-10T10:20:44.819848+0000 mon.a (mon.0) 2390 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in
2026-03-10T10:20:46.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:45 vm04 bash[20742]: audit 2026-03-10T10:20:44.864930+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:45 vm07 bash[23367]: cluster 2026-03-10T10:20:44.436242+0000 mgr.y (mgr.24422) 285 : cluster [DBG] pgmap v442: 292 pgs: 292 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:45 vm07 bash[23367]: audit 2026-03-10T10:20:44.810453+0000 mon.a (mon.0) 2389 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-47","var": "hit_set_search_last_n","val": "1"}]': finished
2026-03-10T10:20:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:45 vm07 bash[23367]: cluster 2026-03-10T10:20:44.819848+0000 mon.a (mon.0) 2390 : cluster [DBG] osdmap e323: 8 total, 8 up, 8 in
2026-03-10T10:20:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:45 vm07 bash[23367]: audit 2026-03-10T10:20:44.864930+0000 mon.a (mon.0) 2391 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:46 vm04 bash[28289]: audit 2026-03-10T10:20:45.823421+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm04-59259-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm04-59259-63"}]': finished
2026-03-10T10:20:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:46 vm04 bash[28289]: audit 2026-03-10T10:20:45.823544+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:20:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:46 vm04 bash[28289]: cluster 2026-03-10T10:20:45.827575+0000 mon.a (mon.0) 2394 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in
2026-03-10T10:20:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:46 vm04 bash[28289]: audit 2026-03-10T10:20:45.843354+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]: dispatch
2026-03-10T10:20:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:46 vm04 bash[28289]: cluster 2026-03-10T10:20:46.458827+0000 mon.a (mon.0) 2396 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:46 vm04 bash[20742]: audit 2026-03-10T10:20:45.823421+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm04-59259-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm04-59259-63"}]': finished
2026-03-10T10:20:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:46 vm04 bash[20742]: audit 2026-03-10T10:20:45.823544+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:20:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:46 vm04 bash[20742]: cluster 2026-03-10T10:20:45.827575+0000 mon.a (mon.0) 2394 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in
2026-03-10T10:20:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:46 vm04 bash[20742]: audit 2026-03-10T10:20:45.843354+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]: dispatch
2026-03-10T10:20:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:46 vm04 bash[20742]: cluster 2026-03-10T10:20:46.458827+0000 mon.a (mon.0) 2396 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:46 vm07 bash[23367]: audit 2026-03-10T10:20:45.823421+0000 mon.a (mon.0) 2392 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "StatRemovePP_vm04-59259-63", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-StatRemovePP_vm04-59259-63"}]': finished
2026-03-10T10:20:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:46 vm07 bash[23367]: audit 2026-03-10T10:20:45.823544+0000 mon.a (mon.0) 2393 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:20:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:46 vm07 bash[23367]: cluster 2026-03-10T10:20:45.827575+0000 mon.a (mon.0) 2394 : cluster [DBG] osdmap e324: 8 total, 8 up, 8 in
2026-03-10T10:20:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:46 vm07 bash[23367]: audit 2026-03-10T10:20:45.843354+0000 mon.a (mon.0) 2395 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]: dispatch
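The paired `osd tier remove-overlay` / `osd tier remove` dispatches above are the cache-tier teardown path: the overlay is detached from the base pool first, and only then is the cache pool removed as a tier. A sketch of the same two steps through the librados Python binding, reusing the pool names from the log (a real run generates fresh ones per test case):

```python
import json
import rados

# Sketch of the teardown ordering seen above: clear the overlay before
# removing the tier. Pool names are copied from the log for illustration.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder conffile
cluster.connect()
for cmd in (
    {'prefix': 'osd tier remove-overlay',
     'pool': 'test-rados-api-vm04-59491-6'},
    {'prefix': 'osd tier remove',
     'pool': 'test-rados-api-vm04-59491-6',
     'tierpool': 'test-rados-api-vm04-59491-47'},
):
    ret, _, outs = cluster.mon_command(json.dumps(cmd), b'')
    if ret != 0:
        raise RuntimeError(outs)
cluster.shutdown()
```

The ordering matters: the mon refuses to remove a tier that is still set as the overlay, so the overlay has to be cleared first, which is the sequence the audit log records here.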
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]: dispatch 2026-03-10T10:20:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:46 vm07 bash[23367]: cluster 2026-03-10T10:20:46.458827+0000 mon.a (mon.0) 2396 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:46 vm07 bash[23367]: cluster 2026-03-10T10:20:46.458827+0000 mon.a (mon.0) 2396 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:47 vm04 bash[28289]: cluster 2026-03-10T10:20:46.436527+0000 mgr.y (mgr.24422) 286 : cluster [DBG] pgmap v445: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:47 vm04 bash[28289]: cluster 2026-03-10T10:20:46.436527+0000 mgr.y (mgr.24422) 286 : cluster [DBG] pgmap v445: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:47 vm04 bash[28289]: audit 2026-03-10T10:20:46.834630+0000 mon.a (mon.0) 2397 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]': finished 2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:47 vm04 bash[28289]: audit 2026-03-10T10:20:46.834630+0000 mon.a (mon.0) 2397 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]': finished 2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:47 vm04 bash[28289]: cluster 2026-03-10T10:20:46.844442+0000 mon.a (mon.0) 2398 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:47 vm04 bash[28289]: cluster 2026-03-10T10:20:46.844442+0000 mon.a (mon.0) 2398 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in 2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:47 vm04 bash[28289]: audit 2026-03-10T10:20:46.875611+0000 mon.a (mon.0) 2399 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:47 vm04 bash[28289]: audit 2026-03-10T10:20:46.875611+0000 mon.a (mon.0) 2399 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:47 vm04 bash[28289]: audit 2026-03-10T10:20:46.875884+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]: dispatch
2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:47 vm04 bash[20742]: cluster 2026-03-10T10:20:46.436527+0000 mgr.y (mgr.24422) 286 : cluster [DBG] pgmap v445: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:47 vm04 bash[20742]: audit 2026-03-10T10:20:46.834630+0000 mon.a (mon.0) 2397 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]': finished
2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:47 vm04 bash[20742]: cluster 2026-03-10T10:20:46.844442+0000 mon.a (mon.0) 2398 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in
2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:47 vm04 bash[20742]: audit 2026-03-10T10:20:46.875611+0000 mon.a (mon.0) 2399 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:47 vm04 bash[20742]: audit 2026-03-10T10:20:46.875884+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]: dispatch
2026-03-10T10:20:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:47 vm07 bash[23367]: cluster 2026-03-10T10:20:46.436527+0000 mgr.y (mgr.24422) 286 : cluster [DBG] pgmap v445: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:20:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:47 vm07 bash[23367]: audit 2026-03-10T10:20:46.834630+0000 mon.a (mon.0) 2397 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]': finished
2026-03-10T10:20:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:47 vm07 bash[23367]: cluster 2026-03-10T10:20:46.844442+0000 mon.a (mon.0) 2398 : cluster [DBG] osdmap e325: 8 total, 8 up, 8 in
2026-03-10T10:20:48.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:47 vm07 bash[23367]: audit 2026-03-10T10:20:46.875611+0000 mon.a (mon.0) 2399 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:20:48.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:47 vm07 bash[23367]: audit 2026-03-10T10:20:46.875884+0000 mon.a (mon.0) 2400 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-47"}]: dispatch
2026-03-10T10:20:48.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:20:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:48 vm04 bash[28289]: cluster 2026-03-10T10:20:47.841456+0000 mon.a (mon.0) 2401 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:48 vm04 bash[28289]: audit 2026-03-10T10:20:47.886770+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:48 vm04 bash[28289]: audit 2026-03-10T10:20:47.894873+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:48 vm04 bash[28289]: audit 2026-03-10T10:20:48.862101+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]': finished
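
The audit sequence up to this point mirrors a cache-tier teardown by the rados API test workunit: the overlay is detached from the base pool first, then the tier pool is unlinked. Reconstructed from the audited monitor commands above, the equivalent CLI sequence would be roughly:

    ceph osd tier remove-overlay test-rados-api-vm04-59491-6
    ceph osd tier remove test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-47
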
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:48 vm04 bash[28289]: audit 2026-03-10T10:20:48.872971+0000 mon.c (mon.2) 438 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:48 vm04 bash[28289]: cluster 2026-03-10T10:20:48.875777+0000 mon.a (mon.0) 2404 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:48 vm04 bash[28289]: audit 2026-03-10T10:20:48.928426+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:48 vm04 bash[20742]: cluster 2026-03-10T10:20:47.841456+0000 mon.a (mon.0) 2401 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:48 vm04 bash[20742]: audit 2026-03-10T10:20:47.886770+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:48 vm04 bash[20742]: audit 2026-03-10T10:20:47.894873+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:48 vm04 bash[20742]: audit 2026-03-10T10:20:48.862101+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]': finished
2026-03-10T10:20:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:48 vm04 bash[20742]: audit 2026-03-10T10:20:48.872971+0000 mon.c (mon.2) 438 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:48 vm04 bash[20742]: cluster 2026-03-10T10:20:48.875777+0000 mon.a (mon.0) 2404 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in
2026-03-10T10:20:49.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:48 vm04 bash[20742]: audit 2026-03-10T10:20:48.928426+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:48 vm07 bash[23367]: cluster 2026-03-10T10:20:47.841456+0000 mon.a (mon.0) 2401 : cluster [DBG] osdmap e326: 8 total, 8 up, 8 in
2026-03-10T10:20:49.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:48 vm07 bash[23367]: audit 2026-03-10T10:20:47.886770+0000 mon.c (mon.2) 437 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:48 vm07 bash[23367]: audit 2026-03-10T10:20:47.894873+0000 mon.a (mon.0) 2402 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:48 vm07 bash[23367]: audit 2026-03-10T10:20:48.862101+0000 mon.a (mon.0) 2403 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-StatRemovePP_vm04-59259-63"}]': finished
2026-03-10T10:20:49.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:48 vm07 bash[23367]: audit 2026-03-10T10:20:48.872971+0000 mon.c (mon.2) 438 : audit [INF] from='client.? 192.168.123.104:0/3074623432' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch
2026-03-10T10:20:49.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:48 vm07 bash[23367]: cluster 2026-03-10T10:20:48.875777+0000 mon.a (mon.0) 2404 : cluster [DBG] osdmap e327: 8 total, 8 up, 8 in
2026-03-10T10:20:49.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:48 vm07 bash[23367]: audit 2026-03-10T10:20:48.928426+0000 mon.a (mon.0) 2405 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]: dispatch 2026-03-10T10:20:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:48.430363+0000 mgr.y (mgr.24422) 287 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:48.430363+0000 mgr.y (mgr.24422) 287 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: cluster 2026-03-10T10:20:48.437039+0000 mgr.y (mgr.24422) 288 : cluster [DBG] pgmap v448: 260 pgs: 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: cluster 2026-03-10T10:20:48.437039+0000 mgr.y (mgr.24422) 288 : cluster [DBG] pgmap v448: 260 pgs: 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:48.938812+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:48.938812+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-49","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.865257+0000 mon.a (mon.0) 2407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]': finished 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.865257+0000 mon.a (mon.0) 2407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]': finished 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.865370+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.865370+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-49","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: cluster 2026-03-10T10:20:49.869588+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: cluster 2026-03-10T10:20:49.869588+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.902063+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.902063+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.907102+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.907102+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.908595+0000 mon.c (mon.2) 440 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.908595+0000 mon.c (mon.2) 440 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.916591+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.916591+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.916716+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.916716+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.918898+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.918898+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.921506+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:49 vm07 bash[23367]: audit 2026-03-10T10:20:49.921506+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:48.430363+0000 mgr.y (mgr.24422) 287 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:48.430363+0000 mgr.y (mgr.24422) 287 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: cluster 2026-03-10T10:20:48.437039+0000 mgr.y (mgr.24422) 288 : cluster [DBG] pgmap v448: 260 pgs: 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: cluster 2026-03-10T10:20:48.437039+0000 mgr.y (mgr.24422) 288 : cluster [DBG] pgmap v448: 260 pgs: 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:48.938812+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? 
2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:48.938812+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-49","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:49.865257+0000 mon.a (mon.0) 2407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]': finished
2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:49.865370+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-49","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: cluster 2026-03-10T10:20:49.869588+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in
2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:49.902063+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:49.907102+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:49.908595+0000 mon.c (mon.2) 440 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:49.916591+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:20:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:49.916716+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:49.918898+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:49 vm04 bash[28289]: audit 2026-03-10T10:20:49.921506+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:48.430363+0000 mgr.y (mgr.24422) 287 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: cluster 2026-03-10T10:20:48.437039+0000 mgr.y (mgr.24422) 288 : cluster [DBG] pgmap v448: 260 pgs: 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:48.938812+0000 mon.a (mon.0) 2406 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-49","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.865257+0000 mon.a (mon.0) 2407 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"StatRemovePP_vm04-59259-63"}]': finished
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.865370+0000 mon.a (mon.0) 2408 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-49","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: cluster 2026-03-10T10:20:49.869588+0000 mon.a (mon.0) 2409 : cluster [DBG] osdmap e328: 8 total, 8 up, 8 in
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.902063+0000 mon.c (mon.2) 439 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.907102+0000 mon.a (mon.0) 2410 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.908595+0000 mon.c (mon.2) 440 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.916591+0000 mon.a (mon.0) 2411 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49", "force_nonempty": "--force-nonempty" }]: dispatch
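
Next the tier pool is attached to the base pool even though it already contains objects (hence the force_nonempty argument in the audit record), and the ExecuteClassPP test creates its k=2/m=1 erasure-coded pool with 8 PGs. In CLI terms:

    ceph osd tier add test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-49 --force-nonempty
    ceph osd pool create ExecuteClassPP_vm04-59259-64 8 8 erasure testprofile-ExecuteClassPP_vm04-59259-64
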
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.916716+0000 mon.a (mon.0) 2412 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.918898+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.918898+0000 mon.c (mon.2) 441 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.921506+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:50.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:49 vm04 bash[20742]: audit 2026-03-10T10:20:49.921506+0000 mon.a (mon.0) 2413 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: cluster 2026-03-10T10:20:50.437338+0000 mgr.y (mgr.24422) 289 : cluster [DBG] pgmap v451: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: cluster 2026-03-10T10:20:50.437338+0000 mgr.y (mgr.24422) 289 : cluster [DBG] pgmap v451: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: audit 2026-03-10T10:20:50.867683+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: audit 2026-03-10T10:20:50.867683+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: audit 2026-03-10T10:20:50.867787+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: audit 2026-03-10T10:20:50.867787+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: cluster 2026-03-10T10:20:50.870804+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: cluster 2026-03-10T10:20:50.870804+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: audit 2026-03-10T10:20:50.871245+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: audit 2026-03-10T10:20:50.871245+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: audit 2026-03-10T10:20:50.872694+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: audit 2026-03-10T10:20:50.872694+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: audit 2026-03-10T10:20:50.872915+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:51 vm04 bash[28289]: audit 2026-03-10T10:20:50.872915+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: cluster 2026-03-10T10:20:50.437338+0000 mgr.y (mgr.24422) 289 : cluster [DBG] pgmap v451: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: cluster 2026-03-10T10:20:50.437338+0000 mgr.y (mgr.24422) 289 : cluster [DBG] pgmap v451: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: audit 2026-03-10T10:20:50.867683+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: audit 2026-03-10T10:20:50.867683+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: audit 2026-03-10T10:20:50.867787+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: audit 2026-03-10T10:20:50.867787+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: cluster 2026-03-10T10:20:50.870804+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: cluster 2026-03-10T10:20:50.870804+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: audit 2026-03-10T10:20:50.871245+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: audit 2026-03-10T10:20:50.871245+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: audit 2026-03-10T10:20:50.872694+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: audit 2026-03-10T10:20:50.872694+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: audit 2026-03-10T10:20:50.872915+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:51 vm04 bash[20742]: audit 2026-03-10T10:20:50.872915+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: cluster 2026-03-10T10:20:50.437338+0000 mgr.y (mgr.24422) 289 : cluster [DBG] pgmap v451: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: cluster 2026-03-10T10:20:50.437338+0000 mgr.y (mgr.24422) 289 : cluster [DBG] pgmap v451: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: audit 2026-03-10T10:20:50.867683+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: audit 2026-03-10T10:20:50.867683+0000 mon.a (mon.0) 2414 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:20:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: audit 2026-03-10T10:20:50.867787+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? 
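
The cache tier is then completed: the new pool becomes the overlay for the base pool and is switched to readproxy mode. Because the test configures no hit_set parameters on the cache pool, the CACHE_POOL_NO_HIT_SET health warning that follows is expected. The equivalent CLI sequence, with an illustrative hit_set setting that is not part of this run, is roughly:

    ceph osd tier set-overlay test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-49
    ceph osd tier cache-mode test-rados-api-vm04-59491-49 readproxy
    # not issued by the test; shown only to indicate how the warning is normally avoided:
    # ceph osd pool set test-rados-api-vm04-59491-49 hit_set_type bloom
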
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: audit 2026-03-10T10:20:50.867787+0000 mon.a (mon.0) 2415 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-ExecuteClassPP_vm04-59259-64", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:20:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: cluster 2026-03-10T10:20:50.870804+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T10:20:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: cluster 2026-03-10T10:20:50.870804+0000 mon.a (mon.0) 2416 : cluster [DBG] osdmap e329: 8 total, 8 up, 8 in 2026-03-10T10:20:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: audit 2026-03-10T10:20:50.871245+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:20:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: audit 2026-03-10T10:20:50.871245+0000 mon.a (mon.0) 2417 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:20:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: audit 2026-03-10T10:20:50.872694+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: audit 2026-03-10T10:20:50.872694+0000 mon.c (mon.2) 442 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: audit 2026-03-10T10:20:50.872915+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:51 vm07 bash[23367]: audit 2026-03-10T10:20:50.872915+0000 mon.a (mon.0) 2418 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: audit 2026-03-10T10:20:51.871638+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: audit 2026-03-10T10:20:51.871638+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: cluster 2026-03-10T10:20:51.879450+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: cluster 2026-03-10T10:20:51.879450+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: audit 2026-03-10T10:20:51.880385+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]: dispatch 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: audit 2026-03-10T10:20:51.880385+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]: dispatch 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: cluster 2026-03-10T10:20:52.871912+0000 mon.a (mon.0) 2422 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: cluster 2026-03-10T10:20:52.871912+0000 mon.a (mon.0) 2422 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: audit 2026-03-10T10:20:52.874860+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: audit 2026-03-10T10:20:52.874860+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: audit 2026-03-10T10:20:52.875006+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: audit 2026-03-10T10:20:52.875006+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: cluster 2026-03-10T10:20:52.880246+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:52 vm04 bash[28289]: cluster 2026-03-10T10:20:52.880246+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: audit 2026-03-10T10:20:51.871638+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: audit 2026-03-10T10:20:51.871638+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: cluster 2026-03-10T10:20:51.879450+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: cluster 2026-03-10T10:20:51.879450+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: audit 2026-03-10T10:20:51.880385+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]: dispatch 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: audit 2026-03-10T10:20:51.880385+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]: dispatch 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: cluster 2026-03-10T10:20:52.871912+0000 mon.a (mon.0) 2422 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: cluster 2026-03-10T10:20:52.871912+0000 mon.a (mon.0) 2422 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: audit 2026-03-10T10:20:52.874860+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: audit 2026-03-10T10:20:52.874860+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: audit 2026-03-10T10:20:52.875006+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: audit 2026-03-10T10:20:52.875006+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]': finished 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: cluster 2026-03-10T10:20:52.880246+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T10:20:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:52 vm04 bash[20742]: cluster 2026-03-10T10:20:52.880246+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T10:20:53.204 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:20:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:20:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:20:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: audit 2026-03-10T10:20:51.871638+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]': finished 2026-03-10T10:20:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: audit 2026-03-10T10:20:51.871638+0000 mon.a (mon.0) 2419 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-49"}]': finished 2026-03-10T10:20:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: cluster 2026-03-10T10:20:51.879450+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T10:20:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: cluster 2026-03-10T10:20:51.879450+0000 mon.a (mon.0) 2420 : cluster [DBG] osdmap e330: 8 total, 8 up, 8 in 2026-03-10T10:20:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: audit 2026-03-10T10:20:51.880385+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]: dispatch 2026-03-10T10:20:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: audit 2026-03-10T10:20:51.880385+0000 mon.a (mon.0) 2421 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]: dispatch 2026-03-10T10:20:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: cluster 2026-03-10T10:20:52.871912+0000 mon.a (mon.0) 2422 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: cluster 2026-03-10T10:20:52.871912+0000 mon.a (mon.0) 2422 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:20:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: audit 2026-03-10T10:20:52.874860+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]': finished 2026-03-10T10:20:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: audit 2026-03-10T10:20:52.874860+0000 mon.a (mon.0) 2423 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "ExecuteClassPP_vm04-59259-64", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-ExecuteClassPP_vm04-59259-64"}]': finished 2026-03-10T10:20:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: audit 2026-03-10T10:20:52.875006+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]': finished 2026-03-10T10:20:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: audit 2026-03-10T10:20:52.875006+0000 mon.a (mon.0) 2424 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-49", "mode": "readproxy"}]': finished 2026-03-10T10:20:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: cluster 2026-03-10T10:20:52.880246+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T10:20:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:52 vm07 bash[23367]: cluster 2026-03-10T10:20:52.880246+0000 mon.a (mon.0) 2425 : cluster [DBG] osdmap e331: 8 total, 8 up, 8 in 2026-03-10T10:20:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:53 vm04 bash[28289]: cluster 2026-03-10T10:20:52.437662+0000 mgr.y (mgr.24422) 290 : cluster [DBG] pgmap v454: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:53 vm04 bash[28289]: cluster 2026-03-10T10:20:52.437662+0000 mgr.y (mgr.24422) 290 : cluster [DBG] pgmap v454: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:53 vm04 bash[20742]: cluster 2026-03-10T10:20:52.437662+0000 mgr.y (mgr.24422) 290 : cluster [DBG] pgmap v454: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:53 vm04 bash[20742]: cluster 2026-03-10T10:20:52.437662+0000 mgr.y (mgr.24422) 290 : cluster [DBG] pgmap v454: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:53 vm07 bash[23367]: cluster 2026-03-10T10:20:52.437662+0000 mgr.y (mgr.24422) 290 : cluster [DBG] pgmap v454: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:53 vm07 bash[23367]: cluster 2026-03-10T10:20:52.437662+0000 mgr.y (mgr.24422) 290 : cluster [DBG] pgmap v454: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 682 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:20:55.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:54 vm07 bash[23367]: cluster 2026-03-10T10:20:53.916436+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T10:20:55.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:54 vm07 bash[23367]: cluster 2026-03-10T10:20:53.916436+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T10:20:55.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:54 vm04 bash[28289]: cluster 2026-03-10T10:20:53.916436+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T10:20:55.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:54 vm04 bash[28289]: cluster 2026-03-10T10:20:53.916436+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T10:20:55.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:54 vm04 bash[20742]: cluster 2026-03-10T10:20:53.916436+0000 mon.a (mon.0) 2426 : cluster [DBG] osdmap e332: 8 total, 8 up, 8 in 2026-03-10T10:20:55.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:54 vm04 bash[20742]: cluster 2026-03-10T10:20:53.916436+0000 mon.a (mon.0) 
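The audit records above come from the rados_api_tests workunit exercising cache tiering on the test-rados-api-* pools. As a rough sketch, the equivalent client.admin CLI sequence would look like the following; the pool names are taken from the log, and the trailing hit_set settings are not something this run issues, only an illustration of what would clear the CACHE_POOL_NO_HIT_SET warning:

    # tier commands matching the audited mon operations, in log order
    ceph osd tier set-overlay test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-49
    ceph osd tier cache-mode test-rados-api-vm04-59491-49 readproxy
    # CACHE_POOL_NO_HIT_SET fires while the cache pool has no HitSet
    # configured; illustrative values that would clear it:
    ceph osd pool set test-rados-api-vm04-59491-49 hit_set_type bloom
    ceph osd pool set test-rados-api-vm04-59491-49 hit_set_count 8
    ceph osd pool set test-rados-api-vm04-59491-49 hit_set_period 60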
2026-03-10T10:20:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:56 vm04 bash[28289]: cluster 2026-03-10T10:20:54.438074+0000 mgr.y (mgr.24422) 291 : cluster [DBG] pgmap v457: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:20:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:56 vm04 bash[28289]: cluster 2026-03-10T10:20:54.925903+0000 mon.a (mon.0) 2427 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:56 vm04 bash[28289]: cluster 2026-03-10T10:20:54.999024+0000 mon.a (mon.0) 2428 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in
2026-03-10T10:20:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:56 vm04 bash[28289]: audit 2026-03-10T10:20:54.999976+0000 mon.c (mon.2) 443 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:56 vm04 bash[28289]: audit 2026-03-10T10:20:55.000397+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:56 vm04 bash[20742]: cluster 2026-03-10T10:20:54.438074+0000 mgr.y (mgr.24422) 291 : cluster [DBG] pgmap v457: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:20:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:56 vm04 bash[20742]: cluster 2026-03-10T10:20:54.925903+0000 mon.a (mon.0) 2427 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:56 vm04 bash[20742]: cluster 2026-03-10T10:20:54.999024+0000 mon.a (mon.0) 2428 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in
2026-03-10T10:20:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:56 vm04 bash[20742]: audit 2026-03-10T10:20:54.999976+0000 mon.c (mon.2) 443 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:56 vm04 bash[20742]: audit 2026-03-10T10:20:55.000397+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:56 vm07 bash[23367]: cluster 2026-03-10T10:20:54.438074+0000 mgr.y (mgr.24422) 291 : cluster [DBG] pgmap v457: 300 pgs: 8 creating+peering, 292 active+clean; 8.3 MiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:20:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:56 vm07 bash[23367]: cluster 2026-03-10T10:20:54.925903+0000 mon.a (mon.0) 2427 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:20:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:56 vm07 bash[23367]: cluster 2026-03-10T10:20:54.999024+0000 mon.a (mon.0) 2428 : cluster [DBG] osdmap e333: 8 total, 8 up, 8 in
2026-03-10T10:20:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:56 vm07 bash[23367]: audit 2026-03-10T10:20:54.999976+0000 mon.c (mon.2) 443 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:56 vm07 bash[23367]: audit 2026-03-10T10:20:55.000397+0000 mon.a (mon.0) 2429 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:57 vm04 bash[28289]: audit 2026-03-10T10:20:56.069122+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]': finished
2026-03-10T10:20:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:57 vm04 bash[28289]: audit 2026-03-10T10:20:56.072288+0000 mon.c (mon.2) 444 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:57 vm04 bash[28289]: cluster 2026-03-10T10:20:56.073438+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in
2026-03-10T10:20:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:57 vm04 bash[28289]: audit 2026-03-10T10:20:56.074014+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:57.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:57 vm04 bash[20742]: audit 2026-03-10T10:20:56.069122+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]': finished
2026-03-10T10:20:57.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:57 vm04 bash[20742]: audit 2026-03-10T10:20:56.072288+0000 mon.c (mon.2) 444 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:57.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:57 vm04 bash[20742]: cluster 2026-03-10T10:20:56.073438+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in
2026-03-10T10:20:57.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:57 vm04 bash[20742]: audit 2026-03-10T10:20:56.074014+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:57.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:57 vm07 bash[23367]: audit 2026-03-10T10:20:56.069122+0000 mon.a (mon.0) 2430 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-ExecuteClassPP_vm04-59259-64"}]': finished
2026-03-10T10:20:57.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:57 vm07 bash[23367]: audit 2026-03-10T10:20:56.072288+0000 mon.c (mon.2) 444 : audit [INF] from='client.? 192.168.123.104:0/599412235' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch
2026-03-10T10:20:57.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:57 vm07 bash[23367]: cluster 2026-03-10T10:20:56.073438+0000 mon.a (mon.0) 2431 : cluster [DBG] osdmap e334: 8 total, 8 up, 8 in
2026-03-10T10:20:57.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:57 vm07 bash[23367]: audit 2026-03-10T10:20:56.074014+0000 mon.a (mon.0) 2432 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]: dispatch
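The ExecuteClassPP case then tears down its erasure-coded fixtures. Creating an erasure pool without naming a rule auto-generates a CRUSH rule named after the pool, which is why the cleanup audited above removes both the test profile and a rule carrying the pool's name. As a hand-run sketch (names taken from the log):

    # drop the per-test erasure-code profile
    ceph osd erasure-code-profile rm testprofile-ExecuteClassPP_vm04-59259-64
    # drop the CRUSH rule that the erasure pool creation generated
    ceph osd crush rule rm ExecuteClassPP_vm04-59259-64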
2026-03-10T10:20:58.440 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: cluster 2026-03-10T10:20:56.438364+0000 mgr.y (mgr.24422) 292 : cluster [DBG] pgmap v460: 292 pgs: 292 active+clean; 8.3 MiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:20:58.440 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: audit 2026-03-10T10:20:57.071721+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]': finished
2026-03-10T10:20:58.440 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: cluster 2026-03-10T10:20:57.075139+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in
2026-03-10T10:20:58.440 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: audit 2026-03-10T10:20:57.104610+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.440 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: audit 2026-03-10T10:20:57.107849+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.440 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: audit 2026-03-10T10:20:57.108447+0000 mon.c (mon.2) 446 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.440 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: audit 2026-03-10T10:20:57.109462+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.440 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: audit 2026-03-10T10:20:57.109959+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm04-59259-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:58.440 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: audit 2026-03-10T10:20:57.110984+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm04-59259-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:58.441 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: audit 2026-03-10T10:20:57.901966+0000 mon.a (mon.0) 2438 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:20:58.441 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:58 vm07 bash[23367]: audit 2026-03-10T10:20:57.903190+0000 mon.a (mon.0) 2439 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: cluster 2026-03-10T10:20:56.438364+0000 mgr.y (mgr.24422) 292 : cluster [DBG] pgmap v460: 292 pgs: 292 active+clean; 8.3 MiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:20:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: audit 2026-03-10T10:20:57.071721+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]': finished
2026-03-10T10:20:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: cluster 2026-03-10T10:20:57.075139+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in
2026-03-10T10:20:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: audit 2026-03-10T10:20:57.104610+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch
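Interleaved with the workunit's commands, mgr.y polls the mon on its own schedule; the 'osd blocklist ls' audit entry above is that poll, equivalent to running the query by hand (the --format flag mirrors the audited {"format": "json"}):

    # same query the mgr issues periodically, run from any admin host
    ceph osd blocklist ls --format json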
2026-03-10T10:20:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: audit 2026-03-10T10:20:57.107849+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: audit 2026-03-10T10:20:57.108447+0000 mon.c (mon.2) 446 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: audit 2026-03-10T10:20:57.109462+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: audit 2026-03-10T10:20:57.109959+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm04-59259-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: audit 2026-03-10T10:20:57.110984+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm04-59259-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: audit 2026-03-10T10:20:57.901966+0000 mon.a (mon.0) 2438 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:58 vm04 bash[28289]: audit 2026-03-10T10:20:57.903190+0000 mon.a (mon.0) 2439 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: cluster 2026-03-10T10:20:56.438364+0000 mgr.y (mgr.24422) 292 : cluster [DBG] pgmap v460: 292 pgs: 292 active+clean; 8.3 MiB data, 701 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: audit 2026-03-10T10:20:57.071721+0000 mon.a (mon.0) 2433 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"ExecuteClassPP_vm04-59259-64"}]': finished
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: cluster 2026-03-10T10:20:57.075139+0000 mon.a (mon.0) 2434 : cluster [DBG] osdmap e335: 8 total, 8 up, 8 in
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: audit 2026-03-10T10:20:57.104610+0000 mon.c (mon.2) 445 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: audit 2026-03-10T10:20:57.107849+0000 mon.a (mon.0) 2435 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: audit 2026-03-10T10:20:57.108447+0000 mon.c (mon.2) 446 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: audit 2026-03-10T10:20:57.109462+0000 mon.a (mon.0) 2436 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: audit 2026-03-10T10:20:57.109959+0000 mon.c (mon.2) 447 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm04-59259-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: audit 2026-03-10T10:20:57.110984+0000 mon.a (mon.0) 2437 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm04-59259-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: audit 2026-03-10T10:20:57.901966+0000 mon.a (mon.0) 2438 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:20:58.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:58 vm04 bash[20742]: audit 2026-03-10T10:20:57.903190+0000 mon.a (mon.0) 2439 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:20:58.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:20:58 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:20:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:59 vm04 bash[28289]: audit 2026-03-10T10:20:58.103085+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm04-59259-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:59 vm04 bash[28289]: cluster 2026-03-10T10:20:58.107566+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in
2026-03-10T10:20:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:59 vm04 bash[28289]: audit 2026-03-10T10:20:58.113291+0000 mon.c (mon.2) 448 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]: dispatch
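The OmapPP case defines its erasure-code profile with k=2, m=1 and an osd-level failure domain: each object is split into two data chunks plus one coding chunk, so placement needs only three OSDs and tolerates a single OSD failure. A sketch of setting and inspecting the same profile by hand (name taken from the log):

    # recreate the profile the test defines in the audit entries above
    ceph osd erasure-code-profile set testprofile-OmapPP_vm04-59259-65 \
        k=2 m=1 crush-failure-domain=osd
    # verify what the mon stored for it
    ceph osd erasure-code-profile get testprofile-OmapPP_vm04-59259-65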
2026-03-10T10:20:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:20:59 vm04 bash[28289]: audit 2026-03-10T10:20:58.114355+0000 mon.a (mon.0) 2442 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:59 vm04 bash[20742]: audit 2026-03-10T10:20:58.103085+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm04-59259-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:59.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:59 vm04 bash[20742]: cluster 2026-03-10T10:20:58.107566+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in
2026-03-10T10:20:59.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:59 vm04 bash[20742]: audit 2026-03-10T10:20:58.113291+0000 mon.c (mon.2) 448 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:59.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:20:59 vm04 bash[20742]: audit 2026-03-10T10:20:58.114355+0000 mon.a (mon.0) 2442 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:59 vm07 bash[23367]: audit 2026-03-10T10:20:58.103085+0000 mon.a (mon.0) 2440 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-OmapPP_vm04-59259-65", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:20:59.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:59 vm07 bash[23367]: cluster 2026-03-10T10:20:58.107566+0000 mon.a (mon.0) 2441 : cluster [DBG] osdmap e336: 8 total, 8 up, 8 in
2026-03-10T10:20:59.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:59 vm07 bash[23367]: audit 2026-03-10T10:20:58.113291+0000 mon.c (mon.2) 448 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:20:59.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:20:59 vm07 bash[23367]: audit 2026-03-10T10:20:58.114355+0000 mon.a (mon.0) 2442 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]: dispatch
2026-03-10T10:21:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:00 vm04 bash[28289]: cluster 2026-03-10T10:20:58.439037+0000 mgr.y (mgr.24422) 293 : cluster [DBG] pgmap v463: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:21:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:00 vm04 bash[28289]: audit 2026-03-10T10:20:58.440890+0000 mgr.y (mgr.24422) 294 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:00 vm04 bash[28289]: cluster 2026-03-10T10:20:59.186146+0000 mon.a (mon.0) 2443 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in
2026-03-10T10:21:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:00 vm04 bash[20742]: cluster 2026-03-10T10:20:58.439037+0000 mgr.y (mgr.24422) 293 : cluster [DBG] pgmap v463: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:21:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:00 vm04 bash[20742]: audit 2026-03-10T10:20:58.440890+0000 mgr.y (mgr.24422) 294 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:00 vm04 bash[20742]: cluster 2026-03-10T10:20:59.186146+0000 mon.a (mon.0) 2443 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in
2026-03-10T10:21:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:00 vm07 bash[23367]: cluster 2026-03-10T10:20:58.439037+0000 mgr.y (mgr.24422) 293 : cluster [DBG] pgmap v463: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail
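The pool create audited above maps to the positional CLI form, with pg_num and pgp_num both 8 and the erasure-code profile selecting the pool type. The POOL_APP_NOT_ENABLED updates recur because the short-lived test pools are never tagged with an application; the tagging command at the end is shown only as an illustration of what would silence that warning for a long-lived pool, not something this run issues:

    # positional form of the audited mon command: <pool> <pg_num> <pgp_num> erasure <profile>
    ceph osd pool create OmapPP_vm04-59259-65 8 8 erasure testprofile-OmapPP_vm04-59259-65
    # illustrative only; the test deletes its pools before tagging matters
    ceph osd pool application enable OmapPP_vm04-59259-65 rados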
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:00 vm07 bash[23367]: cluster 2026-03-10T10:20:58.439037+0000 mgr.y (mgr.24422) 293 : cluster [DBG] pgmap v463: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:21:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:00 vm07 bash[23367]: audit 2026-03-10T10:20:58.440890+0000 mgr.y (mgr.24422) 294 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:00 vm07 bash[23367]: audit 2026-03-10T10:20:58.440890+0000 mgr.y (mgr.24422) 294 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:00 vm07 bash[23367]: cluster 2026-03-10T10:20:59.186146+0000 mon.a (mon.0) 2443 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T10:21:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:00 vm07 bash[23367]: cluster 2026-03-10T10:20:59.186146+0000 mon.a (mon.0) 2443 : cluster [DBG] osdmap e337: 8 total, 8 up, 8 in 2026-03-10T10:21:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:01 vm04 bash[28289]: audit 2026-03-10T10:21:00.181522+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:01 vm04 bash[28289]: audit 2026-03-10T10:21:00.181522+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:01 vm04 bash[28289]: cluster 2026-03-10T10:21:00.190261+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T10:21:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:01 vm04 bash[28289]: cluster 2026-03-10T10:21:00.190261+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T10:21:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:01 vm04 bash[20742]: audit 2026-03-10T10:21:00.181522+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:01 vm04 bash[20742]: audit 2026-03-10T10:21:00.181522+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:01 vm04 bash[20742]: cluster 2026-03-10T10:21:00.190261+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T10:21:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:01 vm04 bash[20742]: cluster 2026-03-10T10:21:00.190261+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T10:21:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:01 vm07 bash[23367]: audit 2026-03-10T10:21:00.181522+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:01 vm07 bash[23367]: audit 2026-03-10T10:21:00.181522+0000 mon.a (mon.0) 2444 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "OmapPP_vm04-59259-65", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:01 vm07 bash[23367]: cluster 2026-03-10T10:21:00.190261+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T10:21:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:01 vm07 bash[23367]: cluster 2026-03-10T10:21:00.190261+0000 mon.a (mon.0) 2445 : cluster [DBG] osdmap e338: 8 total, 8 up, 8 in 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:02 vm04 bash[28289]: cluster 2026-03-10T10:21:00.439440+0000 mgr.y (mgr.24422) 295 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:02 vm04 bash[28289]: cluster 2026-03-10T10:21:00.439440+0000 mgr.y (mgr.24422) 295 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:02 vm04 bash[28289]: cluster 2026-03-10T10:21:01.183151+0000 mon.a (mon.0) 2446 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:02 vm04 bash[28289]: cluster 2026-03-10T10:21:01.183151+0000 mon.a (mon.0) 2446 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:02 vm04 bash[28289]: cluster 2026-03-10T10:21:01.190310+0000 mon.a (mon.0) 2447 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:02 vm04 bash[28289]: cluster 2026-03-10T10:21:01.190310+0000 mon.a (mon.0) 2447 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:02 vm04 bash[20742]: cluster 2026-03-10T10:21:00.439440+0000 mgr.y (mgr.24422) 295 : cluster [DBG] pgmap v466: 300 pgs: 8 
unknown, 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:02 vm04 bash[20742]: cluster 2026-03-10T10:21:00.439440+0000 mgr.y (mgr.24422) 295 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:02 vm04 bash[20742]: cluster 2026-03-10T10:21:01.183151+0000 mon.a (mon.0) 2446 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:02 vm04 bash[20742]: cluster 2026-03-10T10:21:01.183151+0000 mon.a (mon.0) 2446 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:02 vm04 bash[20742]: cluster 2026-03-10T10:21:01.190310+0000 mon.a (mon.0) 2447 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T10:21:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:02 vm04 bash[20742]: cluster 2026-03-10T10:21:01.190310+0000 mon.a (mon.0) 2447 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T10:21:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:02 vm07 bash[23367]: cluster 2026-03-10T10:21:00.439440+0000 mgr.y (mgr.24422) 295 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:02 vm07 bash[23367]: cluster 2026-03-10T10:21:00.439440+0000 mgr.y (mgr.24422) 295 : cluster [DBG] pgmap v466: 300 pgs: 8 unknown, 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:02 vm07 bash[23367]: cluster 2026-03-10T10:21:01.183151+0000 mon.a (mon.0) 2446 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:02 vm07 bash[23367]: cluster 2026-03-10T10:21:01.183151+0000 mon.a (mon.0) 2446 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:02 vm07 bash[23367]: cluster 2026-03-10T10:21:01.190310+0000 mon.a (mon.0) 2447 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T10:21:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:02 vm07 bash[23367]: cluster 2026-03-10T10:21:01.190310+0000 mon.a (mon.0) 2447 : cluster [DBG] osdmap e339: 8 total, 8 up, 8 in 2026-03-10T10:21:03.304 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:21:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:21:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:21:03.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:03 vm04 bash[28289]: cluster 2026-03-10T10:21:02.302121+0000 mon.a (mon.0) 2448 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T10:21:03.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:03 vm04 bash[28289]: cluster 2026-03-10T10:21:02.302121+0000 mon.a (mon.0) 2448 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T10:21:03.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:03 
vm04 bash[28289]: audit 2026-03-10T10:21:02.303046+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:03 vm04 bash[28289]: audit 2026-03-10T10:21:02.303046+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.711 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:03 vm04 bash[28289]: audit 2026-03-10T10:21:02.303429+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:03 vm04 bash[28289]: audit 2026-03-10T10:21:02.303429+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:03 vm04 bash[28289]: audit 2026-03-10T10:21:03.187578+0000 mon.a (mon.0) 2450 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:03 vm04 bash[28289]: audit 2026-03-10T10:21:03.187578+0000 mon.a (mon.0) 2450 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:03 vm04 bash[20742]: cluster 2026-03-10T10:21:02.302121+0000 mon.a (mon.0) 2448 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:03 vm04 bash[20742]: cluster 2026-03-10T10:21:02.302121+0000 mon.a (mon.0) 2448 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:03 vm04 bash[20742]: audit 2026-03-10T10:21:02.303046+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:03 vm04 bash[20742]: audit 2026-03-10T10:21:02.303046+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:03 vm04 bash[20742]: audit 2026-03-10T10:21:02.303429+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:03 vm04 bash[20742]: audit 2026-03-10T10:21:02.303429+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:03 vm04 bash[20742]: audit 2026-03-10T10:21:03.187578+0000 mon.a (mon.0) 2450 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:03.712 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:03 vm04 bash[20742]: audit 2026-03-10T10:21:03.187578+0000 mon.a (mon.0) 2450 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:03 vm07 bash[23367]: cluster 2026-03-10T10:21:02.302121+0000 mon.a (mon.0) 2448 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T10:21:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:03 vm07 bash[23367]: cluster 2026-03-10T10:21:02.302121+0000 mon.a (mon.0) 2448 : cluster [DBG] osdmap e340: 8 total, 8 up, 8 in 2026-03-10T10:21:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:03 vm07 bash[23367]: audit 2026-03-10T10:21:02.303046+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:03 vm07 bash[23367]: audit 2026-03-10T10:21:02.303046+0000 mon.c (mon.2) 449 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:03 vm07 bash[23367]: audit 2026-03-10T10:21:02.303429+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:03 vm07 bash[23367]: audit 2026-03-10T10:21:02.303429+0000 mon.a (mon.0) 2449 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:03 vm07 bash[23367]: audit 2026-03-10T10:21:03.187578+0000 mon.a (mon.0) 2450 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:03 vm07 bash[23367]: audit 2026-03-10T10:21:03.187578+0000 mon.a (mon.0) 2450 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: cluster 2026-03-10T10:21:02.439778+0000 mgr.y (mgr.24422) 296 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: cluster 2026-03-10T10:21:02.439778+0000 mgr.y (mgr.24422) 296 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: audit 2026-03-10T10:21:03.303098+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: audit 2026-03-10T10:21:03.303098+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: audit 2026-03-10T10:21:03.303128+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: audit 2026-03-10T10:21:03.303128+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: audit 2026-03-10T10:21:03.312378+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: audit 2026-03-10T10:21:03.312378+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: cluster 2026-03-10T10:21:03.315219+0000 mon.a (mon.0) 2453 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: cluster 2026-03-10T10:21:03.315219+0000 mon.a (mon.0) 2453 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: audit 2026-03-10T10:21:03.315757+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: audit 2026-03-10T10:21:03.315757+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: audit 2026-03-10T10:21:03.315835+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:04 vm04 bash[28289]: audit 2026-03-10T10:21:03.315835+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: cluster 2026-03-10T10:21:02.439778+0000 mgr.y (mgr.24422) 296 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: cluster 2026-03-10T10:21:02.439778+0000 mgr.y (mgr.24422) 296 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: audit 2026-03-10T10:21:03.303098+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: audit 2026-03-10T10:21:03.303098+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: audit 2026-03-10T10:21:03.303128+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: audit 2026-03-10T10:21:03.303128+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: audit 2026-03-10T10:21:03.312378+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: audit 2026-03-10T10:21:03.312378+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 
192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: cluster 2026-03-10T10:21:03.315219+0000 mon.a (mon.0) 2453 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T10:21:04.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: cluster 2026-03-10T10:21:03.315219+0000 mon.a (mon.0) 2453 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T10:21:04.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: audit 2026-03-10T10:21:03.315757+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:21:04.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: audit 2026-03-10T10:21:03.315757+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:21:04.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: audit 2026-03-10T10:21:03.315835+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:04 vm04 bash[20742]: audit 2026-03-10T10:21:03.315835+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: cluster 2026-03-10T10:21:02.439778+0000 mgr.y (mgr.24422) 296 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: cluster 2026-03-10T10:21:02.439778+0000 mgr.y (mgr.24422) 296 : cluster [DBG] pgmap v469: 292 pgs: 292 active+clean; 8.3 MiB data, 723 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:21:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: audit 2026-03-10T10:21:03.303098+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: audit 2026-03-10T10:21:03.303098+0000 mon.a (mon.0) 2451 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-OmapPP_vm04-59259-65"}]': finished 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: audit 2026-03-10T10:21:03.303128+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: audit 2026-03-10T10:21:03.303128+0000 mon.a (mon.0) 2452 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: audit 2026-03-10T10:21:03.312378+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: audit 2026-03-10T10:21:03.312378+0000 mon.c (mon.2) 450 : audit [INF] from='client.? 192.168.123.104:0/4222627781' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: cluster 2026-03-10T10:21:03.315219+0000 mon.a (mon.0) 2453 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: cluster 2026-03-10T10:21:03.315219+0000 mon.a (mon.0) 2453 : cluster [DBG] osdmap e341: 8 total, 8 up, 8 in 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: audit 2026-03-10T10:21:03.315757+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: audit 2026-03-10T10:21:03.315757+0000 mon.a (mon.0) 2454 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]: dispatch 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: audit 2026-03-10T10:21:03.315835+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:04 vm07 bash[23367]: audit 2026-03-10T10:21:03.315835+0000 mon.a (mon.0) 2455 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]: dispatch 2026-03-10T10:21:05.669 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.RoundTripWriteFullPP2:163:head 2026-03-10T10:21:05.669 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:5d165639:::164:head 2026-03-10T10:21:05.669 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:f43765fc:::165:head 2026-03-10T10:21:05.669 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:b4c720e9:::166:head 2026-03-10T10:21:05.669 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:e694b040:::167:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:afa38db2:::168:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:77ba9f53:::169:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:87495034:::170:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:7c96bf0e:::171:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:dbe346cc:::172:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:e943ec24:::173:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:f97a9c0c:::174:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:6f26e74d:::175:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:4f95e106:::176:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:0e6f2f8f:::177:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:05db05f1:::178:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:38a78d66:::179:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:d095610b:::180:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:a1a9d709:::181:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:1e5d39db:::182:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:f7df4fb9:::183:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:03a7f161:::184:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:ba70721e:::185:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:28e5662d:::186:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:973d52de:::187:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:4303eb1c:::188:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:b990b48e:::189:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:29b8165b:::190:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:3547f197:::191:head 2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: 
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:1abec7b1:::193:head
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:10fdda93:::194:head
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:15817eea:::195:head
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:770bab57:::196:head
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:ed9e13e7:::197:head
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:71471a8f:::198:head
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: checking for 269:10fb1d02:::199:head
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetWrite (8107 ms)
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HitSetTrim
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,0
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: first is 1773138021
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,0
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,0
2026-03-10T10:21:05.670 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,1773138023,1773138024,0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,1773138023,1773138024,0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,1773138023,1773138024,0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,1773138023,1773138024,1773138026,1773138027,0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,1773138023,1773138024,1773138026,1773138027,0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,1773138023,1773138024,1773138026,1773138027,0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,1773138023,1773138024,1773138026,1773138027,1773138029,1773138030,0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,1773138023,1773138024,1773138026,1773138027,1773138029,1773138030,0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138021,1773138023,1773138024,1773138026,1773138027,1773138029,1773138030,0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138024,1773138026,1773138027,1773138029,1773138030,1773138032,1773138033,0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: first now 1773138024, trimmed
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HitSetTrim (20479 ms)
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PromoteOn2ndRead
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: foo0
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: verifying foo0 is eventually promoted
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PromoteOn2ndRead (14244 ms)
2026-03-10T10:21:05.671 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ProxyRead
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: cluster 2026-03-10T10:21:04.303637+0000 mon.a (mon.0) 2456 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: audit 2026-03-10T10:21:04.314530+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]': finished
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: audit 2026-03-10T10:21:04.314576+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]': finished
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: cluster 2026-03-10T10:21:04.322254+0000 mon.a (mon.0) 2459 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: audit 2026-03-10T10:21:04.335359+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: audit 2026-03-10T10:21:04.339902+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: audit 2026-03-10T10:21:04.341790+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: audit 2026-03-10T10:21:04.435917+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: audit 2026-03-10T10:21:04.436435+0000 mon.c (mon.2) 453 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm04-59259-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: audit 2026-03-10T10:21:04.436803+0000 mon.a (mon.0) 2462 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm04-59259-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: cluster 2026-03-10T10:21:04.440224+0000 mgr.y (mgr.24422) 297 : cluster [DBG] pgmap v472: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: audit 2026-03-10T10:21:04.492381+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:05 vm04 bash[28289]: audit 2026-03-10T10:21:04.492871+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]: dispatch
2026-03-10T10:21:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: cluster 2026-03-10T10:21:04.303637+0000 mon.a (mon.0) 2456 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: audit 2026-03-10T10:21:04.314530+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]': finished
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: audit 2026-03-10T10:21:04.314576+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]': finished
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: cluster 2026-03-10T10:21:04.322254+0000 mon.a (mon.0) 2459 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: audit 2026-03-10T10:21:04.335359+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: audit 2026-03-10T10:21:04.339902+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: audit 2026-03-10T10:21:04.341790+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: audit 2026-03-10T10:21:04.435917+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: audit 2026-03-10T10:21:04.436435+0000 mon.c (mon.2) 453 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm04-59259-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: audit 2026-03-10T10:21:04.436803+0000 mon.a (mon.0) 2462 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm04-59259-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: cluster 2026-03-10T10:21:04.440224+0000 mgr.y (mgr.24422) 297 : cluster [DBG] pgmap v472: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: audit 2026-03-10T10:21:04.492381+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:05.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:05 vm04 bash[20742]: audit 2026-03-10T10:21:04.492871+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]: dispatch
2026-03-10T10:21:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: cluster 2026-03-10T10:21:04.303637+0000 mon.a (mon.0) 2456 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:21:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: audit 2026-03-10T10:21:04.314530+0000 mon.a (mon.0) 2457 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]': finished
2026-03-10T10:21:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: audit 2026-03-10T10:21:04.314576+0000 mon.a (mon.0) 2458 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"OmapPP_vm04-59259-65"}]': finished
2026-03-10T10:21:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: cluster 2026-03-10T10:21:04.322254+0000 mon.a (mon.0) 2459 : cluster [DBG] osdmap e342: 8 total, 8 up, 8 in
2026-03-10T10:21:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: audit 2026-03-10T10:21:04.335359+0000 mon.c (mon.2) 451 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: audit 2026-03-10T10:21:04.339902+0000 mon.a (mon.0) 2460 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: audit 2026-03-10T10:21:04.341790+0000 mon.c (mon.2) 452 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: audit 2026-03-10T10:21:04.435917+0000 mon.a (mon.0) 2461 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: audit 2026-03-10T10:21:04.436435+0000 mon.c (mon.2) 453 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm04-59259-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:21:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: audit 2026-03-10T10:21:04.436803+0000 mon.a (mon.0) 2462 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm04-59259-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:21:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: cluster 2026-03-10T10:21:04.440224+0000 mgr.y (mgr.24422) 297 : cluster [DBG] pgmap v472: 292 pgs: 292 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:21:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: audit 2026-03-10T10:21:04.492381+0000 mon.a (mon.0) 2463 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:05 vm07 bash[23367]: audit 2026-03-10T10:21:04.492871+0000 mon.a (mon.0) 2464 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-49"}]: dispatch
2026-03-10T10:21:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:06 vm04 bash[28289]: audit 2026-03-10T10:21:05.660854+0000 mon.a (mon.0) 2465 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm04-59259-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:21:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:06 vm04 bash[28289]: cluster 2026-03-10T10:21:05.668896+0000 mon.a (mon.0) 2466 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in
2026-03-10T10:21:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:06 vm04 bash[28289]: audit 2026-03-10T10:21:05.672276+0000 mon.c (mon.2) 454 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm04-59259-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:06 vm04 bash[28289]: audit 2026-03-10T10:21:05.678478+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm04-59259-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:06 vm04 bash[28289]: cluster 2026-03-10T10:21:06.461057+0000 mon.a (mon.0) 2468 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:21:06.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:06 vm04 bash[20742]: audit 2026-03-10T10:21:05.660854+0000 mon.a (mon.0) 2465 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm04-59259-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:21:06.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:06 vm04 bash[20742]: cluster 2026-03-10T10:21:05.668896+0000 mon.a (mon.0) 2466 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in
2026-03-10T10:21:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:06 vm04 bash[20742]: audit 2026-03-10T10:21:05.672276+0000 mon.c (mon.2) 454 : audit [INF] from='client.? 
192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm04-59259-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm04-59259-66"}]: dispatch 2026-03-10T10:21:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:06 vm04 bash[20742]: audit 2026-03-10T10:21:05.672276+0000 mon.c (mon.2) 454 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm04-59259-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm04-59259-66"}]: dispatch 2026-03-10T10:21:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:06 vm04 bash[20742]: audit 2026-03-10T10:21:05.678478+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm04-59259-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm04-59259-66"}]: dispatch 2026-03-10T10:21:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:06 vm04 bash[20742]: audit 2026-03-10T10:21:05.678478+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm04-59259-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm04-59259-66"}]: dispatch 2026-03-10T10:21:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:06 vm04 bash[20742]: cluster 2026-03-10T10:21:06.461057+0000 mon.a (mon.0) 2468 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:06.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:06 vm04 bash[20742]: cluster 2026-03-10T10:21:06.461057+0000 mon.a (mon.0) 2468 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:06 vm07 bash[23367]: audit 2026-03-10T10:21:05.660854+0000 mon.a (mon.0) 2465 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm04-59259-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:21:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:06 vm07 bash[23367]: audit 2026-03-10T10:21:05.660854+0000 mon.a (mon.0) 2465 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-MultiWritePP_vm04-59259-66", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:21:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:06 vm07 bash[23367]: cluster 2026-03-10T10:21:05.668896+0000 mon.a (mon.0) 2466 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-10T10:21:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:06 vm07 bash[23367]: cluster 2026-03-10T10:21:05.668896+0000 mon.a (mon.0) 2466 : cluster [DBG] osdmap e343: 8 total, 8 up, 8 in 2026-03-10T10:21:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:06 vm07 bash[23367]: audit 2026-03-10T10:21:05.672276+0000 mon.c (mon.2) 454 : audit [INF] from='client.? 
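
The audit entries above show the test fixture replacing its erasure-coded pool: the old testprofile is removed, a k=2/m=1 profile is set, and an 8-PG erasure pool is created from it. For readers reproducing this by hand, the same sequence can be issued directly with the ceph CLI; a minimal sketch follows (profile and pool names copied from the log; the Python wrapper itself is illustrative, not part of the suite):

    import subprocess

    def ceph(*args):
        # Thin wrapper around the ceph CLI; assumes a reachable cluster and an
        # admin keyring, as the test's client.admin has here.
        subprocess.run(("ceph",) + args, check=True)

    profile = "testprofile-MultiWritePP_vm04-59259-66"  # profile name from the audit log
    pool = "MultiWritePP_vm04-59259-66"                 # pool name from the audit log

    # Same k=2/m=1 profile the audit trail shows being set (mon.c entry 453):
    ceph("osd", "erasure-code-profile", "set", profile,
         "k=2", "m=1", "crush-failure-domain=osd")
    # ...and the 8-PG erasure pool created from it (mon.c entry 454):
    ceph("osd", "pool", "create", pool, "8", "8", "erasure", profile)
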
2026-03-10T10:21:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:06 vm07 bash[23367]: audit 2026-03-10T10:21:05.678478+0000 mon.a (mon.0) 2467 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "MultiWritePP_vm04-59259-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:07.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:06 vm07 bash[23367]: cluster 2026-03-10T10:21:06.461057+0000 mon.a (mon.0) 2468 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:21:08.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:08 vm07 bash[23367]: cluster 2026-03-10T10:21:06.440534+0000 mgr.y (mgr.24422) 298 : cluster [DBG] pgmap v474: 260 pgs: 260 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:21:08.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:08 vm07 bash[23367]: cluster 2026-03-10T10:21:06.711018+0000 mon.a (mon.0) 2469 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in
2026-03-10T10:21:08.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:08 vm07 bash[23367]: audit 2026-03-10T10:21:06.714093+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-51","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:08 vm04 bash[28289]: cluster 2026-03-10T10:21:06.440534+0000 mgr.y (mgr.24422) 298 : cluster [DBG] pgmap v474: 260 pgs: 260 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:21:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:08 vm04 bash[28289]: cluster 2026-03-10T10:21:06.711018+0000 mon.a (mon.0) 2469 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in
2026-03-10T10:21:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:08 vm04 bash[28289]: audit 2026-03-10T10:21:06.714093+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-51","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:08 vm04 bash[20742]: cluster 2026-03-10T10:21:06.440534+0000 mgr.y (mgr.24422) 298 : cluster [DBG] pgmap v474: 260 pgs: 260 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:21:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:08 vm04 bash[20742]: cluster 2026-03-10T10:21:06.711018+0000 mon.a (mon.0) 2469 : cluster [DBG] osdmap e344: 8 total, 8 up, 8 in
2026-03-10T10:21:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:08 vm04 bash[20742]: audit 2026-03-10T10:21:06.714093+0000 mon.a (mon.0) 2470 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-51","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:08.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:21:08 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:09 vm04 bash[28289]: audit 2026-03-10T10:21:08.409950+0000 mon.a (mon.0) 2471 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm04-59259-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm04-59259-66"}]': finished
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:09 vm04 bash[28289]: audit 2026-03-10T10:21:08.410018+0000 mon.a (mon.0) 2472 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-51","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:09 vm04 bash[28289]: cluster 2026-03-10T10:21:08.414669+0000 mon.a (mon.0) 2473 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:09 vm04 bash[28289]: cluster 2026-03-10T10:21:08.445208+0000 mgr.y (mgr.24422) 299 : cluster [DBG] pgmap v477: 300 pgs: 6 creating+activating, 13 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:09 vm04 bash[28289]: audit 2026-03-10T10:21:08.448459+0000 mgr.y (mgr.24422) 300 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:09 vm04 bash[28289]: audit 2026-03-10T10:21:08.465958+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:09 vm04 bash[20742]: audit 2026-03-10T10:21:08.409950+0000 mon.a (mon.0) 2471 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm04-59259-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm04-59259-66"}]': finished
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:09 vm04 bash[20742]: audit 2026-03-10T10:21:08.410018+0000 mon.a (mon.0) 2472 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-51","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:09 vm04 bash[20742]: cluster 2026-03-10T10:21:08.414669+0000 mon.a (mon.0) 2473 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:09 vm04 bash[20742]: cluster 2026-03-10T10:21:08.445208+0000 mgr.y (mgr.24422) 299 : cluster [DBG] pgmap v477: 300 pgs: 6 creating+activating, 13 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:09 vm04 bash[20742]: audit 2026-03-10T10:21:08.448459+0000 mgr.y (mgr.24422) 300 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:09 vm04 bash[20742]: audit 2026-03-10T10:21:08.465958+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:21:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:09 vm07 bash[23367]: audit 2026-03-10T10:21:08.409950+0000 mon.a (mon.0) 2471 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "MultiWritePP_vm04-59259-66", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-MultiWritePP_vm04-59259-66"}]': finished
2026-03-10T10:21:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:09 vm07 bash[23367]: audit 2026-03-10T10:21:08.410018+0000 mon.a (mon.0) 2472 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-51","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:09 vm07 bash[23367]: cluster 2026-03-10T10:21:08.414669+0000 mon.a (mon.0) 2473 : cluster [DBG] osdmap e345: 8 total, 8 up, 8 in
2026-03-10T10:21:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:09 vm07 bash[23367]: cluster 2026-03-10T10:21:08.445208+0000 mgr.y (mgr.24422) 299 : cluster [DBG] pgmap v477: 300 pgs: 6 creating+activating, 13 creating+peering, 21 unknown, 260 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:21:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:09 vm07 bash[23367]: audit 2026-03-10T10:21:08.448459+0000 mgr.y (mgr.24422) 300 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:09.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:09 vm07 bash[23367]: audit 2026-03-10T10:21:08.465958+0000 mon.a (mon.0) 2474 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:21:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:10 vm04 bash[28289]: audit 2026-03-10T10:21:09.418192+0000 mon.a (mon.0) 2475 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:21:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:10 vm04 bash[28289]: cluster 2026-03-10T10:21:09.430678+0000 mon.a (mon.0) 2476 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in
2026-03-10T10:21:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:10 vm04 bash[28289]: audit 2026-03-10T10:21:09.438011+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-51"}]: dispatch
2026-03-10T10:21:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:10 vm04 bash[20742]: audit 2026-03-10T10:21:09.418192+0000 mon.a (mon.0) 2475 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:21:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:10 vm04 bash[20742]: cluster 2026-03-10T10:21:09.430678+0000 mon.a (mon.0) 2476 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in
2026-03-10T10:21:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:10 vm04 bash[20742]: audit 2026-03-10T10:21:09.438011+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-51"}]: dispatch
2026-03-10T10:21:10.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:10 vm07 bash[23367]: audit 2026-03-10T10:21:09.418192+0000 mon.a (mon.0) 2475 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:21:10.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:10 vm07 bash[23367]: cluster 2026-03-10T10:21:09.430678+0000 mon.a (mon.0) 2476 : cluster [DBG] osdmap e346: 8 total, 8 up, 8 in
2026-03-10T10:21:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:10 vm07 bash[23367]: audit 2026-03-10T10:21:09.438011+0000 mon.a (mon.0) 2477 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-51"}]: dispatch
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: audit 2026-03-10T10:21:10.421809+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-51"}]': finished
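
Between osdmap epochs e345 and e347 the client attaches test-rados-api-vm04-59491-51 as a cache tier over test-rados-api-vm04-59491-6: tier add (forced, since the tier pool is non-empty), then set-overlay; the cache-mode writeback step follows below. The equivalent CLI sequence, as a hedged sketch with pool names taken from the log:

    import subprocess

    base, cache = "test-rados-api-vm04-59491-6", "test-rados-api-vm04-59491-51"

    # tier add with --force-nonempty, matching audit entry 2474:
    subprocess.run(["ceph", "osd", "tier", "add", base, cache,
                    "--force-nonempty"], check=True)
    # route client I/O for the base pool through the tier (entry 2477):
    subprocess.run(["ceph", "osd", "tier", "set-overlay", base, cache],
                   check=True)
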
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: cluster 2026-03-10T10:21:10.426433+0000 mon.a (mon.0) 2479 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: audit 2026-03-10T10:21:10.427366+0000 mon.c (mon.2) 455 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: audit 2026-03-10T10:21:10.428223+0000 mon.a (mon.0) 2480 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-51", "mode": "writeback"}]: dispatch
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: audit 2026-03-10T10:21:10.428400+0000 mon.a (mon.0) 2481 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: cluster 2026-03-10T10:21:10.445487+0000 mgr.y (mgr.24422) 301 : cluster [DBG] pgmap v480: 292 pgs: 6 creating+activating, 11 creating+peering, 275 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: cluster 2026-03-10T10:21:11.421732+0000 mon.a (mon.0) 2482 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: audit 2026-03-10T10:21:11.424849+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-51", "mode": "writeback"}]': finished
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: audit 2026-03-10T10:21:11.424877+0000 mon.a (mon.0) 2484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]': finished
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: cluster 2026-03-10T10:21:11.427579+0000 mon.a (mon.0) 2485 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: audit 2026-03-10T10:21:11.431073+0000 mon.c (mon.2) 456 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:11 vm04 bash[28289]: audit 2026-03-10T10:21:11.431335+0000 mon.a (mon.0) 2486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: audit 2026-03-10T10:21:10.421809+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-51"}]': finished
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: cluster 2026-03-10T10:21:10.426433+0000 mon.a (mon.0) 2479 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: audit 2026-03-10T10:21:10.427366+0000 mon.c (mon.2) 455 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: audit 2026-03-10T10:21:10.428223+0000 mon.a (mon.0) 2480 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-51", "mode": "writeback"}]: dispatch
2026-03-10T10:21:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: audit 2026-03-10T10:21:10.428400+0000 mon.a (mon.0) 2481 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: cluster 2026-03-10T10:21:10.445487+0000 mgr.y (mgr.24422) 301 : cluster [DBG] pgmap v480: 292 pgs: 6 creating+activating, 11 creating+peering, 275 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:21:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: cluster 2026-03-10T10:21:11.421732+0000 mon.a (mon.0) 2482 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:21:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: audit 2026-03-10T10:21:11.424849+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-51", "mode": "writeback"}]': finished
2026-03-10T10:21:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: audit 2026-03-10T10:21:11.424877+0000 mon.a (mon.0) 2484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]': finished
2026-03-10T10:21:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: cluster 2026-03-10T10:21:11.427579+0000 mon.a (mon.0) 2485 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in
2026-03-10T10:21:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: audit 2026-03-10T10:21:11.431073+0000 mon.c (mon.2) 456 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:11 vm04 bash[20742]: audit 2026-03-10T10:21:11.431335+0000 mon.a (mon.0) 2486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: audit 2026-03-10T10:21:10.421809+0000 mon.a (mon.0) 2478 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-51"}]': finished
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: cluster 2026-03-10T10:21:10.426433+0000 mon.a (mon.0) 2479 : cluster [DBG] osdmap e347: 8 total, 8 up, 8 in
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: audit 2026-03-10T10:21:10.427366+0000 mon.c (mon.2) 455 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: audit 2026-03-10T10:21:10.428223+0000 mon.a (mon.0) 2480 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-51", "mode": "writeback"}]: dispatch
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: audit 2026-03-10T10:21:10.428400+0000 mon.a (mon.0) 2481 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: cluster 2026-03-10T10:21:10.445487+0000 mgr.y (mgr.24422) 301 : cluster [DBG] pgmap v480: 292 pgs: 6 creating+activating, 11 creating+peering, 275 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: cluster 2026-03-10T10:21:11.421732+0000 mon.a (mon.0) 2482 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: audit 2026-03-10T10:21:11.424849+0000 mon.a (mon.0) 2483 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-51", "mode": "writeback"}]': finished
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: audit 2026-03-10T10:21:11.424877+0000 mon.a (mon.0) 2484 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-MultiWritePP_vm04-59259-66"}]': finished
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: cluster 2026-03-10T10:21:11.427579+0000 mon.a (mon.0) 2485 : cluster [DBG] osdmap e348: 8 total, 8 up, 8 in
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: audit 2026-03-10T10:21:11.431073+0000 mon.c (mon.2) 456 : audit [INF] from='client.? 192.168.123.104:0/1472311870' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
2026-03-10T10:21:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:11 vm07 bash[23367]: audit 2026-03-10T10:21:11.431335+0000 mon.a (mon.0) 2486 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]: dispatch 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.RoundTripWriteFullPP2 (3062 ms) 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPP 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPP (7208 ms) 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.SimpleStatPPNS 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.SimpleStatPPNS (7199 ms) 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.StatRemovePP 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.StatRemovePP (7081 ms) 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.ExecuteClassPP 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.ExecuteClassPP (7200 ms) 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.OmapPP 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.OmapPP (7233 ms) 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ RUN ] LibRadosAioEC.MultiWritePP 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ OK ] LibRadosAioEC.MultiWritePP (8288 ms) 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [----------] 20 tests from LibRadosAioEC (144109 ms total) 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [----------] Global test environment tear-down 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [==========] 57 tests from 4 test suites ran. (292459 ms total) 2026-03-10T10:21:12.615 INFO:tasks.workunit.client.0.vm04.stdout: api_aio_pp: [ PASSED ] 57 tests. 2026-03-10T10:21:12.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:12 vm04 bash[28289]: audit 2026-03-10T10:21:11.485133+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:21:12.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:12 vm04 bash[28289]: audit 2026-03-10T10:21:11.485133+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:21:12.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:12 vm04 bash[20742]: audit 2026-03-10T10:21:11.485133+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:21:12.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:12 vm04 bash[20742]: audit 2026-03-10T10:21:11.485133+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:21:13.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:12 vm07 bash[23367]: audit 2026-03-10T10:21:11.485133+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:21:13.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:12 vm07 bash[23367]: audit 2026-03-10T10:21:11.485133+0000 mon.a (mon.0) 2487 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:21:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:21:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:21:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: cluster 2026-03-10T10:21:12.445774+0000 mgr.y (mgr.24422) 302 : cluster [DBG] pgmap v482: 292 pgs: 6 creating+activating, 11 creating+peering, 275 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 761 B/s wr, 1 op/s 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: cluster 2026-03-10T10:21:12.445774+0000 mgr.y (mgr.24422) 302 : cluster [DBG] pgmap v482: 292 pgs: 6 creating+activating, 11 creating+peering, 275 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 761 B/s wr, 1 op/s 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:12.605281+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]': finished 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:12.605281+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]': finished 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:12.605357+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:12.605357+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: cluster 2026-03-10T10:21:12.609720+0000 mon.a (mon.0) 2490 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: cluster 2026-03-10T10:21:12.609720+0000 mon.a (mon.0) 2490 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:12.617993+0000 mon.a (mon.0) 2491 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:12.617993+0000 mon.a (mon.0) 2491 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:12.913380+0000 mon.a (mon.0) 2492 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:12.913380+0000 mon.a (mon.0) 2492 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:13.609214+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:13.609214+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: cluster 2026-03-10T10:21:13.612101+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: cluster 2026-03-10T10:21:13.612101+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:13.612516+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:21:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:13 vm04 bash[28289]: audit 2026-03-10T10:21:13.612516+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: cluster 2026-03-10T10:21:12.445774+0000 mgr.y (mgr.24422) 302 : cluster [DBG] pgmap v482: 292 pgs: 6 creating+activating, 11 creating+peering, 275 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 761 B/s wr, 1 op/s 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: cluster 2026-03-10T10:21:12.445774+0000 mgr.y (mgr.24422) 302 : cluster [DBG] pgmap v482: 292 pgs: 6 creating+activating, 11 creating+peering, 275 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 761 B/s wr, 1 op/s 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:12.605281+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]': finished 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:12.605281+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]': finished 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:12.605357+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:12.605357+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: cluster 2026-03-10T10:21:12.609720+0000 mon.a (mon.0) 2490 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: cluster 2026-03-10T10:21:12.609720+0000 mon.a (mon.0) 2490 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:12.617993+0000 mon.a (mon.0) 2491 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:12.617993+0000 mon.a (mon.0) 2491 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:12.913380+0000 mon.a (mon.0) 2492 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:12.913380+0000 mon.a (mon.0) 2492 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:13.609214+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:13.609214+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: cluster 2026-03-10T10:21:13.612101+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: cluster 2026-03-10T10:21:13.612101+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T10:21:13.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:13.612516+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:21:13.958 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:13 vm04 bash[20742]: audit 2026-03-10T10:21:13.612516+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: cluster 2026-03-10T10:21:12.445774+0000 mgr.y (mgr.24422) 302 : cluster [DBG] pgmap v482: 292 pgs: 6 creating+activating, 11 creating+peering, 275 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 761 B/s wr, 1 op/s 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: cluster 2026-03-10T10:21:12.445774+0000 mgr.y (mgr.24422) 302 : cluster [DBG] pgmap v482: 292 pgs: 6 creating+activating, 11 creating+peering, 275 active+clean; 8.3 MiB data, 724 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 761 B/s wr, 1 op/s 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:12.605281+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]': finished 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:12.605281+0000 mon.a (mon.0) 2488 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"MultiWritePP_vm04-59259-66"}]': finished 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:12.605357+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:12.605357+0000 mon.a (mon.0) 2489 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: cluster 2026-03-10T10:21:12.609720+0000 mon.a (mon.0) 2490 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: cluster 2026-03-10T10:21:12.609720+0000 mon.a (mon.0) 2490 : cluster [DBG] osdmap e349: 8 total, 8 up, 8 in 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:12.617993+0000 mon.a (mon.0) 2491 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:12.617993+0000 mon.a (mon.0) 2491 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:12.913380+0000 mon.a (mon.0) 2492 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:12.913380+0000 mon.a (mon.0) 2492 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:13.609214+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:13.609214+0000 mon.a (mon.0) 2493 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: cluster 2026-03-10T10:21:13.612101+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: cluster 2026-03-10T10:21:13.612101+0000 mon.a (mon.0) 2494 : cluster [DBG] osdmap e350: 8 total, 8 up, 8 in 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:13.612516+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:21:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:13 vm07 bash[23367]: audit 2026-03-10T10:21:13.612516+0000 mon.a (mon.0) 2495 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:21:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:14 vm07 bash[23367]: cluster 2026-03-10T10:21:14.609367+0000 mon.a (mon.0) 2496 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:21:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:14 vm07 bash[23367]: cluster 2026-03-10T10:21:14.609367+0000 mon.a (mon.0) 2496 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:21:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:14 vm07 bash[23367]: audit 2026-03-10T10:21:14.670815+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:21:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:14 vm07 bash[23367]: audit 2026-03-10T10:21:14.670815+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:21:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:14 vm07 bash[23367]: cluster 2026-03-10T10:21:14.677692+0000 mon.a (mon.0) 2498 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T10:21:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:14 vm07 bash[23367]: cluster 2026-03-10T10:21:14.677692+0000 mon.a (mon.0) 2498 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T10:21:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:14 vm07 bash[23367]: audit 2026-03-10T10:21:14.679162+0000 mon.a (mon.0) 2499 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:21:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:14 vm07 bash[23367]: audit 2026-03-10T10:21:14.679162+0000 mon.a (mon.0) 2499 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:14 vm04 bash[28289]: cluster 2026-03-10T10:21:14.609367+0000 mon.a (mon.0) 2496 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:14 vm04 bash[28289]: cluster 2026-03-10T10:21:14.609367+0000 mon.a (mon.0) 2496 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:14 vm04 bash[28289]: audit 2026-03-10T10:21:14.670815+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:14 vm04 bash[28289]: audit 2026-03-10T10:21:14.670815+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:14 vm04 bash[28289]: cluster 2026-03-10T10:21:14.677692+0000 mon.a (mon.0) 2498 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:14 vm04 bash[28289]: cluster 2026-03-10T10:21:14.677692+0000 mon.a (mon.0) 2498 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:14 vm04 bash[28289]: audit 2026-03-10T10:21:14.679162+0000 mon.a (mon.0) 2499 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:14 vm04 bash[28289]: audit 2026-03-10T10:21:14.679162+0000 mon.a (mon.0) 2499 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:14 vm04 bash[20742]: cluster 2026-03-10T10:21:14.609367+0000 mon.a (mon.0) 2496 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:14 vm04 bash[20742]: cluster 2026-03-10T10:21:14.609367+0000 mon.a (mon.0) 2496 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:14 vm04 bash[20742]: audit 2026-03-10T10:21:14.670815+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:14 vm04 bash[20742]: audit 2026-03-10T10:21:14.670815+0000 mon.a (mon.0) 2497 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:14 vm04 bash[20742]: cluster 2026-03-10T10:21:14.677692+0000 mon.a (mon.0) 2498 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:14 vm04 bash[20742]: cluster 2026-03-10T10:21:14.677692+0000 mon.a (mon.0) 2498 : cluster [DBG] osdmap e351: 8 total, 8 up, 8 in 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:14 vm04 bash[20742]: audit 2026-03-10T10:21:14.679162+0000 mon.a (mon.0) 2499 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:21:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:14 vm04 bash[20742]: audit 2026-03-10T10:21:14.679162+0000 mon.a (mon.0) 2499 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:21:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:15 vm07 bash[23367]: cluster 2026-03-10T10:21:14.446438+0000 mgr.y (mgr.24422) 303 : cluster [DBG] pgmap v485: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 254 B/s wr, 3 op/s 2026-03-10T10:21:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:15 vm07 bash[23367]: cluster 2026-03-10T10:21:14.446438+0000 mgr.y (mgr.24422) 303 : cluster [DBG] pgmap v485: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 254 B/s wr, 3 op/s 2026-03-10T10:21:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:15 vm07 bash[23367]: audit 2026-03-10T10:21:15.674276+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:21:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:15 vm07 bash[23367]: audit 2026-03-10T10:21:15.674276+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:21:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:15 vm07 bash[23367]: cluster 2026-03-10T10:21:15.682413+0000 mon.a (mon.0) 2501 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T10:21:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:15 vm07 bash[23367]: cluster 2026-03-10T10:21:15.682413+0000 mon.a (mon.0) 2501 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T10:21:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:15 vm07 bash[23367]: audit 2026-03-10T10:21:15.683023+0000 mon.a (mon.0) 2502 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:21:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:15 vm07 bash[23367]: audit 2026-03-10T10:21:15.683023+0000 mon.a (mon.0) 2502 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:15 vm04 bash[28289]: cluster 2026-03-10T10:21:14.446438+0000 mgr.y (mgr.24422) 303 : cluster [DBG] pgmap v485: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 254 B/s wr, 3 op/s 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:15 vm04 bash[28289]: cluster 2026-03-10T10:21:14.446438+0000 mgr.y (mgr.24422) 303 : cluster [DBG] pgmap v485: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 254 B/s wr, 3 op/s 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:15 vm04 bash[28289]: audit 2026-03-10T10:21:15.674276+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:15 vm04 bash[28289]: audit 2026-03-10T10:21:15.674276+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:15 vm04 bash[28289]: cluster 2026-03-10T10:21:15.682413+0000 mon.a (mon.0) 2501 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:15 vm04 bash[28289]: cluster 2026-03-10T10:21:15.682413+0000 mon.a (mon.0) 2501 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:15 vm04 bash[28289]: audit 2026-03-10T10:21:15.683023+0000 mon.a (mon.0) 2502 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:15 vm04 bash[28289]: audit 2026-03-10T10:21:15.683023+0000 mon.a (mon.0) 2502 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:15 vm04 bash[20742]: cluster 2026-03-10T10:21:14.446438+0000 mgr.y (mgr.24422) 303 : cluster [DBG] pgmap v485: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 254 B/s wr, 3 op/s 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:15 vm04 bash[20742]: cluster 2026-03-10T10:21:14.446438+0000 mgr.y (mgr.24422) 303 : cluster [DBG] pgmap v485: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 254 B/s wr, 3 op/s 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:15 vm04 bash[20742]: audit 2026-03-10T10:21:15.674276+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:15 vm04 bash[20742]: audit 2026-03-10T10:21:15.674276+0000 mon.a (mon.0) 2500 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:15 vm04 bash[20742]: cluster 2026-03-10T10:21:15.682413+0000 mon.a (mon.0) 2501 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:15 vm04 bash[20742]: cluster 2026-03-10T10:21:15.682413+0000 mon.a (mon.0) 2501 : cluster [DBG] osdmap e352: 8 total, 8 up, 8 in 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:15 vm04 bash[20742]: audit 2026-03-10T10:21:15.683023+0000 mon.a (mon.0) 2502 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:21:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:15 vm04 bash[20742]: audit 2026-03-10T10:21:15.683023+0000 mon.a (mon.0) 2502 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:17 vm04 bash[28289]: cluster 2026-03-10T10:21:16.446768+0000 mgr.y (mgr.24422) 304 : cluster [DBG] pgmap v488: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:17 vm04 bash[28289]: cluster 2026-03-10T10:21:16.446768+0000 mgr.y (mgr.24422) 304 : cluster [DBG] pgmap v488: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:17 vm04 bash[28289]: audit 2026-03-10T10:21:16.677012+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:17 vm04 bash[28289]: audit 2026-03-10T10:21:16.677012+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:17 vm04 bash[28289]: cluster 2026-03-10T10:21:16.680486+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:17 vm04 bash[28289]: cluster 2026-03-10T10:21:16.680486+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:17 vm04 bash[20742]: cluster 2026-03-10T10:21:16.446768+0000 mgr.y (mgr.24422) 304 : cluster [DBG] pgmap v488: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:17 vm04 bash[20742]: cluster 2026-03-10T10:21:16.446768+0000 mgr.y (mgr.24422) 304 : cluster [DBG] pgmap v488: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:17 vm04 bash[20742]: audit 2026-03-10T10:21:16.677012+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:17 vm04 bash[20742]: audit 2026-03-10T10:21:16.677012+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:17 vm04 bash[20742]: cluster 2026-03-10T10:21:16.680486+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T10:21:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:17 vm04 bash[20742]: cluster 2026-03-10T10:21:16.680486+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T10:21:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:17 vm07 bash[23367]: cluster 2026-03-10T10:21:16.446768+0000 mgr.y (mgr.24422) 304 : cluster [DBG] pgmap v488: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T10:21:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:17 vm07 bash[23367]: cluster 2026-03-10T10:21:16.446768+0000 mgr.y (mgr.24422) 304 : cluster [DBG] pgmap v488: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s 2026-03-10T10:21:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:17 vm07 bash[23367]: audit 2026-03-10T10:21:16.677012+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:21:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:17 vm07 bash[23367]: audit 2026-03-10T10:21:16.677012+0000 mon.a (mon.0) 2503 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-51","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:21:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:17 vm07 bash[23367]: cluster 2026-03-10T10:21:16.680486+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T10:21:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:17 vm07 bash[23367]: cluster 2026-03-10T10:21:16.680486+0000 mon.a (mon.0) 2504 : cluster [DBG] osdmap e353: 8 total, 8 up, 8 in 2026-03-10T10:21:18.732 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:21:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:21:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:18 vm07 bash[23367]: cluster 2026-03-10T10:21:18.679133+0000 mon.a (mon.0) 2505 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:21:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:18 vm07 bash[23367]: cluster 2026-03-10T10:21:18.679133+0000 mon.a (mon.0) 2505 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:21:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:18 vm04 bash[28289]: cluster 2026-03-10T10:21:18.679133+0000 mon.a (mon.0) 2505 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:21:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:18 vm04 bash[28289]: cluster 2026-03-10T10:21:18.679133+0000 mon.a (mon.0) 2505 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:21:19.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:18 vm04 
bash[20742]: cluster 2026-03-10T10:21:18.679133+0000 mon.a (mon.0) 2505 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:21:19.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:18 vm04 bash[20742]: cluster 2026-03-10T10:21:18.679133+0000 mon.a (mon.0) 2505 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:21:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:19 vm04 bash[28289]: cluster 2026-03-10T10:21:18.447371+0000 mgr.y (mgr.24422) 305 : cluster [DBG] pgmap v490: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 2 op/s 2026-03-10T10:21:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:19 vm04 bash[28289]: cluster 2026-03-10T10:21:18.447371+0000 mgr.y (mgr.24422) 305 : cluster [DBG] pgmap v490: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 2 op/s 2026-03-10T10:21:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:19 vm04 bash[28289]: audit 2026-03-10T10:21:18.458831+0000 mgr.y (mgr.24422) 306 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:19 vm04 bash[28289]: audit 2026-03-10T10:21:18.458831+0000 mgr.y (mgr.24422) 306 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:19 vm04 bash[20742]: cluster 2026-03-10T10:21:18.447371+0000 mgr.y (mgr.24422) 305 : cluster [DBG] pgmap v490: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 2 op/s 2026-03-10T10:21:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:19 vm04 bash[20742]: cluster 2026-03-10T10:21:18.447371+0000 mgr.y (mgr.24422) 305 : cluster [DBG] pgmap v490: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 2 op/s 2026-03-10T10:21:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:19 vm04 bash[20742]: audit 2026-03-10T10:21:18.458831+0000 mgr.y (mgr.24422) 306 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:19 vm04 bash[20742]: audit 2026-03-10T10:21:18.458831+0000 mgr.y (mgr.24422) 306 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:19 vm07 bash[23367]: cluster 2026-03-10T10:21:18.447371+0000 mgr.y (mgr.24422) 305 : cluster [DBG] pgmap v490: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 2 op/s 2026-03-10T10:21:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:19 vm07 bash[23367]: cluster 2026-03-10T10:21:18.447371+0000 mgr.y (mgr.24422) 305 : cluster [DBG] pgmap v490: 292 pgs: 292 active+clean; 8.3 MiB data, 743 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 2 op/s 2026-03-10T10:21:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:19 vm07 bash[23367]: audit 2026-03-10T10:21:18.458831+0000 mgr.y (mgr.24422) 306 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-10T10:21:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:19 vm07 bash[23367]: audit 2026-03-10T10:21:18.458831+0000 mgr.y (mgr.24422) 306 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:21 vm04 bash[28289]: cluster 2026-03-10T10:21:20.447923+0000 mgr.y (mgr.24422) 307 : cluster [DBG] pgmap v491: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:21 vm04 bash[28289]: cluster 2026-03-10T10:21:20.447923+0000 mgr.y (mgr.24422) 307 : cluster [DBG] pgmap v491: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:21 vm04 bash[20742]: cluster 2026-03-10T10:21:20.447923+0000 mgr.y (mgr.24422) 307 : cluster [DBG] pgmap v491: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:21 vm04 bash[20742]: cluster 2026-03-10T10:21:20.447923+0000 mgr.y (mgr.24422) 307 : cluster [DBG] pgmap v491: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:22.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:21 vm07 bash[23367]: cluster 2026-03-10T10:21:20.447923+0000 mgr.y (mgr.24422) 307 : cluster [DBG] pgmap v491: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:22.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:21 vm07 bash[23367]: cluster 2026-03-10T10:21:20.447923+0000 mgr.y (mgr.24422) 307 : cluster [DBG] pgmap v491: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:21:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:21:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:21:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:23 vm04 bash[20742]: cluster 2026-03-10T10:21:22.448255+0000 mgr.y (mgr.24422) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 658 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T10:21:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:23 vm04 bash[20742]: cluster 2026-03-10T10:21:22.448255+0000 mgr.y (mgr.24422) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 658 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T10:21:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:23 vm04 bash[28289]: cluster 2026-03-10T10:21:22.448255+0000 mgr.y (mgr.24422) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 658 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T10:21:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:23 vm04 bash[28289]: cluster 2026-03-10T10:21:22.448255+0000 mgr.y (mgr.24422) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 658 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T10:21:24.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
10:21:23 vm07 bash[23367]: cluster 2026-03-10T10:21:22.448255+0000 mgr.y (mgr.24422) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 658 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T10:21:24.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:23 vm07 bash[23367]: cluster 2026-03-10T10:21:22.448255+0000 mgr.y (mgr.24422) 308 : cluster [DBG] pgmap v492: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 658 B/s rd, 0 B/s wr, 0 op/s 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:25 vm04 bash[28289]: cluster 2026-03-10T10:21:24.449011+0000 mgr.y (mgr.24422) 309 : cluster [DBG] pgmap v493: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:25 vm04 bash[28289]: cluster 2026-03-10T10:21:24.449011+0000 mgr.y (mgr.24422) 309 : cluster [DBG] pgmap v493: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:25 vm04 bash[28289]: audit 2026-03-10T10:21:25.173789+0000 mon.a (mon.0) 2506 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:25 vm04 bash[28289]: audit 2026-03-10T10:21:25.173789+0000 mon.a (mon.0) 2506 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:25 vm04 bash[28289]: audit 2026-03-10T10:21:25.520706+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:25 vm04 bash[28289]: audit 2026-03-10T10:21:25.520706+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:25 vm04 bash[28289]: audit 2026-03-10T10:21:25.521280+0000 mon.a (mon.0) 2508 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:25 vm04 bash[28289]: audit 2026-03-10T10:21:25.521280+0000 mon.a (mon.0) 2508 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:25 vm04 bash[28289]: audit 2026-03-10T10:21:25.575433+0000 mon.a (mon.0) 2509 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:25 vm04 bash[28289]: audit 2026-03-10T10:21:25.575433+0000 mon.a (mon.0) 2509 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:25 vm04 bash[20742]: cluster 2026-03-10T10:21:24.449011+0000 mgr.y (mgr.24422) 309 : cluster [DBG] pgmap v493: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 
159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:25 vm04 bash[20742]: cluster 2026-03-10T10:21:24.449011+0000 mgr.y (mgr.24422) 309 : cluster [DBG] pgmap v493: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:25 vm04 bash[20742]: audit 2026-03-10T10:21:25.173789+0000 mon.a (mon.0) 2506 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:25 vm04 bash[20742]: audit 2026-03-10T10:21:25.173789+0000 mon.a (mon.0) 2506 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:25 vm04 bash[20742]: audit 2026-03-10T10:21:25.520706+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:25 vm04 bash[20742]: audit 2026-03-10T10:21:25.520706+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:25 vm04 bash[20742]: audit 2026-03-10T10:21:25.521280+0000 mon.a (mon.0) 2508 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:25 vm04 bash[20742]: audit 2026-03-10T10:21:25.521280+0000 mon.a (mon.0) 2508 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:25 vm04 bash[20742]: audit 2026-03-10T10:21:25.575433+0000 mon.a (mon.0) 2509 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:25 vm04 bash[20742]: audit 2026-03-10T10:21:25.575433+0000 mon.a (mon.0) 2509 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:25 vm07 bash[23367]: cluster 2026-03-10T10:21:24.449011+0000 mgr.y (mgr.24422) 309 : cluster [DBG] pgmap v493: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:25 vm07 bash[23367]: cluster 2026-03-10T10:21:24.449011+0000 mgr.y (mgr.24422) 309 : cluster [DBG] pgmap v493: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:21:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:25 vm07 bash[23367]: audit 2026-03-10T10:21:25.173789+0000 mon.a (mon.0) 2506 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:25 vm07 bash[23367]: audit 
2026-03-10T10:21:25.173789+0000 mon.a (mon.0) 2506 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:21:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:25 vm07 bash[23367]: audit 2026-03-10T10:21:25.520706+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:25 vm07 bash[23367]: audit 2026-03-10T10:21:25.520706+0000 mon.a (mon.0) 2507 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:21:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:25 vm07 bash[23367]: audit 2026-03-10T10:21:25.521280+0000 mon.a (mon.0) 2508 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:21:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:25 vm07 bash[23367]: audit 2026-03-10T10:21:25.521280+0000 mon.a (mon.0) 2508 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:21:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:25 vm07 bash[23367]: audit 2026-03-10T10:21:25.575433+0000 mon.a (mon.0) 2509 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:25 vm07 bash[23367]: audit 2026-03-10T10:21:25.575433+0000 mon.a (mon.0) 2509 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:26 vm04 bash[28289]: audit 2026-03-10T10:21:26.705588+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:26 vm04 bash[28289]: audit 2026-03-10T10:21:26.705588+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:26 vm04 bash[20742]: audit 2026-03-10T10:21:26.705588+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:26 vm04 bash[20742]: audit 2026-03-10T10:21:26.705588+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:27.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:26 vm07 bash[23367]: audit 2026-03-10T10:21:26.705588+0000 mon.a (mon.0) 2510 : audit [INF] from='client.? 
2026-03-10T10:21:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:27 vm04 bash[28289]: cluster 2026-03-10T10:21:26.449311+0000 mgr.y (mgr.24422) 310 : cluster [DBG] pgmap v494: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:21:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:27 vm04 bash[28289]: audit 2026-03-10T10:21:26.870299+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:21:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:27 vm04 bash[28289]: cluster 2026-03-10T10:21:26.875398+0000 mon.a (mon.0) 2512 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in
2026-03-10T10:21:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:27 vm04 bash[28289]: audit 2026-03-10T10:21:26.881072+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51"}]: dispatch
2026-03-10T10:21:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:27 vm04 bash[20742]: cluster 2026-03-10T10:21:26.449311+0000 mgr.y (mgr.24422) 310 : cluster [DBG] pgmap v494: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:21:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:27 vm04 bash[20742]: audit 2026-03-10T10:21:26.870299+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:21:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:27 vm04 bash[20742]: cluster 2026-03-10T10:21:26.875398+0000 mon.a (mon.0) 2512 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in
2026-03-10T10:21:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:27 vm04 bash[20742]: audit 2026-03-10T10:21:26.881072+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51"}]: dispatch
2026-03-10T10:21:28.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:27 vm07 bash[23367]: cluster 2026-03-10T10:21:26.449311+0000 mgr.y (mgr.24422) 310 : cluster [DBG] pgmap v494: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:21:28.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:27 vm07 bash[23367]: audit 2026-03-10T10:21:26.870299+0000 mon.a (mon.0) 2511 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished
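
The client.admin entries 2510-2513 are the rados API test tearing down a cache tier in two steps: osd tier remove-overlay on the base pool, then osd tier remove for the tier pool. Each step is audited as a dispatch/finished pair and bumps the osdmap epoch (e354 above). A sketch of the same teardown via python-rados, with the pool names taken from the log and the conffile path assumed:

    # Sketch of the two-step cache-tier teardown audited above (2510/2513).
    # Pool names are the generated test pools from the log; conffile assumed.
    import json
    import rados

    def mon_cmd(cluster, **payload):
        # Each call produces one 'dispatch' audit entry and, once the map
        # change commits, a matching 'finished' entry.
        ret, out, outs = cluster.mon_command(json.dumps(payload), b'')
        if ret != 0:
            raise RuntimeError(outs)
        return out

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    mon_cmd(cluster, prefix='osd tier remove-overlay',
            pool='test-rados-api-vm04-59491-6')
    mon_cmd(cluster, prefix='osd tier remove',
            pool='test-rados-api-vm04-59491-6',
            tierpool='test-rados-api-vm04-59491-51')
    cluster.shutdown()
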
2026-03-10T10:21:28.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:27 vm07 bash[23367]: cluster 2026-03-10T10:21:26.875398+0000 mon.a (mon.0) 2512 : cluster [DBG] osdmap e354: 8 total, 8 up, 8 in
2026-03-10T10:21:28.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:27 vm07 bash[23367]: audit 2026-03-10T10:21:26.881072+0000 mon.a (mon.0) 2513 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51"}]: dispatch
2026-03-10T10:21:28.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:21:28 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:28 vm04 bash[28289]: audit 2026-03-10T10:21:27.874378+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51"}]': finished
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:28 vm04 bash[28289]: cluster 2026-03-10T10:21:27.877344+0000 mon.a (mon.0) 2515 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:28 vm04 bash[28289]: audit 2026-03-10T10:21:27.923448+0000 mon.a (mon.0) 2516 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:28 vm04 bash[28289]: audit 2026-03-10T10:21:27.924541+0000 mon.a (mon.0) 2517 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:28 vm04 bash[28289]: audit 2026-03-10T10:21:27.937438+0000 mon.a (mon.0) 2518 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:28 vm04 bash[28289]: audit 2026-03-10T10:21:27.937643+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51"}]: dispatch
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:28 vm04 bash[20742]: audit 2026-03-10T10:21:27.874378+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51"}]': finished
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:28 vm04 bash[20742]: cluster 2026-03-10T10:21:27.877344+0000 mon.a (mon.0) 2515 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:28 vm04 bash[20742]: audit 2026-03-10T10:21:27.923448+0000 mon.a (mon.0) 2516 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:28 vm04 bash[20742]: audit 2026-03-10T10:21:27.924541+0000 mon.a (mon.0) 2517 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:28 vm04 bash[20742]: audit 2026-03-10T10:21:27.937438+0000 mon.a (mon.0) 2518 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:28 vm04 bash[20742]: audit 2026-03-10T10:21:27.937643+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51"}]: dispatch
2026-03-10T10:21:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:28 vm07 bash[23367]: audit 2026-03-10T10:21:27.874378+0000 mon.a (mon.0) 2514 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51"}]': finished
2026-03-10T10:21:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:28 vm07 bash[23367]: cluster 2026-03-10T10:21:27.877344+0000 mon.a (mon.0) 2515 : cluster [DBG] osdmap e355: 8 total, 8 up, 8 in
2026-03-10T10:21:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:28 vm07 bash[23367]: audit 2026-03-10T10:21:27.923448+0000 mon.a (mon.0) 2516 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:21:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:28 vm07 bash[23367]: audit 2026-03-10T10:21:27.924541+0000 mon.a (mon.0) 2517 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:21:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:28 vm07 bash[23367]: audit 2026-03-10T10:21:27.937438+0000 mon.a (mon.0) 2518 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:28 vm07 bash[23367]: audit 2026-03-10T10:21:27.937643+0000 mon.a (mon.0) 2519 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-51"}]: dispatch
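
Every mutating command shows up twice in the audit stream: dispatch when the monitor receives it and finished when the map update commits, with the osdmap epoch advancing in between (e354 to e355 above). When reading a saved copy of a log like this one, pairing the two gives a rough per-command commit latency; a sketch, with a hypothetical file path and a regex tailored to the single-object cmd payloads seen here:

    # Hypothetical post-processing: pair 'dispatch'/'finished' audit entries
    # from a saved copy of this log and report commit latency per command.
    import re
    from datetime import datetime

    AUDIT = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+)\+0000 mon\.\w+ "
        r"\(mon\.\d+\) \d+ : audit \[\w+\] .*? "
        r"cmd='?(?P<cmd>\[[^\]]*\])'?: (?P<phase>dispatch|finished)")

    def commit_latencies(path):
        pending = {}
        for line in open(path):
            m = AUDIT.search(line)
            if not m:
                continue
            ts = datetime.fromisoformat(m['ts'])
            if m['phase'] == 'dispatch':
                pending[m['cmd']] = ts
            elif m['cmd'] in pending:
                yield m['cmd'], (ts - pending.pop(m['cmd'])).total_seconds()

For example, entries 2513 (dispatch, 10:21:26.881072) and 2514 (finished, 10:21:27.874378) pair up at about 1 s.
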
2026-03-10T10:21:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:29 vm04 bash[28289]: cluster 2026-03-10T10:21:28.449842+0000 mgr.y (mgr.24422) 311 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 0 op/s
2026-03-10T10:21:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:29 vm04 bash[28289]: audit 2026-03-10T10:21:28.469384+0000 mgr.y (mgr.24422) 312 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:29 vm04 bash[28289]: cluster 2026-03-10T10:21:28.921886+0000 mon.a (mon.0) 2520 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size)
2026-03-10T10:21:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:29 vm04 bash[28289]: cluster 2026-03-10T10:21:28.934724+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in
2026-03-10T10:21:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:29 vm04 bash[20742]: cluster 2026-03-10T10:21:28.449842+0000 mgr.y (mgr.24422) 311 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 0 op/s
2026-03-10T10:21:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:29 vm04 bash[20742]: audit 2026-03-10T10:21:28.469384+0000 mgr.y (mgr.24422) 312 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:29 vm04 bash[20742]: cluster 2026-03-10T10:21:28.921886+0000 mon.a (mon.0) 2520 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size)
2026-03-10T10:21:30.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:29 vm04 bash[20742]: cluster 2026-03-10T10:21:28.934724+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in
2026-03-10T10:21:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:29 vm07 bash[23367]: cluster 2026-03-10T10:21:28.449842+0000 mgr.y (mgr.24422) 311 : cluster [DBG] pgmap v497: 292 pgs: 292 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 B/s wr, 0 op/s
2026-03-10T10:21:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:29 vm07 bash[23367]: audit 2026-03-10T10:21:28.469384+0000 mgr.y (mgr.24422) 312 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:29 vm07 bash[23367]: cluster 2026-03-10T10:21:28.921886+0000 mon.a (mon.0) 2520 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size)
2026-03-10T10:21:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:29 vm07 bash[23367]: cluster 2026-03-10T10:21:28.934724+0000 mon.a (mon.0) 2521 : cluster [DBG] osdmap e356: 8 total, 8 up, 8 in
2026-03-10T10:21:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:30 vm04 bash[28289]: cluster 2026-03-10T10:21:29.937075+0000 mon.a (mon.0) 2522 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in
2026-03-10T10:21:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:30 vm04 bash[28289]: audit 2026-03-10T10:21:29.939125+0000 mon.a (mon.0) 2523 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-53","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:30 vm04 bash[20742]: cluster 2026-03-10T10:21:29.937075+0000 mon.a (mon.0) 2522 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in
2026-03-10T10:21:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:30 vm04 bash[20742]: audit 2026-03-10T10:21:29.939125+0000 mon.a (mon.0) 2523 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-53","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:31.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:30 vm07 bash[23367]: cluster 2026-03-10T10:21:29.937075+0000 mon.a (mon.0) 2522 : cluster [DBG] osdmap e357: 8 total, 8 up, 8 in
2026-03-10T10:21:31.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:30 vm07 bash[23367]: audit 2026-03-10T10:21:29.939125+0000 mon.a (mon.0) 2523 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-53","app": "rados","yes_i_really_mean_it": true}]: dispatch
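
Entry 2523 (and its finished counterpart 2525 below) shows the test tagging a freshly created pool with an application via osd pool application enable, which is what later drives the POOL_APP_NOT_ENABLED health counter down; the yes_i_really_mean_it flag is passed exactly as it appears in the audited payload. The equivalent call through python-rados, with the pool name from the log and the conffile path assumed:

    # Sketch of the pool-tagging command audited as 2523 (dispatch) and
    # 2525 (finished); the payload mirrors the cmd=[{...}] JSON above.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path assumed
    cluster.connect()
    ret, out, outs = cluster.mon_command(json.dumps({
        'prefix': 'osd pool application enable',
        'pool': 'test-rados-api-vm04-59491-53',
        'app': 'rados',
        'yes_i_really_mean_it': True,
    }), b'')
    assert ret == 0, outs
    cluster.shutdown()
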
2026-03-10T10:21:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:31 vm07 bash[23367]: cluster 2026-03-10T10:21:30.450158+0000 mgr.y (mgr.24422) 313 : cluster [DBG] pgmap v500: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s
2026-03-10T10:21:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:31 vm07 bash[23367]: cluster 2026-03-10T10:21:30.934526+0000 mon.a (mon.0) 2524 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:21:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:31 vm07 bash[23367]: audit 2026-03-10T10:21:30.944332+0000 mon.a (mon.0) 2525 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-53","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:31 vm07 bash[23367]: cluster 2026-03-10T10:21:30.959915+0000 mon.a (mon.0) 2526 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in
2026-03-10T10:21:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:31 vm07 bash[23367]: audit 2026-03-10T10:21:31.010194+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:31 vm07 bash[23367]: audit 2026-03-10T10:21:31.010472+0000 mon.a (mon.0) 2528 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-53"}]: dispatch
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:31 vm04 bash[28289]: cluster 2026-03-10T10:21:30.450158+0000 mgr.y (mgr.24422) 313 : cluster [DBG] pgmap v500: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:31 vm04 bash[28289]: cluster 2026-03-10T10:21:30.934526+0000 mon.a (mon.0) 2524 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:31 vm04 bash[28289]: audit 2026-03-10T10:21:30.944332+0000 mon.a (mon.0) 2525 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-53","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:31 vm04 bash[28289]: cluster 2026-03-10T10:21:30.959915+0000 mon.a (mon.0) 2526 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:31 vm04 bash[28289]: audit 2026-03-10T10:21:31.010194+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:31 vm04 bash[28289]: audit 2026-03-10T10:21:31.010472+0000 mon.a (mon.0) 2528 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-53"}]: dispatch
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:31 vm04 bash[20742]: cluster 2026-03-10T10:21:30.450158+0000 mgr.y (mgr.24422) 313 : cluster [DBG] pgmap v500: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:31 vm04 bash[20742]: cluster 2026-03-10T10:21:30.934526+0000 mon.a (mon.0) 2524 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:31 vm04 bash[20742]: audit 2026-03-10T10:21:30.944332+0000 mon.a (mon.0) 2525 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-53","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:31 vm04 bash[20742]: cluster 2026-03-10T10:21:30.959915+0000 mon.a (mon.0) 2526 : cluster [DBG] osdmap e358: 8 total, 8 up, 8 in
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:31 vm04 bash[20742]: audit 2026-03-10T10:21:31.010194+0000 mon.a (mon.0) 2527 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:31 vm04 bash[20742]: audit 2026-03-10T10:21:31.010472+0000 mon.a (mon.0) 2528 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-53"}]: dispatch
2026-03-10T10:21:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:21:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:21:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:21:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:33 vm04 bash[28289]: cluster 2026-03-10T10:21:31.951393+0000 mon.a (mon.0) 2529 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in
2026-03-10T10:21:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:33 vm04 bash[20742]: cluster 2026-03-10T10:21:31.951393+0000 mon.a (mon.0) 2529 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in
2026-03-10T10:21:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:33 vm07 bash[23367]: cluster 2026-03-10T10:21:31.951393+0000 mon.a (mon.0) 2529 : cluster [DBG] osdmap e359: 8 total, 8 up, 8 in
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:34 vm04 bash[28289]: cluster 2026-03-10T10:21:32.450438+0000 mgr.y (mgr.24422) 314 : cluster [DBG] pgmap v503: 260 pgs: 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:34 vm04 bash[28289]: cluster 2026-03-10T10:21:33.503341+0000 mon.a (mon.0) 2530 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in
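
Mixed into the mon stream above, mgr.y logged a Prometheus scrape of its /metrics endpoint returning HTTP 503 (from Prometheus 2.51.0 on 192.168.123.107). To check the exporter by hand one can repeat the scrape; a sketch, assuming the mgr prometheus module listens on its default port 9283 (the log shows only the status line, not the port):

    # Hypothetical manual scrape of the mgr prometheus endpoint; the host
    # comes from the log, port 9283 is the module default (an assumption).
    from urllib.error import HTTPError
    from urllib.request import urlopen

    try:
        with urlopen('http://vm04:9283/metrics', timeout=10) as resp:
            print(resp.status, len(resp.read()), 'bytes')
    except HTTPError as exc:
        print('scrape failed with HTTP', exc.code)  # e.g. the 503 above
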
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:34 vm04 bash[28289]: audit 2026-03-10T10:21:33.516557+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-55","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:34 vm04 bash[28289]: audit 2026-03-10T10:21:34.395658+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-55","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:34 vm04 bash[28289]: cluster 2026-03-10T10:21:34.399676+0000 mon.a (mon.0) 2533 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:34 vm04 bash[28289]: audit 2026-03-10T10:21:34.400769+0000 mon.a (mon.0) 2534 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:34 vm04 bash[20742]: cluster 2026-03-10T10:21:32.450438+0000 mgr.y (mgr.24422) 314 : cluster [DBG] pgmap v503: 260 pgs: 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:34 vm04 bash[20742]: cluster 2026-03-10T10:21:33.503341+0000 mon.a (mon.0) 2530 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:34 vm04 bash[20742]: audit 2026-03-10T10:21:33.516557+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-55","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:34 vm04 bash[20742]: audit 2026-03-10T10:21:34.395658+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-55","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:34 vm04 bash[20742]: cluster 2026-03-10T10:21:34.399676+0000 mon.a (mon.0) 2533 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in
2026-03-10T10:21:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:34 vm04 bash[20742]: audit 2026-03-10T10:21:34.400769+0000 mon.a (mon.0) 2534 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:21:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:34 vm07 bash[23367]: cluster 2026-03-10T10:21:32.450438+0000 mgr.y (mgr.24422) 314 : cluster [DBG] pgmap v503: 260 pgs: 260 active+clean; 8.3 MiB data, 747 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:21:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:34 vm07 bash[23367]: cluster 2026-03-10T10:21:33.503341+0000 mon.a (mon.0) 2530 : cluster [DBG] osdmap e360: 8 total, 8 up, 8 in
2026-03-10T10:21:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:34 vm07 bash[23367]: audit 2026-03-10T10:21:33.516557+0000 mon.a (mon.0) 2531 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-55","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:34 vm07 bash[23367]: audit 2026-03-10T10:21:34.395658+0000 mon.a (mon.0) 2532 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-55","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:34 vm07 bash[23367]: cluster 2026-03-10T10:21:34.399676+0000 mon.a (mon.0) 2533 : cluster [DBG] osdmap e361: 8 total, 8 up, 8 in
2026-03-10T10:21:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:34 vm07 bash[23367]: audit 2026-03-10T10:21:34.400769+0000 mon.a (mon.0) 2534 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
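
The {"prefix": "osd dump","format":"json"} call audited as 2534 is likely how the test confirms that a preceding pool or tier change has landed: the reply carries the current osdmap epoch (e361 at this point in the log). Reading the epoch the same way via python-rados, conffile path assumed:

    # Sketch: fetch the osdmap as JSON, as audit entry 2534 does, and read
    # its epoch to confirm a change has been applied.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path assumed
    cluster.connect()
    ret, out, outs = cluster.mon_command(
        json.dumps({'prefix': 'osd dump', 'format': 'json'}), b'')
    assert ret == 0, outs
    print('osdmap epoch:', json.loads(out)['epoch'])
    cluster.shutdown()
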
2026-03-10T10:21:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:35 vm04 bash[28289]: cluster 2026-03-10T10:21:34.450771+0000 mgr.y (mgr.24422) 315 : cluster [DBG] pgmap v506: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:21:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:35 vm04 bash[28289]: audit 2026-03-10T10:21:34.600719+0000 mon.a (mon.0) 2535 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:35 vm04 bash[28289]: audit 2026-03-10T10:21:34.600991+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-55"}]: dispatch
2026-03-10T10:21:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:35 vm04 bash[28289]: cluster 2026-03-10T10:21:35.413601+0000 mon.a (mon.0) 2537 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in
2026-03-10T10:21:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:35 vm04 bash[20742]: cluster 2026-03-10T10:21:34.450771+0000 mgr.y (mgr.24422) 315 : cluster [DBG] pgmap v506: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:21:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:35 vm04 bash[20742]: audit 2026-03-10T10:21:34.600719+0000 mon.a (mon.0) 2535 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:35 vm04 bash[20742]: audit 2026-03-10T10:21:34.600991+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-55"}]: dispatch
2026-03-10T10:21:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:35 vm04 bash[20742]: cluster 2026-03-10T10:21:35.413601+0000 mon.a (mon.0) 2537 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in
2026-03-10T10:21:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:35 vm07 bash[23367]: cluster 2026-03-10T10:21:34.450771+0000 mgr.y (mgr.24422) 315 : cluster [DBG] pgmap v506: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:21:36.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:35 vm07 bash[23367]: audit 2026-03-10T10:21:34.600719+0000 mon.a (mon.0) 2535 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:36.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:35 vm07 bash[23367]: audit 2026-03-10T10:21:34.600991+0000 mon.a (mon.0) 2536 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-55"}]: dispatch
2026-03-10T10:21:36.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:35 vm07 bash[23367]: cluster 2026-03-10T10:21:35.413601+0000 mon.a (mon.0) 2537 : cluster [DBG] osdmap e362: 8 total, 8 up, 8 in
2026-03-10T10:21:37.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:37 vm04 bash[28289]: cluster 2026-03-10T10:21:36.414613+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in
2026-03-10T10:21:37.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:37 vm04 bash[28289]: audit 2026-03-10T10:21:36.415910+0000 mon.a (mon.0) 2539 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-57","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:37.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:37 vm04 bash[28289]: cluster 2026-03-10T10:21:36.451123+0000 mgr.y (mgr.24422) 316 : cluster [DBG] pgmap v509: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:21:37.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:37 vm04 bash[28289]: cluster 2026-03-10T10:21:36.464981+0000 mon.a (mon.0) 2540 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:21:37.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:37 vm04 bash[20742]: cluster 2026-03-10T10:21:36.414613+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in
2026-03-10T10:21:37.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:37 vm04 bash[20742]: audit 2026-03-10T10:21:36.415910+0000 mon.a (mon.0) 2539 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-57","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:37.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:37 vm04 bash[20742]: cluster 2026-03-10T10:21:36.451123+0000 mgr.y (mgr.24422) 316 : cluster [DBG] pgmap v509: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:21:37.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:37 vm04 bash[20742]: cluster 2026-03-10T10:21:36.464981+0000 mon.a (mon.0) 2540 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:21:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:37 vm07 bash[23367]: cluster 2026-03-10T10:21:36.414613+0000 mon.a (mon.0) 2538 : cluster [DBG] osdmap e363: 8 total, 8 up, 8 in
2026-03-10T10:21:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:37 vm07 bash[23367]: audit 2026-03-10T10:21:36.415910+0000 mon.a (mon.0) 2539 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-57","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:37 vm07 bash[23367]: audit 2026-03-10T10:21:36.415910+0000 mon.a (mon.0) 2539 : audit [INF] from='client.?
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-57","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:37 vm07 bash[23367]: cluster 2026-03-10T10:21:36.451123+0000 mgr.y (mgr.24422) 316 : cluster [DBG] pgmap v509: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:21:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:37 vm07 bash[23367]: cluster 2026-03-10T10:21:36.451123+0000 mgr.y (mgr.24422) 316 : cluster [DBG] pgmap v509: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:21:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:37 vm07 bash[23367]: cluster 2026-03-10T10:21:36.464981+0000 mon.a (mon.0) 2540 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:37 vm07 bash[23367]: cluster 2026-03-10T10:21:36.464981+0000 mon.a (mon.0) 2540 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:38.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:21:38 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:38 vm04 bash[28289]: audit 2026-03-10T10:21:37.414086+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:38 vm04 bash[28289]: audit 2026-03-10T10:21:37.414086+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:38 vm04 bash[28289]: cluster 2026-03-10T10:21:37.417662+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:38 vm04 bash[28289]: cluster 2026-03-10T10:21:37.417662+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:38 vm04 bash[28289]: audit 2026-03-10T10:21:37.486489+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:38 vm04 bash[28289]: audit 2026-03-10T10:21:37.486489+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:38 vm04 bash[28289]: audit 2026-03-10T10:21:37.486759+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-57"}]: dispatch 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:38 vm04 bash[28289]: audit 2026-03-10T10:21:37.486759+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-57"}]: dispatch 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:38 vm04 bash[20742]: audit 2026-03-10T10:21:37.414086+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:38 vm04 bash[20742]: audit 2026-03-10T10:21:37.414086+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:38 vm04 bash[20742]: cluster 2026-03-10T10:21:37.417662+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:38 vm04 bash[20742]: cluster 2026-03-10T10:21:37.417662+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:38 vm04 bash[20742]: audit 2026-03-10T10:21:37.486489+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:38 vm04 bash[20742]: audit 2026-03-10T10:21:37.486489+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:38 vm04 bash[20742]: audit 2026-03-10T10:21:37.486759+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-57"}]: dispatch 2026-03-10T10:21:39.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:38 vm04 bash[20742]: audit 2026-03-10T10:21:37.486759+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-57"}]: dispatch 2026-03-10T10:21:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:38 vm07 bash[23367]: audit 2026-03-10T10:21:37.414086+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:38 vm07 bash[23367]: audit 2026-03-10T10:21:37.414086+0000 mon.a (mon.0) 2541 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-57","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:38 vm07 bash[23367]: cluster 2026-03-10T10:21:37.417662+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T10:21:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:38 vm07 bash[23367]: cluster 2026-03-10T10:21:37.417662+0000 mon.a (mon.0) 2542 : cluster [DBG] osdmap e364: 8 total, 8 up, 8 in 2026-03-10T10:21:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:38 vm07 bash[23367]: audit 2026-03-10T10:21:37.486489+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:38 vm07 bash[23367]: audit 2026-03-10T10:21:37.486489+0000 mon.a (mon.0) 2543 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:38 vm07 bash[23367]: audit 2026-03-10T10:21:37.486759+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-57"}]: dispatch 2026-03-10T10:21:39.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:38 vm07 bash[23367]: audit 2026-03-10T10:21:37.486759+0000 mon.a (mon.0) 2544 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-57"}]: dispatch 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:40 vm04 bash[28289]: cluster 2026-03-10T10:21:38.451592+0000 mgr.y (mgr.24422) 317 : cluster [DBG] pgmap v511: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 252 B/s rd, 505 B/s wr, 1 op/s 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:40 vm04 bash[28289]: cluster 2026-03-10T10:21:38.451592+0000 mgr.y (mgr.24422) 317 : cluster [DBG] pgmap v511: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 252 B/s rd, 505 B/s wr, 1 op/s 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:40 vm04 bash[28289]: audit 2026-03-10T10:21:38.473073+0000 mgr.y (mgr.24422) 318 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:40 vm04 bash[28289]: audit 2026-03-10T10:21:38.473073+0000 mgr.y (mgr.24422) 318 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:40 vm04 bash[28289]: cluster 2026-03-10T10:21:38.700753+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:40 vm04 bash[28289]: cluster 2026-03-10T10:21:38.700753+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:40 vm04 bash[28289]: cluster 2026-03-10T10:21:39.649792+0000 mon.a (mon.0) 2546 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:40 vm04 bash[28289]: cluster 2026-03-10T10:21:39.649792+0000 mon.a (mon.0) 2546 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:40 vm04 bash[28289]: audit 2026-03-10T10:21:39.664224+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:40 vm04 bash[28289]: audit 2026-03-10T10:21:39.664224+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:40 vm04 bash[20742]: cluster 2026-03-10T10:21:38.451592+0000 mgr.y (mgr.24422) 317 : cluster [DBG] pgmap v511: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 252 B/s rd, 505 B/s wr, 1 op/s 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:40 vm04 bash[20742]: cluster 2026-03-10T10:21:38.451592+0000 mgr.y (mgr.24422) 317 : cluster [DBG] pgmap v511: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 252 B/s rd, 505 B/s wr, 1 op/s 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:40 vm04 bash[20742]: audit 2026-03-10T10:21:38.473073+0000 mgr.y (mgr.24422) 318 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:40 vm04 bash[20742]: audit 2026-03-10T10:21:38.473073+0000 mgr.y (mgr.24422) 318 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:40 vm04 bash[20742]: cluster 2026-03-10T10:21:38.700753+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:40 vm04 bash[20742]: cluster 2026-03-10T10:21:38.700753+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:40 vm04 bash[20742]: cluster 2026-03-10T10:21:39.649792+0000 mon.a (mon.0) 2546 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:40 vm04 bash[20742]: cluster 2026-03-10T10:21:39.649792+0000 mon.a (mon.0) 2546 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:40 vm04 bash[20742]: audit 2026-03-10T10:21:39.664224+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:40 vm04 bash[20742]: audit 2026-03-10T10:21:39.664224+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:40 vm07 bash[23367]: cluster 2026-03-10T10:21:38.451592+0000 mgr.y (mgr.24422) 317 : cluster [DBG] pgmap v511: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 252 B/s rd, 505 B/s wr, 1 op/s 2026-03-10T10:21:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:40 vm07 bash[23367]: cluster 2026-03-10T10:21:38.451592+0000 mgr.y (mgr.24422) 317 : cluster [DBG] pgmap v511: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 752 MiB used, 159 GiB / 160 GiB avail; 252 B/s rd, 505 B/s wr, 1 op/s 2026-03-10T10:21:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:40 vm07 bash[23367]: audit 2026-03-10T10:21:38.473073+0000 mgr.y (mgr.24422) 318 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:40 vm07 bash[23367]: audit 2026-03-10T10:21:38.473073+0000 mgr.y (mgr.24422) 318 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:21:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:40 vm07 bash[23367]: cluster 2026-03-10T10:21:38.700753+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T10:21:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:40 vm07 bash[23367]: cluster 2026-03-10T10:21:38.700753+0000 mon.a (mon.0) 2545 : cluster [DBG] osdmap e365: 8 total, 8 up, 8 in 2026-03-10T10:21:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:40 vm07 bash[23367]: cluster 2026-03-10T10:21:39.649792+0000 mon.a (mon.0) 2546 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T10:21:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:40 vm07 bash[23367]: cluster 2026-03-10T10:21:39.649792+0000 mon.a (mon.0) 2546 : cluster [DBG] osdmap e366: 8 total, 8 up, 8 in 2026-03-10T10:21:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:40 vm07 bash[23367]: audit 2026-03-10T10:21:39.664224+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:40 vm07 bash[23367]: audit 2026-03-10T10:21:39.664224+0000 mon.a (mon.0) 2547 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: cluster 2026-03-10T10:21:40.451898+0000 mgr.y (mgr.24422) 319 : cluster [DBG] pgmap v514: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: cluster 2026-03-10T10:21:40.451898+0000 mgr.y (mgr.24422) 319 : cluster [DBG] pgmap v514: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: audit 2026-03-10T10:21:40.655820+0000 mon.a (mon.0) 2548 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: audit 2026-03-10T10:21:40.655820+0000 mon.a (mon.0) 2548 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: cluster 2026-03-10T10:21:40.663348+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: cluster 2026-03-10T10:21:40.663348+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: audit 2026-03-10T10:21:40.667089+0000 mon.a (mon.0) 2550 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: audit 2026-03-10T10:21:40.667089+0000 mon.a (mon.0) 2550 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: audit 2026-03-10T10:21:40.728167+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: audit 2026-03-10T10:21:40.728167+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: audit 2026-03-10T10:21:40.728422+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-59"}]: dispatch 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: audit 2026-03-10T10:21:40.728422+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-59"}]: dispatch 2026-03-10T10:21:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: cluster 2026-03-10T10:21:41.465536+0000 mon.a (mon.0) 2553 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:42.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:41 vm07 bash[23367]: cluster 2026-03-10T10:21:41.465536+0000 mon.a (mon.0) 2553 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: cluster 2026-03-10T10:21:40.451898+0000 mgr.y (mgr.24422) 319 : cluster [DBG] pgmap v514: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: cluster 2026-03-10T10:21:40.451898+0000 mgr.y (mgr.24422) 319 : cluster [DBG] pgmap v514: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: audit 2026-03-10T10:21:40.655820+0000 mon.a (mon.0) 2548 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: audit 2026-03-10T10:21:40.655820+0000 mon.a (mon.0) 2548 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: cluster 2026-03-10T10:21:40.663348+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: cluster 2026-03-10T10:21:40.663348+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: audit 2026-03-10T10:21:40.667089+0000 mon.a (mon.0) 2550 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: audit 2026-03-10T10:21:40.667089+0000 mon.a (mon.0) 2550 : audit [DBG] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: audit 2026-03-10T10:21:40.728167+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: audit 2026-03-10T10:21:40.728167+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: audit 2026-03-10T10:21:40.728422+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-59"}]: dispatch 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: audit 2026-03-10T10:21:40.728422+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-59"}]: dispatch 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: cluster 2026-03-10T10:21:41.465536+0000 mon.a (mon.0) 2553 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:41 vm04 bash[28289]: cluster 2026-03-10T10:21:41.465536+0000 mon.a (mon.0) 2553 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: cluster 2026-03-10T10:21:40.451898+0000 mgr.y (mgr.24422) 319 : cluster [DBG] pgmap v514: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: cluster 2026-03-10T10:21:40.451898+0000 mgr.y (mgr.24422) 319 : cluster [DBG] pgmap v514: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.7 KiB/s wr, 3 op/s 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: audit 2026-03-10T10:21:40.655820+0000 mon.a (mon.0) 2548 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: audit 2026-03-10T10:21:40.655820+0000 mon.a (mon.0) 2548 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-59","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: cluster 2026-03-10T10:21:40.663348+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: cluster 2026-03-10T10:21:40.663348+0000 mon.a (mon.0) 2549 : cluster [DBG] osdmap e367: 8 total, 8 up, 8 in 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: audit 2026-03-10T10:21:40.667089+0000 mon.a (mon.0) 2550 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: audit 2026-03-10T10:21:40.667089+0000 mon.a (mon.0) 2550 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: audit 2026-03-10T10:21:40.728167+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: audit 2026-03-10T10:21:40.728167+0000 mon.a (mon.0) 2551 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: audit 2026-03-10T10:21:40.728422+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-59"}]: dispatch 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: audit 2026-03-10T10:21:40.728422+0000 mon.a (mon.0) 2552 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-59"}]: dispatch 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: cluster 2026-03-10T10:21:41.465536+0000 mon.a (mon.0) 2553 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:42.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:41 vm04 bash[20742]: cluster 2026-03-10T10:21:41.465536+0000 mon.a (mon.0) 2553 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:21:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:42 vm04 bash[28289]: cluster 2026-03-10T10:21:41.725600+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T10:21:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:42 vm04 bash[28289]: cluster 2026-03-10T10:21:41.725600+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T10:21:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:42 vm04 bash[20742]: cluster 2026-03-10T10:21:41.725600+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T10:21:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:42 vm04 bash[20742]: cluster 2026-03-10T10:21:41.725600+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T10:21:43.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:21:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:21:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:21:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:42 vm07 bash[23367]: cluster 2026-03-10T10:21:41.725600+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T10:21:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:42 vm07 bash[23367]: cluster 2026-03-10T10:21:41.725600+0000 mon.a (mon.0) 2554 : cluster [DBG] osdmap e368: 8 total, 8 up, 8 in 2026-03-10T10:21:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:44 vm07 bash[23367]: cluster 2026-03-10T10:21:42.452165+0000 mgr.y (mgr.24422) 320 : cluster [DBG] pgmap v517: 260 pgs: 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T10:21:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:44 vm07 bash[23367]: cluster 2026-03-10T10:21:42.452165+0000 mgr.y (mgr.24422) 320 : cluster [DBG] pgmap v517: 260 pgs: 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T10:21:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:44 vm07 bash[23367]: cluster 2026-03-10T10:21:42.840855+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-10T10:21:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:44 vm07 bash[23367]: cluster 2026-03-10T10:21:42.840855+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-10T10:21:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:44 vm07 bash[23367]: audit 2026-03-10T10:21:42.842486+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:44 vm07 bash[23367]: audit 2026-03-10T10:21:42.842486+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:44 vm07 bash[23367]: audit 2026-03-10T10:21:42.938592+0000 mon.a (mon.0) 2557 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:44 vm07 bash[23367]: audit 2026-03-10T10:21:42.938592+0000 mon.a (mon.0) 2557 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:44 vm07 bash[23367]: audit 2026-03-10T10:21:42.939518+0000 mon.a (mon.0) 2558 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:44 vm07 bash[23367]: audit 2026-03-10T10:21:42.939518+0000 mon.a (mon.0) 2558 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:44 vm04 bash[28289]: cluster 2026-03-10T10:21:42.452165+0000 mgr.y (mgr.24422) 320 : cluster [DBG] pgmap v517: 260 pgs: 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:44 vm04 bash[28289]: cluster 2026-03-10T10:21:42.452165+0000 mgr.y (mgr.24422) 320 : cluster [DBG] pgmap v517: 260 pgs: 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:44 vm04 bash[28289]: cluster 2026-03-10T10:21:42.840855+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:44 vm04 bash[28289]: cluster 2026-03-10T10:21:42.840855+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:44 vm04 bash[28289]: audit 2026-03-10T10:21:42.842486+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:44 vm04 bash[28289]: audit 2026-03-10T10:21:42.842486+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:44 vm04 bash[28289]: audit 2026-03-10T10:21:42.938592+0000 mon.a (mon.0) 2557 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:44 vm04 bash[28289]: audit 2026-03-10T10:21:42.938592+0000 mon.a (mon.0) 2557 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:44 vm04 bash[28289]: audit 2026-03-10T10:21:42.939518+0000 mon.a (mon.0) 2558 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:44 vm04 bash[28289]: audit 2026-03-10T10:21:42.939518+0000 mon.a (mon.0) 2558 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:44 vm04 bash[20742]: cluster 2026-03-10T10:21:42.452165+0000 mgr.y (mgr.24422) 320 : cluster [DBG] pgmap v517: 260 pgs: 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:44 vm04 bash[20742]: cluster 2026-03-10T10:21:42.452165+0000 mgr.y (mgr.24422) 320 : cluster [DBG] pgmap v517: 260 pgs: 260 active+clean; 8.3 MiB data, 753 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.2 KiB/s wr, 1 op/s 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:44 vm04 bash[20742]: cluster 2026-03-10T10:21:42.840855+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:44 vm04 bash[20742]: cluster 2026-03-10T10:21:42.840855+0000 mon.a (mon.0) 2555 : cluster [DBG] osdmap e369: 8 total, 8 up, 8 in 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:44 vm04 bash[20742]: audit 2026-03-10T10:21:42.842486+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:44 vm04 bash[20742]: audit 2026-03-10T10:21:42.842486+0000 mon.a (mon.0) 2556 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:44 vm04 bash[20742]: audit 2026-03-10T10:21:42.938592+0000 mon.a (mon.0) 2557 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:44 vm04 bash[20742]: audit 2026-03-10T10:21:42.938592+0000 mon.a (mon.0) 2557 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:44 vm04 bash[20742]: audit 2026-03-10T10:21:42.939518+0000 mon.a (mon.0) 2558 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:44 vm04 bash[20742]: audit 2026-03-10T10:21:42.939518+0000 mon.a (mon.0) 2558 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: audit 2026-03-10T10:21:43.984695+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: audit 2026-03-10T10:21:43.984695+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: cluster 2026-03-10T10:21:43.987742+0000 mon.a (mon.0) 2560 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: cluster 2026-03-10T10:21:43.987742+0000 mon.a (mon.0) 2560 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: audit 2026-03-10T10:21:43.991471+0000 mon.a (mon.0) 2561 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: audit 2026-03-10T10:21:43.991471+0000 mon.a (mon.0) 2561 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: audit 2026-03-10T10:21:43.993606+0000 mon.a (mon.0) 2562 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: audit 2026-03-10T10:21:43.993606+0000 mon.a (mon.0) 2562 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: audit 2026-03-10T10:21:44.988158+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: audit 2026-03-10T10:21:44.988158+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: cluster 2026-03-10T10:21:44.991293+0000 mon.a (mon.0) 2564 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:45 vm04 bash[28289]: cluster 2026-03-10T10:21:44.991293+0000 mon.a (mon.0) 2564 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:45 vm04 bash[20742]: audit 2026-03-10T10:21:43.984695+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:45 vm04 bash[20742]: audit 2026-03-10T10:21:43.984695+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:45 vm04 bash[20742]: cluster 2026-03-10T10:21:43.987742+0000 mon.a (mon.0) 2560 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:45 vm04 bash[20742]: cluster 2026-03-10T10:21:43.987742+0000 mon.a (mon.0) 2560 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:45 vm04 bash[20742]: audit 2026-03-10T10:21:43.991471+0000 mon.a (mon.0) 2561 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:45 vm04 bash[20742]: audit 2026-03-10T10:21:43.991471+0000 mon.a (mon.0) 2561 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:45 vm04 bash[20742]: audit 2026-03-10T10:21:43.993606+0000 mon.a (mon.0) 2562 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:45 vm04 bash[20742]: audit 2026-03-10T10:21:44.988158+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:21:45.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:45 vm04 bash[20742]: cluster 2026-03-10T10:21:44.991293+0000 mon.a (mon.0) 2564 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T10:21:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:45 vm07 bash[23367]: audit 2026-03-10T10:21:43.984695+0000 mon.a (mon.0) 2559 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-61","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:45 vm07 bash[23367]: cluster 2026-03-10T10:21:43.987742+0000 mon.a (mon.0) 2560 : cluster [DBG] osdmap e370: 8 total, 8 up, 8 in
2026-03-10T10:21:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:45 vm07 bash[23367]: audit 2026-03-10T10:21:43.991471+0000 mon.a (mon.0) 2561 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:21:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:45 vm07 bash[23367]: audit 2026-03-10T10:21:43.993606+0000 mon.a (mon.0) 2562 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:21:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:45 vm07 bash[23367]: audit 2026-03-10T10:21:44.988158+0000 mon.a (mon.0) 2563 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:21:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:45 vm07 bash[23367]: cluster 2026-03-10T10:21:44.991293+0000 mon.a (mon.0) 2564 : cluster [DBG] osdmap e371: 8 total, 8 up, 8 in
2026-03-10T10:21:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:46 vm04 bash[28289]: cluster 2026-03-10T10:21:44.452478+0000 mgr.y (mgr.24422) 321 : cluster [DBG] pgmap v520: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 771 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:21:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:46 vm04 bash[28289]: audit 2026-03-10T10:21:45.115808+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:46 vm04 bash[28289]: audit 2026-03-10T10:21:45.116104+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-61"}]: dispatch
2026-03-10T10:21:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:46 vm04 bash[20742]: cluster 2026-03-10T10:21:44.452478+0000 mgr.y (mgr.24422) 321 : cluster [DBG] pgmap v520: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 771 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:21:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:46 vm04 bash[20742]: audit 2026-03-10T10:21:45.115808+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:46 vm04 bash[20742]: audit 2026-03-10T10:21:45.116104+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-61"}]: dispatch
2026-03-10T10:21:47.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:46 vm07 bash[23367]: cluster 2026-03-10T10:21:44.452478+0000 mgr.y (mgr.24422) 321 : cluster [DBG] pgmap v520: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 771 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:21:47.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:46 vm07 bash[23367]: audit 2026-03-10T10:21:45.115808+0000 mon.a (mon.0) 2565 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:21:47.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:46 vm07 bash[23367]: audit 2026-03-10T10:21:45.116104+0000 mon.a (mon.0) 2566 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-61"}]: dispatch
2026-03-10T10:21:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:47 vm04 bash[28289]: cluster 2026-03-10T10:21:46.345070+0000 mon.a (mon.0) 2567 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-10T10:21:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:47 vm04 bash[28289]: cluster 2026-03-10T10:21:46.452863+0000 mgr.y (mgr.24422) 322 : cluster [DBG] pgmap v523: 260 pgs: 260 active+clean; 8.3 MiB data, 771 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:21:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:47 vm04 bash[28289]: cluster 2026-03-10T10:21:46.466186+0000 mon.a (mon.0) 2568 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:21:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:47 vm04 bash[28289]: cluster 2026-03-10T10:21:47.296030+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-10T10:21:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:47 vm04 bash[28289]: audit 2026-03-10T10:21:47.298498+0000 mon.a (mon.0) 2570 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-63","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:47 vm04 bash[20742]: cluster 2026-03-10T10:21:46.345070+0000 mon.a (mon.0) 2567 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-10T10:21:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:47 vm04 bash[20742]: cluster 2026-03-10T10:21:46.452863+0000 mgr.y (mgr.24422) 322 : cluster [DBG] pgmap v523: 260 pgs: 260 active+clean; 8.3 MiB data, 771 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:21:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:47 vm04 bash[20742]: cluster 2026-03-10T10:21:46.466186+0000 mon.a (mon.0) 2568 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:21:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:47 vm04 bash[20742]: cluster 2026-03-10T10:21:47.296030+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-10T10:21:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:47 vm04 bash[20742]: audit 2026-03-10T10:21:47.298498+0000 mon.a (mon.0) 2570 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-63","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:47 vm07 bash[23367]: cluster 2026-03-10T10:21:46.345070+0000 mon.a (mon.0) 2567 : cluster [DBG] osdmap e372: 8 total, 8 up, 8 in
2026-03-10T10:21:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:47 vm07 bash[23367]: cluster 2026-03-10T10:21:46.452863+0000 mgr.y (mgr.24422) 322 : cluster [DBG] pgmap v523: 260 pgs: 260 active+clean; 8.3 MiB data, 771 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:21:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:47 vm07 bash[23367]: cluster 2026-03-10T10:21:46.466186+0000 mon.a (mon.0) 2568 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:21:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:47 vm07 bash[23367]: cluster 2026-03-10T10:21:47.296030+0000 mon.a (mon.0) 2569 : cluster [DBG] osdmap e373: 8 total, 8 up, 8 in
2026-03-10T10:21:48.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:47 vm07 bash[23367]: audit 2026-03-10T10:21:47.298498+0000 mon.a (mon.0) 2570 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-63","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:21:48.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:21:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:21:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:49 vm04 bash[28289]: audit 2026-03-10T10:21:48.297693+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-63","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:49 vm04 bash[28289]: cluster 2026-03-10T10:21:48.301341+0000 mon.a (mon.0) 2572 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-10T10:21:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:49 vm04 bash[28289]: audit 2026-03-10T10:21:48.302621+0000 mon.a (mon.0) 2573 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:21:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:49 vm04 bash[28289]: audit 2026-03-10T10:21:48.304732+0000 mon.a (mon.0) 2574 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:21:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:49 vm04 bash[20742]: audit 2026-03-10T10:21:48.297693+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-63","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:49 vm04 bash[20742]: cluster 2026-03-10T10:21:48.301341+0000 mon.a (mon.0) 2572 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-10T10:21:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:49 vm04 bash[20742]: audit 2026-03-10T10:21:48.302621+0000 mon.a (mon.0) 2573 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:21:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:49 vm04 bash[20742]: audit 2026-03-10T10:21:48.304732+0000 mon.a (mon.0) 2574 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:21:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:49 vm07 bash[23367]: audit 2026-03-10T10:21:48.297693+0000 mon.a (mon.0) 2571 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-63","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:21:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:49 vm07 bash[23367]: cluster 2026-03-10T10:21:48.301341+0000 mon.a (mon.0) 2572 : cluster [DBG] osdmap e374: 8 total, 8 up, 8 in
2026-03-10T10:21:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:49 vm07 bash[23367]: audit 2026-03-10T10:21:48.302621+0000 mon.a (mon.0) 2573 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:21:49.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:49 vm07 bash[23367]: audit 2026-03-10T10:21:48.304732+0000 mon.a (mon.0) 2574 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:21:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:50 vm04 bash[28289]: cluster 2026-03-10T10:21:48.453453+0000 mgr.y (mgr.24422) 323 : cluster [DBG] pgmap v526: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 807 MiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s
2026-03-10T10:21:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:50 vm04 bash[28289]: audit 2026-03-10T10:21:48.483704+0000 mgr.y (mgr.24422) 324 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:50 vm04 bash[28289]: audit 2026-03-10T10:21:49.301255+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:21:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:50 vm04 bash[28289]: cluster 2026-03-10T10:21:49.304365+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-10T10:21:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:50 vm04 bash[20742]: cluster 2026-03-10T10:21:48.453453+0000 mgr.y (mgr.24422) 323 : cluster [DBG] pgmap v526: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 807 MiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s
2026-03-10T10:21:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:50 vm04 bash[20742]: audit 2026-03-10T10:21:48.483704+0000 mgr.y (mgr.24422) 324 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:50 vm04 bash[20742]: audit 2026-03-10T10:21:49.301255+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:21:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:50 vm04 bash[20742]: cluster 2026-03-10T10:21:49.304365+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-10T10:21:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:50 vm07 bash[23367]: cluster 2026-03-10T10:21:48.453453+0000 mgr.y (mgr.24422) 323 : cluster [DBG] pgmap v526: 292 pgs: 18 unknown, 274 active+clean; 8.3 MiB data, 807 MiB used, 159 GiB / 160 GiB avail; 511 B/s wr, 0 op/s
2026-03-10T10:21:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:50 vm07 bash[23367]: audit 2026-03-10T10:21:48.483704+0000 mgr.y (mgr.24422) 324 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:21:50.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:50 vm07 bash[23367]: audit 2026-03-10T10:21:49.301255+0000 mon.a (mon.0) 2575 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:21:50.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:50 vm07 bash[23367]: cluster 2026-03-10T10:21:49.304365+0000 mon.a (mon.0) 2576 : cluster [DBG] osdmap e375: 8 total, 8 up, 8 in
2026-03-10T10:21:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:51 vm04 bash[28289]: cluster 2026-03-10T10:21:50.323347+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-10T10:21:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:51 vm04 bash[28289]: cluster 2026-03-10T10:21:50.453750+0000 mgr.y (mgr.24422) 325 : cluster [DBG] pgmap v529: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:21:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:51 vm04 bash[20742]: cluster 2026-03-10T10:21:50.323347+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-10T10:21:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:51 vm04 bash[20742]: cluster 2026-03-10T10:21:50.453750+0000 mgr.y (mgr.24422) 325 : cluster [DBG] pgmap v529: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:21:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:51 vm07 bash[23367]: cluster 2026-03-10T10:21:50.323347+0000 mon.a (mon.0) 2577 : cluster [DBG] osdmap e376: 8 total, 8 up, 8 in
2026-03-10T10:21:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:51 vm07 bash[23367]: cluster 2026-03-10T10:21:50.453750+0000 mgr.y (mgr.24422) 325 : cluster [DBG] pgmap v529: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:21:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:52 vm04 bash[28289]: cluster 2026-03-10T10:21:51.345129+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-10T10:21:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:52 vm04 bash[20742]: cluster 2026-03-10T10:21:51.345129+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-10T10:21:52.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:52 vm07 bash[23367]: cluster 2026-03-10T10:21:51.345129+0000 mon.a (mon.0) 2578 : cluster [DBG] osdmap e377: 8 total, 8 up, 8 in
2026-03-10T10:21:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:21:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:21:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:21:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:53 vm07 bash[23367]: cluster 2026-03-10T10:21:52.451000+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T10:21:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:53 vm07 bash[23367]: cluster 2026-03-10T10:21:52.454061+0000 mgr.y (mgr.24422) 326 : cluster [DBG] pgmap v532: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:21:53.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:53 vm04 bash[28289]: cluster 2026-03-10T10:21:52.451000+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T10:21:53.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:53 vm04 bash[28289]: cluster 2026-03-10T10:21:52.454061+0000 mgr.y (mgr.24422) 326 : cluster [DBG] pgmap v532: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:21:53.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:53 vm04 bash[20742]: cluster 2026-03-10T10:21:52.451000+0000 mon.a (mon.0) 2579 : cluster [DBG] osdmap e378: 8 total, 8 up, 8 in
2026-03-10T10:21:53.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:53 vm04 bash[20742]: cluster 2026-03-10T10:21:52.454061+0000 mgr.y (mgr.24422) 326 : cluster [DBG] pgmap v532: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:21:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:54 vm07 bash[23367]: cluster 2026-03-10T10:21:53.477518+0000 mon.a (mon.0) 2580 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T10:21:54.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:54 vm04 bash[28289]: cluster 2026-03-10T10:21:53.477518+0000 mon.a (mon.0) 2580 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T10:21:54.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:54 vm04 bash[20742]: cluster 2026-03-10T10:21:53.477518+0000 mon.a (mon.0) 2580 : cluster [DBG] osdmap e379: 8 total, 8 up, 8 in
2026-03-10T10:21:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:55 vm07 bash[23367]: cluster 2026-03-10T10:21:54.454962+0000 mgr.y (mgr.24422) 327 : cluster [DBG] pgmap v534: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.7 KiB/s wr, 7 op/s
2026-03-10T10:21:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:55 vm04 bash[28289]: cluster 2026-03-10T10:21:54.454962+0000 mgr.y (mgr.24422) 327 : cluster [DBG] pgmap v534: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.7 KiB/s wr, 7 op/s
2026-03-10T10:21:55.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:55 vm04 bash[20742]: cluster 2026-03-10T10:21:54.454962+0000 mgr.y (mgr.24422) 327 : cluster [DBG] pgmap v534: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 2.7 KiB/s wr, 7 op/s
2026-03-10T10:21:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:57 vm04 bash[28289]: cluster 2026-03-10T10:21:56.455288+0000 mgr.y (mgr.24422) 328 : cluster [DBG] pgmap v535: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.8 KiB/s wr, 4 op/s
2026-03-10T10:21:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:57 vm04 bash[20742]: cluster 2026-03-10T10:21:56.455288+0000 mgr.y (mgr.24422) 328 : cluster [DBG] pgmap v535: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.8 KiB/s wr, 4 op/s
2026-03-10T10:21:58.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:57 vm07 bash[23367]: cluster 2026-03-10T10:21:56.455288+0000 mgr.y (mgr.24422) 328 : cluster [DBG] pgmap v535: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.8 KiB/s wr, 4 op/s
2026-03-10T10:21:58.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:21:58 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:21:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:58 vm04 bash[28289]: audit 2026-03-10T10:21:57.945222+0000 mon.a (mon.0) 2581 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:21:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:58 vm04 bash[20742]: audit 2026-03-10T10:21:57.945222+0000 mon.a (mon.0) 2581 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:21:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:58 vm07 bash[23367]: audit 2026-03-10T10:21:57.945222+0000 mon.a (mon.0) 2581 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:22:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:59 vm04 bash[28289]: cluster 2026-03-10T10:21:58.456254+0000 mgr.y (mgr.24422) 329 : cluster [DBG] pgmap v536: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T10:22:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:21:59 vm04 bash[28289]: audit 2026-03-10T10:21:58.494487+0000 mgr.y (mgr.24422) 330 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:59 vm04 bash[20742]: cluster 2026-03-10T10:21:58.456254+0000 mgr.y (mgr.24422) 329 : cluster [DBG] pgmap v536: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T10:22:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:21:59 vm04 bash[20742]: audit 2026-03-10T10:21:58.494487+0000 mgr.y (mgr.24422) 330 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:59 vm07 bash[23367]: cluster 2026-03-10T10:21:58.456254+0000 mgr.y (mgr.24422) 329 : cluster [DBG] pgmap v536: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T10:22:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:21:59 vm07 bash[23367]: audit 2026-03-10T10:21:58.494487+0000 mgr.y (mgr.24422) 330 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:01 vm04 bash[28289]: cluster 2026-03-10T10:22:00.457011+0000 mgr.y (mgr.24422) 331 : cluster [DBG] pgmap v537: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.4 KiB/s wr, 4 op/s
2026-03-10T10:22:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:01 vm04 bash[28289]: cluster 2026-03-10T10:22:01.476615+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T10:22:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:01 vm04 bash[20742]: cluster 2026-03-10T10:22:00.457011+0000 mgr.y (mgr.24422) 331 : cluster [DBG] pgmap v537: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.4 KiB/s wr, 4 op/s
2026-03-10T10:22:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:01 vm04 bash[20742]: cluster 2026-03-10T10:22:01.476615+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T10:22:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:01 vm07 bash[23367]: cluster 2026-03-10T10:22:00.457011+0000 mgr.y (mgr.24422) 331 : cluster [DBG] pgmap v537: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 2.1 KiB/s rd, 1.4 KiB/s wr, 4 op/s
2026-03-10T10:22:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:01 vm07 bash[23367]: cluster 2026-03-10T10:22:01.476615+0000 mon.a (mon.0) 2582 : cluster [DBG] osdmap e380: 8 total, 8 up, 8 in
2026-03-10T10:22:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:22:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:22:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:22:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:03 vm04 bash[28289]: cluster 2026-03-10T10:22:02.457434+0000 mgr.y (mgr.24422) 332 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 227 B/s wr, 1 op/s
2026-03-10T10:22:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:03 vm04 bash[20742]: cluster 2026-03-10T10:22:02.457434+0000 mgr.y (mgr.24422) 332 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 227 B/s wr, 1 op/s
2026-03-10T10:22:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:03 vm07 bash[23367]: cluster 2026-03-10T10:22:02.457434+0000 mgr.y (mgr.24422) 332 : cluster [DBG] pgmap v539: 292 pgs: 292 active+clean; 8.3 MiB data, 791 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 227 B/s wr, 1 op/s
2026-03-10T10:22:05.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:04 vm07 bash[23367]: cluster 2026-03-10T10:22:03.960218+0000 mon.a (mon.0) 2583 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T10:22:05.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:04 vm04 bash[28289]: cluster 2026-03-10T10:22:03.960218+0000 mon.a (mon.0) 2583 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T10:22:05.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:04 vm04 bash[20742]: cluster 2026-03-10T10:22:03.960218+0000 mon.a (mon.0) 2583 : cluster [DBG] osdmap e381: 8 total, 8 up, 8 in
2026-03-10T10:22:06.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:05 vm07 bash[23367]: cluster 2026-03-10T10:22:04.457733+0000 mgr.y (mgr.24422) 333 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 809 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:05 vm04 bash[28289]: cluster 2026-03-10T10:22:04.457733+0000 mgr.y (mgr.24422) 333 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 809 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:05 vm04 bash[20742]: cluster 2026-03-10T10:22:04.457733+0000 mgr.y (mgr.24422) 333 : cluster [DBG] pgmap v541: 292 pgs: 292 active+clean; 8.3 MiB data, 809 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:07.973 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:07 vm04 bash[20742]: cluster 2026-03-10T10:22:06.458065+0000 mgr.y (mgr.24422) 334 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 809 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T10:22:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:07 vm07 bash[23367]: cluster 2026-03-10T10:22:06.458065+0000 mgr.y (mgr.24422) 334 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 809 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T10:22:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:07 vm04 bash[28289]: cluster 2026-03-10T10:22:06.458065+0000 mgr.y (mgr.24422) 334 : cluster [DBG] pgmap v542: 292 pgs: 292 active+clean; 8.3 MiB data, 809 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T10:22:08.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:22:08 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:22:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:10 vm04 bash[20742]: cluster 2026-03-10T10:22:08.458605+0000 mgr.y (mgr.24422) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 809 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:10 vm04 bash[20742]: audit 2026-03-10T10:22:08.504421+0000 mgr.y (mgr.24422) 336 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:10 vm04 bash[28289]: cluster 2026-03-10T10:22:08.458605+0000 mgr.y (mgr.24422) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 809 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:10 vm04 bash[28289]: audit 2026-03-10T10:22:08.504421+0000 mgr.y (mgr.24422) 336 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:10 vm07 bash[23367]: cluster 2026-03-10T10:22:08.458605+0000 mgr.y (mgr.24422) 335 : cluster [DBG] pgmap v543: 292 pgs: 292 active+clean; 8.3 MiB data, 809 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:10 vm07 bash[23367]: audit 2026-03-10T10:22:08.504421+0000 mgr.y (mgr.24422) 336 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:12 vm04 bash[28289]: cluster 2026-03-10T10:22:10.459269+0000 mgr.y (mgr.24422) 337 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:12 vm04 bash[28289]: cluster 2026-03-10T10:22:11.477353+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T10:22:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:12 vm04 bash[20742]: cluster 2026-03-10T10:22:10.459269+0000 mgr.y (mgr.24422) 337 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:12 vm04 bash[20742]: cluster 2026-03-10T10:22:11.477353+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T10:22:12.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:12 vm07 bash[23367]: cluster 2026-03-10T10:22:10.459269+0000 mgr.y (mgr.24422) 337 : cluster [DBG] pgmap v544: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:12.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:12 vm07 bash[23367]: cluster 2026-03-10T10:22:11.477353+0000 mon.a (mon.0) 2584 : cluster [DBG] osdmap e382: 8 total, 8 up, 8 in
2026-03-10T10:22:13.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:13 vm04 bash[28289]: audit 2026-03-10T10:22:12.951550+0000 mon.a (mon.0) 2585 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:22:13.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:13 vm04 bash[20742]: audit 2026-03-10T10:22:12.951550+0000 mon.a (mon.0) 2585 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:22:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:22:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:22:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:22:13.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:13 vm07 bash[23367]: audit 2026-03-10T10:22:12.951550+0000 mon.a (mon.0) 2585 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:22:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:14 vm04 bash[28289]: cluster 2026-03-10T10:22:12.459615+0000 mgr.y (mgr.24422) 338 : cluster [DBG] pgmap v546: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 843 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:14 vm04 bash[20742]: cluster 2026-03-10T10:22:12.459615+0000 mgr.y (mgr.24422) 338 : cluster [DBG] pgmap v546: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 843 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:14 vm07 bash[23367]: cluster 2026-03-10T10:22:12.459615+0000 mgr.y (mgr.24422) 338 : cluster [DBG] pgmap v546: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 843 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:22:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:15 vm04 bash[28289]: cluster 2026-03-10T10:22:14.167404+0000 mon.a (mon.0) 2586 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in
2026-03-10T10:22:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:15 vm04 bash[20742]: cluster 2026-03-10T10:22:14.167404+0000 mon.a (mon.0) 2586 :
cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-10T10:22:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:15 vm04 bash[20742]: cluster 2026-03-10T10:22:14.167404+0000 mon.a (mon.0) 2586 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-10T10:22:15.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:15 vm07 bash[23367]: cluster 2026-03-10T10:22:14.167404+0000 mon.a (mon.0) 2586 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-10T10:22:15.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:15 vm07 bash[23367]: cluster 2026-03-10T10:22:14.167404+0000 mon.a (mon.0) 2586 : cluster [DBG] osdmap e383: 8 total, 8 up, 8 in 2026-03-10T10:22:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:16 vm04 bash[28289]: cluster 2026-03-10T10:22:14.460028+0000 mgr.y (mgr.24422) 339 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:22:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:16 vm04 bash[28289]: cluster 2026-03-10T10:22:14.460028+0000 mgr.y (mgr.24422) 339 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:22:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:16 vm04 bash[20742]: cluster 2026-03-10T10:22:14.460028+0000 mgr.y (mgr.24422) 339 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:22:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:16 vm04 bash[20742]: cluster 2026-03-10T10:22:14.460028+0000 mgr.y (mgr.24422) 339 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:22:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:16 vm07 bash[23367]: cluster 2026-03-10T10:22:14.460028+0000 mgr.y (mgr.24422) 339 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:22:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:16 vm07 bash[23367]: cluster 2026-03-10T10:22:14.460028+0000 mgr.y (mgr.24422) 339 : cluster [DBG] pgmap v548: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:22:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:18 vm04 bash[28289]: cluster 2026-03-10T10:22:16.460359+0000 mgr.y (mgr.24422) 340 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:18 vm04 bash[28289]: cluster 2026-03-10T10:22:16.460359+0000 mgr.y (mgr.24422) 340 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:18 vm04 bash[20742]: cluster 2026-03-10T10:22:16.460359+0000 mgr.y (mgr.24422) 340 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:18 vm04 bash[20742]: cluster 2026-03-10T10:22:16.460359+0000 mgr.y (mgr.24422) 340 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB 
used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:18.506 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:18 vm07 bash[23367]: cluster 2026-03-10T10:22:16.460359+0000 mgr.y (mgr.24422) 340 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:18.506 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:18 vm07 bash[23367]: cluster 2026-03-10T10:22:16.460359+0000 mgr.y (mgr.24422) 340 : cluster [DBG] pgmap v549: 292 pgs: 292 active+clean; 8.3 MiB data, 827 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:18.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:22:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:22:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:20 vm04 bash[28289]: cluster 2026-03-10T10:22:18.460911+0000 mgr.y (mgr.24422) 341 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:22:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:20 vm04 bash[28289]: cluster 2026-03-10T10:22:18.460911+0000 mgr.y (mgr.24422) 341 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:22:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:20 vm04 bash[28289]: audit 2026-03-10T10:22:18.507253+0000 mgr.y (mgr.24422) 342 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:20 vm04 bash[28289]: audit 2026-03-10T10:22:18.507253+0000 mgr.y (mgr.24422) 342 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:20 vm04 bash[20742]: cluster 2026-03-10T10:22:18.460911+0000 mgr.y (mgr.24422) 341 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:22:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:20 vm04 bash[20742]: cluster 2026-03-10T10:22:18.460911+0000 mgr.y (mgr.24422) 341 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:22:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:20 vm04 bash[20742]: audit 2026-03-10T10:22:18.507253+0000 mgr.y (mgr.24422) 342 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:20 vm04 bash[20742]: audit 2026-03-10T10:22:18.507253+0000 mgr.y (mgr.24422) 342 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:20 vm07 bash[23367]: cluster 2026-03-10T10:22:18.460911+0000 mgr.y (mgr.24422) 341 : cluster [DBG] pgmap v550: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:22:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:20 vm07 bash[23367]: cluster 2026-03-10T10:22:18.460911+0000 mgr.y (mgr.24422) 341 : cluster [DBG] pgmap v550: 292 pgs: 292 
active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:22:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:20 vm07 bash[23367]: audit 2026-03-10T10:22:18.507253+0000 mgr.y (mgr.24422) 342 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:20 vm07 bash[23367]: audit 2026-03-10T10:22:18.507253+0000 mgr.y (mgr.24422) 342 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:22 vm04 bash[28289]: cluster 2026-03-10T10:22:20.461455+0000 mgr.y (mgr.24422) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:22 vm04 bash[28289]: cluster 2026-03-10T10:22:20.461455+0000 mgr.y (mgr.24422) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:22 vm04 bash[28289]: cluster 2026-03-10T10:22:21.484849+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T10:22:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:22 vm04 bash[28289]: cluster 2026-03-10T10:22:21.484849+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T10:22:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:22 vm04 bash[20742]: cluster 2026-03-10T10:22:20.461455+0000 mgr.y (mgr.24422) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:22 vm04 bash[20742]: cluster 2026-03-10T10:22:20.461455+0000 mgr.y (mgr.24422) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:22 vm04 bash[20742]: cluster 2026-03-10T10:22:21.484849+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T10:22:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:22 vm04 bash[20742]: cluster 2026-03-10T10:22:21.484849+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T10:22:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:22 vm07 bash[23367]: cluster 2026-03-10T10:22:20.461455+0000 mgr.y (mgr.24422) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:22 vm07 bash[23367]: cluster 2026-03-10T10:22:20.461455+0000 mgr.y (mgr.24422) 343 : cluster [DBG] pgmap v551: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:22 vm07 bash[23367]: cluster 2026-03-10T10:22:21.484849+0000 mon.a (mon.0) 2587 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T10:22:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:22 vm07 bash[23367]: cluster 2026-03-10T10:22:21.484849+0000 mon.a 
(mon.0) 2587 : cluster [DBG] osdmap e384: 8 total, 8 up, 8 in 2026-03-10T10:22:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:22:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:22:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:22:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:24 vm07 bash[23367]: cluster 2026-03-10T10:22:22.461721+0000 mgr.y (mgr.24422) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 740 B/s rd, 0 op/s 2026-03-10T10:22:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:24 vm07 bash[23367]: cluster 2026-03-10T10:22:22.461721+0000 mgr.y (mgr.24422) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 740 B/s rd, 0 op/s 2026-03-10T10:22:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:24 vm07 bash[23367]: audit 2026-03-10T10:22:24.179043+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:22:24.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:24 vm07 bash[23367]: audit 2026-03-10T10:22:24.179043+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:22:24.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:24 vm07 bash[23367]: audit 2026-03-10T10:22:24.179305+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-63"}]: dispatch 2026-03-10T10:22:24.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:24 vm07 bash[23367]: audit 2026-03-10T10:22:24.179305+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-63"}]: dispatch 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:24 vm04 bash[28289]: cluster 2026-03-10T10:22:22.461721+0000 mgr.y (mgr.24422) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 740 B/s rd, 0 op/s 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:24 vm04 bash[28289]: cluster 2026-03-10T10:22:22.461721+0000 mgr.y (mgr.24422) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 740 B/s rd, 0 op/s 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:24 vm04 bash[28289]: audit 2026-03-10T10:22:24.179043+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:24 vm04 bash[28289]: audit 2026-03-10T10:22:24.179043+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:24 vm04 bash[28289]: audit 2026-03-10T10:22:24.179305+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-63"}]: dispatch 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:24 vm04 bash[28289]: audit 2026-03-10T10:22:24.179305+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-63"}]: dispatch 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:24 vm04 bash[20742]: cluster 2026-03-10T10:22:22.461721+0000 mgr.y (mgr.24422) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 740 B/s rd, 0 op/s 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:24 vm04 bash[20742]: cluster 2026-03-10T10:22:22.461721+0000 mgr.y (mgr.24422) 344 : cluster [DBG] pgmap v553: 292 pgs: 292 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail; 740 B/s rd, 0 op/s 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:24 vm04 bash[20742]: audit 2026-03-10T10:22:24.179043+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:24 vm04 bash[20742]: audit 2026-03-10T10:22:24.179043+0000 mon.a (mon.0) 2588 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:24 vm04 bash[20742]: audit 2026-03-10T10:22:24.179305+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-63"}]: dispatch 2026-03-10T10:22:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:24 vm04 bash[20742]: audit 2026-03-10T10:22:24.179305+0000 mon.a (mon.0) 2589 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-63"}]: dispatch 2026-03-10T10:22:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:25 vm04 bash[28289]: cluster 2026-03-10T10:22:24.248996+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T10:22:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:25 vm04 bash[28289]: cluster 2026-03-10T10:22:24.248996+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T10:22:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:25 vm04 bash[20742]: cluster 2026-03-10T10:22:24.248996+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T10:22:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:25 vm04 bash[20742]: cluster 2026-03-10T10:22:24.248996+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T10:22:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:25 vm07 bash[23367]: cluster 2026-03-10T10:22:24.248996+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T10:22:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:25 vm07 bash[23367]: cluster 2026-03-10T10:22:24.248996+0000 mon.a (mon.0) 2590 : cluster [DBG] osdmap e385: 8 total, 8 up, 8 in 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: cluster 2026-03-10T10:22:24.462090+0000 mgr.y (mgr.24422) 345 : cluster [DBG] pgmap v555: 260 pgs: 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: cluster 2026-03-10T10:22:24.462090+0000 mgr.y (mgr.24422) 345 : cluster [DBG] pgmap v555: 260 pgs: 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: audit 2026-03-10T10:22:25.617298+0000 mon.a (mon.0) 2591 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: audit 2026-03-10T10:22:25.617298+0000 mon.a (mon.0) 2591 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: cluster 2026-03-10T10:22:25.764669+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: cluster 2026-03-10T10:22:25.764669+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: audit 2026-03-10T10:22:25.777410+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: audit 2026-03-10T10:22:25.777410+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: audit 2026-03-10T10:22:25.974562+0000 mon.a (mon.0) 2594 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: audit 2026-03-10T10:22:25.974562+0000 mon.a (mon.0) 2594 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: audit 2026-03-10T10:22:25.975192+0000 mon.a (mon.0) 2595 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: audit 2026-03-10T10:22:25.975192+0000 mon.a (mon.0) 2595 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: audit 2026-03-10T10:22:25.981419+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:22:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:26 vm04 bash[28289]: audit 2026-03-10T10:22:25.981419+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: cluster 2026-03-10T10:22:24.462090+0000 mgr.y (mgr.24422) 345 : cluster [DBG] pgmap v555: 260 pgs: 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: cluster 2026-03-10T10:22:24.462090+0000 mgr.y (mgr.24422) 345 : cluster [DBG] pgmap v555: 260 pgs: 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: audit 2026-03-10T10:22:25.617298+0000 mon.a (mon.0) 2591 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: audit 2026-03-10T10:22:25.617298+0000 mon.a (mon.0) 2591 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: cluster 2026-03-10T10:22:25.764669+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: cluster 2026-03-10T10:22:25.764669+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: audit 2026-03-10T10:22:25.777410+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: audit 2026-03-10T10:22:25.777410+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: audit 2026-03-10T10:22:25.974562+0000 mon.a (mon.0) 2594 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: audit 2026-03-10T10:22:25.974562+0000 mon.a (mon.0) 2594 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: audit 2026-03-10T10:22:25.975192+0000 mon.a (mon.0) 2595 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: audit 2026-03-10T10:22:25.975192+0000 mon.a (mon.0) 2595 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: audit 2026-03-10T10:22:25.981419+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:22:26.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:26 vm04 bash[20742]: audit 2026-03-10T10:22:25.981419+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:22:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: cluster 2026-03-10T10:22:24.462090+0000 mgr.y (mgr.24422) 345 : cluster [DBG] pgmap v555: 260 pgs: 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: cluster 2026-03-10T10:22:24.462090+0000 mgr.y (mgr.24422) 345 : cluster [DBG] pgmap v555: 260 pgs: 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: audit 2026-03-10T10:22:25.617298+0000 mon.a (mon.0) 2591 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:22:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: audit 2026-03-10T10:22:25.617298+0000 mon.a (mon.0) 2591 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:22:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: cluster 2026-03-10T10:22:25.764669+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T10:22:27.017 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: cluster 2026-03-10T10:22:25.764669+0000 mon.a (mon.0) 2592 : cluster [DBG] osdmap e386: 8 total, 8 up, 8 in 2026-03-10T10:22:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: audit 2026-03-10T10:22:25.777410+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:22:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: audit 2026-03-10T10:22:25.777410+0000 mon.a (mon.0) 2593 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:22:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: audit 2026-03-10T10:22:25.974562+0000 mon.a (mon.0) 2594 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:22:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: audit 2026-03-10T10:22:25.974562+0000 mon.a (mon.0) 2594 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:22:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: audit 2026-03-10T10:22:25.975192+0000 mon.a (mon.0) 2595 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:22:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: audit 2026-03-10T10:22:25.975192+0000 mon.a (mon.0) 2595 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:22:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: audit 2026-03-10T10:22:25.981419+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:22:27.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:26 vm07 bash[23367]: audit 2026-03-10T10:22:25.981419+0000 mon.a (mon.0) 2596 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: cluster 2026-03-10T10:22:26.462504+0000 mgr.y (mgr.24422) 346 : cluster [DBG] pgmap v557: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: cluster 2026-03-10T10:22:26.462504+0000 mgr.y (mgr.24422) 346 : cluster [DBG] pgmap v557: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: audit 2026-03-10T10:22:26.639068+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: audit 2026-03-10T10:22:26.639068+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: cluster 2026-03-10T10:22:26.645011+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: cluster 2026-03-10T10:22:26.645011+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: audit 2026-03-10T10:22:26.651909+0000 mon.a (mon.0) 2599 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: audit 2026-03-10T10:22:26.651909+0000 mon.a (mon.0) 2599 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: audit 2026-03-10T10:22:26.655882+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: audit 2026-03-10T10:22:26.655882+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: cluster 2026-03-10T10:22:26.978474+0000 mon.a (mon.0) 2601 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: cluster 2026-03-10T10:22:26.978474+0000 mon.a (mon.0) 2601 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: audit 2026-03-10T10:22:27.658673+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: audit 2026-03-10T10:22:27.658673+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: cluster 2026-03-10T10:22:27.662949+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:27 vm04 bash[28289]: cluster 2026-03-10T10:22:27.662949+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: cluster 2026-03-10T10:22:26.462504+0000 mgr.y (mgr.24422) 346 : cluster [DBG] pgmap v557: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: cluster 2026-03-10T10:22:26.462504+0000 mgr.y (mgr.24422) 346 : cluster [DBG] pgmap v557: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: audit 2026-03-10T10:22:26.639068+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: audit 2026-03-10T10:22:26.639068+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: cluster 2026-03-10T10:22:26.645011+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: cluster 2026-03-10T10:22:26.645011+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: audit 2026-03-10T10:22:26.651909+0000 mon.a (mon.0) 2599 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: audit 2026-03-10T10:22:26.651909+0000 mon.a (mon.0) 2599 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: audit 2026-03-10T10:22:26.655882+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: audit 2026-03-10T10:22:26.655882+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: cluster 2026-03-10T10:22:26.978474+0000 mon.a (mon.0) 2601 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:22:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: cluster 2026-03-10T10:22:26.978474+0000 mon.a (mon.0) 2601 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:22:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: audit 2026-03-10T10:22:27.658673+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: audit 2026-03-10T10:22:27.658673+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: cluster 2026-03-10T10:22:27.662949+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-10T10:22:27.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:27 vm04 bash[20742]: cluster 2026-03-10T10:22:27.662949+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: cluster 2026-03-10T10:22:26.462504+0000 mgr.y (mgr.24422) 346 : cluster [DBG] pgmap v557: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: cluster 2026-03-10T10:22:26.462504+0000 mgr.y (mgr.24422) 346 : cluster [DBG] pgmap v557: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 831 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: audit 2026-03-10T10:22:26.639068+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: audit 2026-03-10T10:22:26.639068+0000 mon.a (mon.0) 2597 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-65","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: cluster 2026-03-10T10:22:26.645011+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: cluster 2026-03-10T10:22:26.645011+0000 mon.a (mon.0) 2598 : cluster [DBG] osdmap e387: 8 total, 8 up, 8 in 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: audit 2026-03-10T10:22:26.651909+0000 mon.a (mon.0) 2599 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: audit 2026-03-10T10:22:26.651909+0000 mon.a (mon.0) 2599 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: audit 2026-03-10T10:22:26.655882+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: audit 2026-03-10T10:22:26.655882+0000 mon.a (mon.0) 2600 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: cluster 2026-03-10T10:22:26.978474+0000 mon.a (mon.0) 2601 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: cluster 2026-03-10T10:22:26.978474+0000 mon.a (mon.0) 2601 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:22:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: audit 2026-03-10T10:22:27.658673+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: audit 2026-03-10T10:22:27.658673+0000 mon.a (mon.0) 2602 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:22:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:27 vm07 bash[23367]: cluster 2026-03-10T10:22:27.662949+0000 mon.a (mon.0) 2603 : cluster [DBG] osdmap e388: 8 total, 8 up, 8 in
2026-03-10T10:22:28.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:22:28 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:22:28.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:28 vm07 bash[23367]: audit 2026-03-10T10:22:27.958206+0000 mon.a (mon.0) 2604 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:22:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:28 vm04 bash[28289]: audit 2026-03-10T10:22:27.958206+0000 mon.a (mon.0) 2604 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:22:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:28 vm04 bash[20742]: audit 2026-03-10T10:22:27.958206+0000 mon.a (mon.0) 2604 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:22:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:29 vm07 bash[23367]: cluster 2026-03-10T10:22:28.463035+0000 mgr.y (mgr.24422) 347 : cluster [DBG] pgmap v560: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 971 B/s wr, 1 op/s
2026-03-10T10:22:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:29 vm07 bash[23367]: audit 2026-03-10T10:22:28.516618+0000 mgr.y (mgr.24422) 348 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:29 vm07 bash[23367]: cluster 2026-03-10T10:22:28.753982+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in
2026-03-10T10:22:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:29 vm04 bash[28289]: cluster 2026-03-10T10:22:28.463035+0000 mgr.y (mgr.24422) 347 : cluster [DBG] pgmap v560: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 971 B/s wr, 1 op/s
2026-03-10T10:22:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:29 vm04 bash[28289]: audit 2026-03-10T10:22:28.516618+0000 mgr.y (mgr.24422) 348 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:29 vm04 bash[28289]: cluster 2026-03-10T10:22:28.753982+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in
2026-03-10T10:22:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:29 vm04 bash[20742]: cluster 2026-03-10T10:22:28.463035+0000 mgr.y (mgr.24422) 347 : cluster [DBG] pgmap v560: 292 pgs: 14 unknown, 278 active+clean; 8.3 MiB data, 836 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 971 B/s wr, 1 op/s
2026-03-10T10:22:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:29 vm04 bash[20742]: audit 2026-03-10T10:22:28.516618+0000 mgr.y (mgr.24422) 348 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:29 vm04 bash[20742]: cluster 2026-03-10T10:22:28.753982+0000 mon.a (mon.0) 2605 : cluster [DBG] osdmap e389: 8 total, 8 up, 8 in
2026-03-10T10:22:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:30 vm04 bash[28289]: cluster 2026-03-10T10:22:29.758439+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in
2026-03-10T10:22:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:30 vm04 bash[20742]: cluster 2026-03-10T10:22:29.758439+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in
2026-03-10T10:22:31.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:30 vm07 bash[23367]: cluster 2026-03-10T10:22:29.758439+0000 mon.a (mon.0) 2606 : cluster [DBG] osdmap e390: 8 total, 8 up, 8 in
2026-03-10T10:22:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:31 vm04 bash[28289]: cluster 2026-03-10T10:22:30.463406+0000 mgr.y (mgr.24422) 349 : cluster [DBG] pgmap v563: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T10:22:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:31 vm04 bash[28289]: cluster 2026-03-10T10:22:30.771489+0000 mon.a (mon.0) 2607 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in
2026-03-10T10:22:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:31 vm04 bash[20742]: cluster 2026-03-10T10:22:30.463406+0000 mgr.y (mgr.24422) 349 : cluster [DBG] pgmap v563: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T10:22:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:31 vm04 bash[20742]: cluster 2026-03-10T10:22:30.771489+0000 mon.a (mon.0) 2607 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in
2026-03-10T10:22:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:31 vm07 bash[23367]: cluster 2026-03-10T10:22:30.463406+0000 mgr.y (mgr.24422) 349 : cluster [DBG] pgmap v563: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 4 op/s
2026-03-10T10:22:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:31 vm07 bash[23367]: cluster 2026-03-10T10:22:30.771489+0000 mon.a (mon.0) 2607 : cluster [DBG] osdmap e391: 8 total, 8 up, 8 in
2026-03-10T10:22:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:22:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:22:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:22:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:33 vm04 bash[20742]: cluster 2026-03-10T10:22:32.463788+0000 mgr.y (mgr.24422) 350 : cluster [DBG] pgmap v565: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T10:22:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:33 vm04 bash[28289]: cluster 2026-03-10T10:22:32.463788+0000 mgr.y (mgr.24422) 350 : cluster [DBG] pgmap v565: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T10:22:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:33 vm07 bash[23367]: cluster 2026-03-10T10:22:32.463788+0000 mgr.y (mgr.24422) 350 : cluster [DBG] pgmap v565: 292 pgs: 292 active+clean; 8.3 MiB data, 875 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T10:22:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:35 vm04 bash[28289]: cluster 2026-03-10T10:22:34.464725+0000 mgr.y (mgr.24422) 351 : cluster [DBG] pgmap v566: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 6 op/s
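The 503 responses above are the mgr's prometheus module answering Prometheus scrapes before it has metrics ready to serve. A manual check would look roughly like this sketch (9283 is the module's default port; the exact host and port here are assumptions, not values taken from this log):

    # Scrape the active mgr's exporter by hand; a 503 typically clears
    # once the mgr has received its first round of daemon reports.
    curl -si http://vm04.local:9283/metrics | head -n 5
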
2026-03-10T10:22:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:35 vm04 bash[20742]: cluster 2026-03-10T10:22:34.464725+0000 mgr.y (mgr.24422) 351 : cluster [DBG] pgmap v566: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 6 op/s
2026-03-10T10:22:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:35 vm07 bash[23367]: cluster 2026-03-10T10:22:34.464725+0000 mgr.y (mgr.24422) 351 : cluster [DBG] pgmap v566: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 6 op/s
2026-03-10T10:22:37.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:36 vm07 bash[23367]: cluster 2026-03-10T10:22:36.475784+0000 mon.a (mon.0) 2608 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:22:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:36 vm04 bash[28289]: cluster 2026-03-10T10:22:36.475784+0000 mon.a (mon.0) 2608 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:22:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:36 vm04 bash[20742]: cluster 2026-03-10T10:22:36.475784+0000 mon.a (mon.0) 2608 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:22:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:37 vm07 bash[23367]: cluster 2026-03-10T10:22:36.465049+0000 mgr.y (mgr.24422) 352 : cluster [DBG] pgmap v567: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 530 B/s wr, 3 op/s
2026-03-10T10:22:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:37 vm04 bash[28289]: cluster 2026-03-10T10:22:36.465049+0000 mgr.y (mgr.24422) 352 : cluster [DBG] pgmap v567: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 530 B/s wr, 3 op/s
2026-03-10T10:22:38.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:37 vm04 bash[20742]: cluster 2026-03-10T10:22:36.465049+0000 mgr.y (mgr.24422) 352 : cluster [DBG] pgmap v567: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 530 B/s wr, 3 op/s
2026-03-10T10:22:39.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:22:38 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:22:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:39 vm07 bash[23367]: cluster 2026-03-10T10:22:38.465547+0000 mgr.y (mgr.24422) 353 : cluster [DBG] pgmap v568: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 470 B/s wr, 3 op/s
2026-03-10T10:22:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:39 vm07 bash[23367]: audit 2026-03-10T10:22:38.522851+0000 mgr.y (mgr.24422) 354 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:39 vm04 bash[28289]: cluster 2026-03-10T10:22:38.465547+0000 mgr.y (mgr.24422) 353 : cluster [DBG] pgmap v568: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 470 B/s wr, 3 op/s
2026-03-10T10:22:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:39 vm04 bash[28289]: audit 2026-03-10T10:22:38.522851+0000 mgr.y (mgr.24422) 354 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:39 vm04 bash[20742]: cluster 2026-03-10T10:22:38.465547+0000 mgr.y (mgr.24422) 353 : cluster [DBG] pgmap v568: 292 pgs: 1 active+clean+snaptrim, 291 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 470 B/s wr, 3 op/s
2026-03-10T10:22:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:39 vm04 bash[20742]: audit 2026-03-10T10:22:38.522851+0000 mgr.y (mgr.24422) 354 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:22:41.270 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:40 vm07 bash[23367]: audit 2026-03-10T10:22:40.783952+0000 mon.a (mon.0) 2609 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:22:41.270 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:40 vm07 bash[23367]: audit 2026-03-10T10:22:40.784334+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-65"}]: dispatch
2026-03-10T10:22:41.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:40 vm04 bash[28289]: audit 2026-03-10T10:22:40.783952+0000 mon.a (mon.0) 2609 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:22:41.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:40 vm04 bash[28289]: audit 2026-03-10T10:22:40.784334+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-65"}]: dispatch
2026-03-10T10:22:41.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:40 vm04 bash[20742]: audit 2026-03-10T10:22:40.783952+0000 mon.a (mon.0) 2609 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:22:41.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:40 vm04 bash[20742]: audit 2026-03-10T10:22:40.784334+0000 mon.a (mon.0) 2610 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-65"}]: dispatch
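The paired audit entries above record the cache-tier teardown step of the rados API tests, issued through the librados mon-command interface. A roughly equivalent invocation from the ceph CLI would be (pool names copied from the audit entries; the CLI form itself is a sketch, not how this job issued them):

    # Detach the overlay, then unlink the cache tier from its base pool.
    ceph osd tier remove-overlay test-rados-api-vm04-59491-6
    ceph osd tier remove test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-65
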
2026-03-10T10:22:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:41 vm07 bash[23367]: cluster 2026-03-10T10:22:40.466496+0000 mgr.y (mgr.24422) 355 : cluster [DBG] pgmap v569: 292 pgs: 292 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 409 B/s wr, 3 op/s
2026-03-10T10:22:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:41 vm07 bash[23367]: cluster 2026-03-10T10:22:40.998010+0000 mon.a (mon.0) 2611 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in
2026-03-10T10:22:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:41 vm07 bash[23367]: cluster 2026-03-10T10:22:41.487921+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in
2026-03-10T10:22:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:41 vm07 bash[23367]: audit 2026-03-10T10:22:41.497314+0000 mon.a (mon.0) 2613 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-67","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:22:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:41 vm04 bash[28289]: cluster 2026-03-10T10:22:40.466496+0000 mgr.y (mgr.24422) 355 : cluster [DBG] pgmap v569: 292 pgs: 292 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 409 B/s wr, 3 op/s
2026-03-10T10:22:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:41 vm04 bash[28289]: cluster 2026-03-10T10:22:40.998010+0000 mon.a (mon.0) 2611 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in
2026-03-10T10:22:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:41 vm04 bash[28289]: cluster 2026-03-10T10:22:41.487921+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in
2026-03-10T10:22:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:41 vm04 bash[28289]: audit 2026-03-10T10:22:41.497314+0000 mon.a (mon.0) 2613 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-67","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:22:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:41 vm04 bash[20742]: cluster 2026-03-10T10:22:40.466496+0000 mgr.y (mgr.24422) 355 : cluster [DBG] pgmap v569: 292 pgs: 292 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 409 B/s wr, 3 op/s
2026-03-10T10:22:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:41 vm04 bash[20742]: cluster 2026-03-10T10:22:40.998010+0000 mon.a (mon.0) 2611 : cluster [DBG] osdmap e392: 8 total, 8 up, 8 in
2026-03-10T10:22:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:41 vm04 bash[20742]: cluster 2026-03-10T10:22:41.487921+0000 mon.a (mon.0) 2612 : cluster [DBG] osdmap e393: 8 total, 8 up, 8 in
2026-03-10T10:22:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:41 vm04 bash[20742]: audit 2026-03-10T10:22:41.497314+0000 mon.a (mon.0) 2613 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-67","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:22:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:22:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:22:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:22:43.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:43 vm07 bash[23367]: cluster 2026-03-10T10:22:42.466854+0000 mgr.y (mgr.24422) 356 : cluster [DBG] pgmap v572: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:22:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:43 vm07 bash[23367]: audit 2026-03-10T10:22:42.484106+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-67","app": "rados","yes_i_really_mean_it": true}]': finished
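The "osd pool application enable" dispatch/finished pair above is what keeps the recurring POOL_APP_NOT_ENABLED warning oscillating: each freshly created test pool warns until it is tagged with an application. The CLI form of the same mon command would be roughly:

    # --yes-i-really-mean-it mirrors the "yes_i_really_mean_it": true
    # flag visible in the audit entry.
    ceph osd pool application enable test-rados-api-vm04-59491-67 rados --yes-i-really-mean-it
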
2026-03-10T10:22:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:43 vm07 bash[23367]: cluster 2026-03-10T10:22:42.486870+0000 mon.a (mon.0) 2615 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in
2026-03-10T10:22:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:43 vm07 bash[23367]: audit 2026-03-10T10:22:42.489527+0000 mon.a (mon.0) 2616 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:22:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:43 vm07 bash[23367]: audit 2026-03-10T10:22:42.964754+0000 mon.a (mon.0) 2617 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:22:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:43 vm07 bash[23367]: cluster 2026-03-10T10:22:42.987269+0000 mon.a (mon.0) 2618 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:22:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:43 vm04 bash[28289]: cluster 2026-03-10T10:22:42.466854+0000 mgr.y (mgr.24422) 356 : cluster [DBG] pgmap v572: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:22:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:43 vm04 bash[28289]: audit 2026-03-10T10:22:42.484106+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-67","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:22:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:43 vm04 bash[28289]: cluster 2026-03-10T10:22:42.486870+0000 mon.a (mon.0) 2615 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in
2026-03-10T10:22:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:43 vm04 bash[28289]: audit 2026-03-10T10:22:42.489527+0000 mon.a (mon.0) 2616 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:22:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:43 vm04 bash[28289]: audit 2026-03-10T10:22:42.964754+0000 mon.a (mon.0) 2617 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:22:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:43 vm04 bash[28289]: cluster 2026-03-10T10:22:42.987269+0000 mon.a (mon.0) 2618 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:22:43.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:43 vm04 bash[20742]: cluster 2026-03-10T10:22:42.466854+0000 mgr.y (mgr.24422) 356 : cluster [DBG] pgmap v572: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:22:43.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:43 vm04 bash[20742]: audit 2026-03-10T10:22:42.484106+0000 mon.a (mon.0) 2614 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-67","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:22:43.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:43 vm04 bash[20742]: cluster 2026-03-10T10:22:42.486870+0000 mon.a (mon.0) 2615 : cluster [DBG] osdmap e394: 8 total, 8 up, 8 in
2026-03-10T10:22:43.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:43 vm04 bash[20742]: audit 2026-03-10T10:22:42.489527+0000 mon.a (mon.0) 2616 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:22:43.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:43 vm04 bash[20742]: audit 2026-03-10T10:22:42.964754+0000 mon.a (mon.0) 2617 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:22:43.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:43 vm04 bash[20742]: cluster 2026-03-10T10:22:42.987269+0000 mon.a (mon.0) 2618 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:22:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:44 vm04 bash[28289]: cluster 2026-03-10T10:22:43.502586+0000 mon.a (mon.0) 2619 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in
2026-03-10T10:22:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:44 vm04 bash[28289]: audit 2026-03-10T10:22:43.542469+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
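Two read-only queries recur throughout this stretch: the mgr's periodic "osd blocklist ls" poll and the test client's "osd dump" after each osdmap change. Both are available from the CLI; a sketch (the jq filter is illustrative only, not something this job runs):

    # Same data the audited mon commands return:
    ceph osd blocklist ls
    ceph osd dump --format json | jq .epoch
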
2026-03-10T10:22:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:44 vm04 bash[28289]: audit 2026-03-10T10:22:43.542751+0000 mon.a (mon.0) 2621 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-67"}]: dispatch
2026-03-10T10:22:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:44 vm04 bash[20742]: cluster 2026-03-10T10:22:43.502586+0000 mon.a (mon.0) 2619 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in
2026-03-10T10:22:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:44 vm04 bash[20742]: audit 2026-03-10T10:22:43.542469+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:22:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:44 vm04 bash[20742]: audit 2026-03-10T10:22:43.542751+0000 mon.a (mon.0) 2621 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-67"}]: dispatch
2026-03-10T10:22:45.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:44 vm07 bash[23367]: cluster 2026-03-10T10:22:43.502586+0000 mon.a (mon.0) 2619 : cluster [DBG] osdmap e395: 8 total, 8 up, 8 in
2026-03-10T10:22:45.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:44 vm07 bash[23367]: audit 2026-03-10T10:22:43.542469+0000 mon.a (mon.0) 2620 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:22:45.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:44 vm07 bash[23367]: audit 2026-03-10T10:22:43.542751+0000 mon.a (mon.0) 2621 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-67"}]: dispatch
2026-03-10T10:22:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:45 vm04 bash[28289]: cluster 2026-03-10T10:22:44.467443+0000 mgr.y (mgr.24422) 357 : cluster [DBG] pgmap v575: 292 pgs: 292 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:22:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:45 vm04 bash[28289]: cluster 2026-03-10T10:22:44.556168+0000 mon.a (mon.0) 2622 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in
2026-03-10T10:22:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:45 vm04 bash[20742]: cluster 2026-03-10T10:22:44.467443+0000 mgr.y (mgr.24422) 357 : cluster [DBG] pgmap v575: 292 pgs: 292 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:22:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:45 vm04 bash[20742]: cluster 2026-03-10T10:22:44.556168+0000 mon.a (mon.0) 2622 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in
2026-03-10T10:22:46.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:45 vm07 bash[23367]: cluster 2026-03-10T10:22:44.467443+0000 mgr.y (mgr.24422) 357 : cluster [DBG] pgmap v575: 292 pgs: 292 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:22:46.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:45 vm07 bash[23367]: cluster 2026-03-10T10:22:44.556168+0000 mon.a (mon.0) 2622 : cluster [DBG] osdmap e396: 8 total, 8 up, 8 in
2026-03-10T10:22:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:46 vm04 bash[28289]: cluster 2026-03-10T10:22:45.549909+0000 mon.a (mon.0) 2623 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in
2026-03-10T10:22:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:46 vm04 bash[28289]: audit 2026-03-10T10:22:45.559578+0000 mon.a (mon.0) 2624 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-69","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:22:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:46 vm04 bash[28289]: audit 2026-03-10T10:22:46.550068+0000 mon.a (mon.0) 2625 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-69","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:22:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:46 vm04 bash[28289]: cluster 2026-03-10T10:22:46.555796+0000 mon.a (mon.0) 2626 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in
2026-03-10T10:22:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:46 vm04 bash[28289]: audit 2026-03-10T10:22:46.558543+0000 mon.a (mon.0) 2627 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:22:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:46 vm04 bash[20742]: cluster 2026-03-10T10:22:45.549909+0000 mon.a (mon.0) 2623 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in
2026-03-10T10:22:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:46 vm04 bash[20742]: audit 2026-03-10T10:22:45.559578+0000 mon.a (mon.0) 2624 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-69","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:22:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:46 vm04 bash[20742]: audit 2026-03-10T10:22:46.550068+0000 mon.a (mon.0) 2625 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-69","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:22:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:46 vm04 bash[20742]: cluster 2026-03-10T10:22:46.555796+0000 mon.a (mon.0) 2626 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in
2026-03-10T10:22:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:46 vm04 bash[20742]: audit 2026-03-10T10:22:46.558543+0000 mon.a (mon.0) 2627 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:22:47.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:46 vm07 bash[23367]: cluster 2026-03-10T10:22:45.549909+0000 mon.a (mon.0) 2623 : cluster [DBG] osdmap e397: 8 total, 8 up, 8 in
2026-03-10T10:22:47.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:46 vm07 bash[23367]: audit 2026-03-10T10:22:45.559578+0000 mon.a (mon.0) 2624 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-69","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:22:47.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:46 vm07 bash[23367]: audit 2026-03-10T10:22:46.550068+0000 mon.a (mon.0) 2625 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-69","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:22:47.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:46 vm07 bash[23367]: cluster 2026-03-10T10:22:46.555796+0000 mon.a (mon.0) 2626 : cluster [DBG] osdmap e398: 8 total, 8 up, 8 in
2026-03-10T10:22:47.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:46 vm07 bash[23367]: audit 2026-03-10T10:22:46.558543+0000 mon.a (mon.0) 2627 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:22:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:47 vm04 bash[28289]: cluster 2026-03-10T10:22:46.467892+0000 mgr.y (mgr.24422) 358 : cluster [DBG] pgmap v578: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:22:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:47 vm04 bash[28289]: audit 2026-03-10T10:22:46.629124+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:22:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:47 vm04 bash[28289]: audit 2026-03-10T10:22:46.629459+0000 mon.a (mon.0) 2629 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-69"}]: dispatch
2026-03-10T10:22:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:47 vm04 bash[28289]: cluster 2026-03-10T10:22:47.564081+0000 mon.a (mon.0) 2630 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in
2026-03-10T10:22:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:47 vm04 bash[20742]: cluster 2026-03-10T10:22:46.467892+0000 mgr.y (mgr.24422) 358 : cluster [DBG] pgmap v578: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:22:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:47 vm04 bash[20742]: audit 2026-03-10T10:22:46.629124+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:22:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:47 vm04 bash[20742]: audit 2026-03-10T10:22:46.629459+0000 mon.a (mon.0) 2629 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-69"}]: dispatch
2026-03-10T10:22:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:47 vm04 bash[20742]: cluster 2026-03-10T10:22:47.564081+0000 mon.a (mon.0) 2630 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in
2026-03-10T10:22:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:47 vm07 bash[23367]: cluster 2026-03-10T10:22:46.467892+0000 mgr.y (mgr.24422) 358 : cluster [DBG] pgmap v578: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:22:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:47 vm07 bash[23367]: audit 2026-03-10T10:22:46.629124+0000 mon.a (mon.0) 2628 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:22:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:47 vm07 bash[23367]: audit 2026-03-10T10:22:46.629459+0000 mon.a (mon.0) 2629 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-69"}]: dispatch
2026-03-10T10:22:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:47 vm07 bash[23367]: cluster 2026-03-10T10:22:47.564081+0000 mon.a (mon.0) 2630 : cluster [DBG] osdmap e399: 8 total, 8 up, 8 in
2026-03-10T10:22:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:48 vm04 bash[28289]: cluster 2026-03-10T10:22:48.597321+0000 mon.a (mon.0) 2631 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:22:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:48 vm04 bash[28289]: cluster 2026-03-10T10:22:48.602280+0000 mon.a (mon.0) 2632 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in
2026-03-10T10:22:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:48 vm04 bash[28289]: audit 2026-03-10T10:22:48.607602+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:22:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:48 vm04 bash[20742]: cluster 2026-03-10T10:22:48.597321+0000 mon.a (mon.0) 2631 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:22:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:48 vm04 bash[20742]: cluster 2026-03-10T10:22:48.602280+0000 mon.a (mon.0) 2632 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in
2026-03-10T10:22:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:48 vm04 bash[20742]: audit 2026-03-10T10:22:48.607602+0000 mon.a (mon.0) 2633 : audit [INF] from='client.?
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:22:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:48 vm04 bash[20742]: audit 2026-03-10T10:22:48.607602+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:22:49.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:22:48 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:22:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:48 vm07 bash[23367]: cluster 2026-03-10T10:22:48.597321+0000 mon.a (mon.0) 2631 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:22:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:48 vm07 bash[23367]: cluster 2026-03-10T10:22:48.597321+0000 mon.a (mon.0) 2631 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:22:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:48 vm07 bash[23367]: cluster 2026-03-10T10:22:48.602280+0000 mon.a (mon.0) 2632 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-10T10:22:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:48 vm07 bash[23367]: cluster 2026-03-10T10:22:48.602280+0000 mon.a (mon.0) 2632 : cluster [DBG] osdmap e400: 8 total, 8 up, 8 in 2026-03-10T10:22:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:48 vm07 bash[23367]: audit 2026-03-10T10:22:48.607602+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:22:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:48 vm07 bash[23367]: audit 2026-03-10T10:22:48.607602+0000 mon.a (mon.0) 2633 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: cluster 2026-03-10T10:22:48.468410+0000 mgr.y (mgr.24422) 359 : cluster [DBG] pgmap v581: 260 pgs: 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: cluster 2026-03-10T10:22:48.468410+0000 mgr.y (mgr.24422) 359 : cluster [DBG] pgmap v581: 260 pgs: 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: audit 2026-03-10T10:22:48.533455+0000 mgr.y (mgr.24422) 360 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: audit 2026-03-10T10:22:48.533455+0000 mgr.y (mgr.24422) 360 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: audit 2026-03-10T10:22:49.604186+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: audit 2026-03-10T10:22:49.604186+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: cluster 2026-03-10T10:22:49.616157+0000 mon.a (mon.0) 2635 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: cluster 2026-03-10T10:22:49.616157+0000 mon.a (mon.0) 2635 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: audit 2026-03-10T10:22:49.626248+0000 mon.a (mon.0) 2636 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: audit 2026-03-10T10:22:49.626248+0000 mon.a (mon.0) 2636 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: audit 2026-03-10T10:22:49.628687+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:49 vm04 bash[28289]: audit 2026-03-10T10:22:49.628687+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: cluster 2026-03-10T10:22:48.468410+0000 mgr.y (mgr.24422) 359 : cluster [DBG] pgmap v581: 260 pgs: 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: cluster 2026-03-10T10:22:48.468410+0000 mgr.y (mgr.24422) 359 : cluster [DBG] pgmap v581: 260 pgs: 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: audit 2026-03-10T10:22:48.533455+0000 mgr.y (mgr.24422) 360 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: audit 2026-03-10T10:22:48.533455+0000 mgr.y (mgr.24422) 360 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: audit 2026-03-10T10:22:49.604186+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: audit 2026-03-10T10:22:49.604186+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: cluster 2026-03-10T10:22:49.616157+0000 mon.a (mon.0) 2635 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: cluster 2026-03-10T10:22:49.616157+0000 mon.a (mon.0) 2635 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: audit 2026-03-10T10:22:49.626248+0000 mon.a (mon.0) 2636 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: audit 2026-03-10T10:22:49.626248+0000 mon.a (mon.0) 2636 : audit [DBG] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: audit 2026-03-10T10:22:49.628687+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:49 vm04 bash[20742]: audit 2026-03-10T10:22:49.628687+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: cluster 2026-03-10T10:22:48.468410+0000 mgr.y (mgr.24422) 359 : cluster [DBG] pgmap v581: 260 pgs: 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: cluster 2026-03-10T10:22:48.468410+0000 mgr.y (mgr.24422) 359 : cluster [DBG] pgmap v581: 260 pgs: 260 active+clean; 8.3 MiB data, 880 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: audit 2026-03-10T10:22:48.533455+0000 mgr.y (mgr.24422) 360 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: audit 2026-03-10T10:22:48.533455+0000 mgr.y (mgr.24422) 360 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: audit 2026-03-10T10:22:49.604186+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: audit 2026-03-10T10:22:49.604186+0000 mon.a (mon.0) 2634 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-71","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: cluster 2026-03-10T10:22:49.616157+0000 mon.a (mon.0) 2635 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: cluster 2026-03-10T10:22:49.616157+0000 mon.a (mon.0) 2635 : cluster [DBG] osdmap e401: 8 total, 8 up, 8 in 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: audit 2026-03-10T10:22:49.626248+0000 mon.a (mon.0) 2636 : audit [DBG] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: audit 2026-03-10T10:22:49.626248+0000 mon.a (mon.0) 2636 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: audit 2026-03-10T10:22:49.628687+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:49 vm07 bash[23367]: audit 2026-03-10T10:22:49.628687+0000 mon.a (mon.0) 2637 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:51 vm04 bash[28289]: cluster 2026-03-10T10:22:50.468695+0000 mgr.y (mgr.24422) 361 : cluster [DBG] pgmap v584: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:51 vm04 bash[28289]: cluster 2026-03-10T10:22:50.468695+0000 mgr.y (mgr.24422) 361 : cluster [DBG] pgmap v584: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:51 vm04 bash[28289]: audit 2026-03-10T10:22:50.608323+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:51 vm04 bash[28289]: audit 2026-03-10T10:22:50.608323+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:51 vm04 bash[28289]: cluster 2026-03-10T10:22:50.612163+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:51 vm04 bash[28289]: cluster 2026-03-10T10:22:50.612163+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:51 vm04 bash[20742]: cluster 2026-03-10T10:22:50.468695+0000 mgr.y (mgr.24422) 361 : cluster [DBG] pgmap v584: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:51 vm04 bash[20742]: cluster 2026-03-10T10:22:50.468695+0000 mgr.y (mgr.24422) 361 : cluster [DBG] pgmap v584: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:51 vm04 bash[20742]: audit 2026-03-10T10:22:50.608323+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:51 vm04 bash[20742]: audit 2026-03-10T10:22:50.608323+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:51 vm04 bash[20742]: cluster 2026-03-10T10:22:50.612163+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T10:22:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:51 vm04 bash[20742]: cluster 2026-03-10T10:22:50.612163+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T10:22:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:51 vm07 bash[23367]: cluster 2026-03-10T10:22:50.468695+0000 mgr.y (mgr.24422) 361 : cluster [DBG] pgmap v584: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T10:22:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:51 vm07 bash[23367]: cluster 2026-03-10T10:22:50.468695+0000 mgr.y (mgr.24422) 361 : cluster [DBG] pgmap v584: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s 2026-03-10T10:22:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:51 vm07 bash[23367]: audit 2026-03-10T10:22:50.608323+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:51 vm07 bash[23367]: audit 2026-03-10T10:22:50.608323+0000 mon.a (mon.0) 2638 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:22:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:51 vm07 bash[23367]: cluster 2026-03-10T10:22:50.612163+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T10:22:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:51 vm07 bash[23367]: cluster 2026-03-10T10:22:50.612163+0000 mon.a (mon.0) 2639 : cluster [DBG] osdmap e402: 8 total, 8 up, 8 in 2026-03-10T10:22:52.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:52 vm04 bash[28289]: cluster 2026-03-10T10:22:51.661630+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T10:22:52.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:52 vm04 bash[28289]: cluster 2026-03-10T10:22:51.661630+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T10:22:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:52 vm04 bash[20742]: cluster 2026-03-10T10:22:51.661630+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T10:22:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:52 vm04 bash[20742]: cluster 2026-03-10T10:22:51.661630+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T10:22:53.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:52 vm07 bash[23367]: cluster 2026-03-10T10:22:51.661630+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T10:22:53.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:52 vm07 bash[23367]: cluster 2026-03-10T10:22:51.661630+0000 mon.a (mon.0) 2640 : cluster [DBG] osdmap e403: 8 total, 8 up, 8 in 2026-03-10T10:22:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:22:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:22:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:22:53.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:53 vm04 bash[28289]: cluster 2026-03-10T10:22:52.469061+0000 mgr.y (mgr.24422) 362 : cluster [DBG] pgmap v587: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:53.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:53 vm04 bash[28289]: cluster 2026-03-10T10:22:52.469061+0000 mgr.y (mgr.24422) 362 : cluster [DBG] pgmap v587: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:53.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:53 vm04 bash[20742]: cluster 2026-03-10T10:22:52.469061+0000 mgr.y (mgr.24422) 362 : cluster [DBG] pgmap v587: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:53.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:53 vm04 bash[20742]: cluster 2026-03-10T10:22:52.469061+0000 mgr.y (mgr.24422) 362 : cluster [DBG] pgmap v587: 292 pgs: 16 
creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:54.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:53 vm07 bash[23367]: cluster 2026-03-10T10:22:52.469061+0000 mgr.y (mgr.24422) 362 : cluster [DBG] pgmap v587: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:54.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:53 vm07 bash[23367]: cluster 2026-03-10T10:22:52.469061+0000 mgr.y (mgr.24422) 362 : cluster [DBG] pgmap v587: 292 pgs: 16 creating+peering, 16 unknown, 260 active+clean; 8.3 MiB data, 881 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:22:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:55 vm07 bash[23367]: cluster 2026-03-10T10:22:54.469856+0000 mgr.y (mgr.24422) 363 : cluster [DBG] pgmap v588: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-10T10:22:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:55 vm07 bash[23367]: cluster 2026-03-10T10:22:54.469856+0000 mgr.y (mgr.24422) 363 : cluster [DBG] pgmap v588: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-10T10:22:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:55 vm04 bash[28289]: cluster 2026-03-10T10:22:54.469856+0000 mgr.y (mgr.24422) 363 : cluster [DBG] pgmap v588: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-10T10:22:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:55 vm04 bash[28289]: cluster 2026-03-10T10:22:54.469856+0000 mgr.y (mgr.24422) 363 : cluster [DBG] pgmap v588: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-10T10:22:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:55 vm04 bash[20742]: cluster 2026-03-10T10:22:54.469856+0000 mgr.y (mgr.24422) 363 : cluster [DBG] pgmap v588: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-10T10:22:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:55 vm04 bash[20742]: cluster 2026-03-10T10:22:54.469856+0000 mgr.y (mgr.24422) 363 : cluster [DBG] pgmap v588: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.9 KiB/s rd, 2.2 KiB/s wr, 5 op/s 2026-03-10T10:22:58.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:57 vm07 bash[23367]: cluster 2026-03-10T10:22:56.470217+0000 mgr.y (mgr.24422) 364 : cluster [DBG] pgmap v589: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 894 B/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-10T10:22:58.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:57 vm07 bash[23367]: cluster 2026-03-10T10:22:56.470217+0000 mgr.y (mgr.24422) 364 : cluster [DBG] pgmap v589: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 894 B/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-10T10:22:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:57 vm04 bash[28289]: cluster 2026-03-10T10:22:56.470217+0000 mgr.y (mgr.24422) 364 : cluster [DBG] pgmap v589: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 894 B/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-10T10:22:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:57 
vm04 bash[28289]: cluster 2026-03-10T10:22:56.470217+0000 mgr.y (mgr.24422) 364 : cluster [DBG] pgmap v589: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 894 B/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-10T10:22:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:57 vm04 bash[20742]: cluster 2026-03-10T10:22:56.470217+0000 mgr.y (mgr.24422) 364 : cluster [DBG] pgmap v589: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 894 B/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-10T10:22:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:57 vm04 bash[20742]: cluster 2026-03-10T10:22:56.470217+0000 mgr.y (mgr.24422) 364 : cluster [DBG] pgmap v589: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 894 B/s rd, 1.9 KiB/s wr, 4 op/s 2026-03-10T10:22:59.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:22:58 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:22:59.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:58 vm07 bash[23367]: audit 2026-03-10T10:22:57.972618+0000 mon.a (mon.0) 2641 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:59.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:58 vm07 bash[23367]: audit 2026-03-10T10:22:57.972618+0000 mon.a (mon.0) 2641 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:58 vm04 bash[28289]: audit 2026-03-10T10:22:57.972618+0000 mon.a (mon.0) 2641 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:58 vm04 bash[28289]: audit 2026-03-10T10:22:57.972618+0000 mon.a (mon.0) 2641 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:58 vm04 bash[20742]: audit 2026-03-10T10:22:57.972618+0000 mon.a (mon.0) 2641 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:22:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:58 vm04 bash[20742]: audit 2026-03-10T10:22:57.972618+0000 mon.a (mon.0) 2641 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:59 vm04 bash[28289]: cluster 2026-03-10T10:22:58.470825+0000 mgr.y (mgr.24422) 365 : cluster [DBG] pgmap v590: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-10T10:23:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:59 vm04 bash[28289]: cluster 2026-03-10T10:22:58.470825+0000 mgr.y (mgr.24422) 365 : cluster [DBG] pgmap v590: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-10T10:23:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:59 vm04 bash[28289]: audit 2026-03-10T10:22:58.543429+0000 mgr.y (mgr.24422) 366 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:22:59 vm04 bash[28289]: audit 2026-03-10T10:22:58.543429+0000 mgr.y (mgr.24422) 366 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:59 vm04 bash[20742]: cluster 2026-03-10T10:22:58.470825+0000 mgr.y (mgr.24422) 365 : cluster [DBG] pgmap v590: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-10T10:23:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:59 vm04 bash[20742]: cluster 2026-03-10T10:22:58.470825+0000 mgr.y (mgr.24422) 365 : cluster [DBG] pgmap v590: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-10T10:23:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:59 vm04 bash[20742]: audit 2026-03-10T10:22:58.543429+0000 mgr.y (mgr.24422) 366 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:22:59 vm04 bash[20742]: audit 2026-03-10T10:22:58.543429+0000 mgr.y (mgr.24422) 366 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:59 vm07 bash[23367]: cluster 2026-03-10T10:22:58.470825+0000 mgr.y (mgr.24422) 365 : cluster [DBG] pgmap v590: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-10T10:23:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:59 vm07 bash[23367]: cluster 2026-03-10T10:22:58.470825+0000 mgr.y (mgr.24422) 365 : cluster [DBG] pgmap v590: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1.6 KiB/s wr, 3 op/s 2026-03-10T10:23:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:59 vm07 bash[23367]: audit 2026-03-10T10:22:58.543429+0000 mgr.y (mgr.24422) 366 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:22:59 vm07 bash[23367]: audit 2026-03-10T10:22:58.543429+0000 mgr.y (mgr.24422) 366 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:02 vm04 bash[28289]: cluster 2026-03-10T10:23:00.471526+0000 mgr.y (mgr.24422) 367 : cluster [DBG] pgmap v591: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T10:23:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:02 vm04 bash[28289]: cluster 2026-03-10T10:23:00.471526+0000 mgr.y (mgr.24422) 367 : cluster [DBG] pgmap v591: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T10:23:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:02 vm04 bash[20742]: cluster 2026-03-10T10:23:00.471526+0000 mgr.y (mgr.24422) 367 : cluster [DBG] pgmap v591: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 
GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T10:23:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:02 vm04 bash[20742]: cluster 2026-03-10T10:23:00.471526+0000 mgr.y (mgr.24422) 367 : cluster [DBG] pgmap v591: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T10:23:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:02 vm07 bash[23367]: cluster 2026-03-10T10:23:00.471526+0000 mgr.y (mgr.24422) 367 : cluster [DBG] pgmap v591: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T10:23:02.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:02 vm07 bash[23367]: cluster 2026-03-10T10:23:00.471526+0000 mgr.y (mgr.24422) 367 : cluster [DBG] pgmap v591: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T10:23:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:23:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:23:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:23:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:04 vm04 bash[28289]: cluster 2026-03-10T10:23:02.471910+0000 mgr.y (mgr.24422) 368 : cluster [DBG] pgmap v592: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T10:23:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:04 vm04 bash[28289]: cluster 2026-03-10T10:23:02.471910+0000 mgr.y (mgr.24422) 368 : cluster [DBG] pgmap v592: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T10:23:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:04 vm04 bash[20742]: cluster 2026-03-10T10:23:02.471910+0000 mgr.y (mgr.24422) 368 : cluster [DBG] pgmap v592: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T10:23:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:04 vm04 bash[20742]: cluster 2026-03-10T10:23:02.471910+0000 mgr.y (mgr.24422) 368 : cluster [DBG] pgmap v592: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T10:23:04.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:04 vm07 bash[23367]: cluster 2026-03-10T10:23:02.471910+0000 mgr.y (mgr.24422) 368 : cluster [DBG] pgmap v592: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T10:23:04.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:04 vm07 bash[23367]: cluster 2026-03-10T10:23:02.471910+0000 mgr.y (mgr.24422) 368 : cluster [DBG] pgmap v592: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s 2026-03-10T10:23:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:06 vm04 bash[28289]: cluster 2026-03-10T10:23:04.472711+0000 mgr.y (mgr.24422) 369 : cluster [DBG] pgmap v593: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T10:23:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:06 vm04 bash[28289]: cluster 2026-03-10T10:23:04.472711+0000 mgr.y (mgr.24422) 369 : cluster [DBG] pgmap v593: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.3 
KiB/s wr, 4 op/s 2026-03-10T10:23:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:06 vm04 bash[20742]: cluster 2026-03-10T10:23:04.472711+0000 mgr.y (mgr.24422) 369 : cluster [DBG] pgmap v593: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T10:23:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:06 vm04 bash[20742]: cluster 2026-03-10T10:23:04.472711+0000 mgr.y (mgr.24422) 369 : cluster [DBG] pgmap v593: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T10:23:06.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:06 vm07 bash[23367]: cluster 2026-03-10T10:23:04.472711+0000 mgr.y (mgr.24422) 369 : cluster [DBG] pgmap v593: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T10:23:06.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:06 vm07 bash[23367]: cluster 2026-03-10T10:23:04.472711+0000 mgr.y (mgr.24422) 369 : cluster [DBG] pgmap v593: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 4 op/s 2026-03-10T10:23:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:08 vm04 bash[28289]: cluster 2026-03-10T10:23:06.473064+0000 mgr.y (mgr.24422) 370 : cluster [DBG] pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:08 vm04 bash[28289]: cluster 2026-03-10T10:23:06.473064+0000 mgr.y (mgr.24422) 370 : cluster [DBG] pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:08 vm04 bash[20742]: cluster 2026-03-10T10:23:06.473064+0000 mgr.y (mgr.24422) 370 : cluster [DBG] pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:08 vm04 bash[20742]: cluster 2026-03-10T10:23:06.473064+0000 mgr.y (mgr.24422) 370 : cluster [DBG] pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:08.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:08 vm07 bash[23367]: cluster 2026-03-10T10:23:06.473064+0000 mgr.y (mgr.24422) 370 : cluster [DBG] pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:08.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:08 vm07 bash[23367]: cluster 2026-03-10T10:23:06.473064+0000 mgr.y (mgr.24422) 370 : cluster [DBG] pgmap v594: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:09.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:23:08 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:23:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:10 vm04 bash[28289]: cluster 2026-03-10T10:23:08.473626+0000 mgr.y (mgr.24422) 371 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:10 
vm04 bash[28289]: cluster 2026-03-10T10:23:08.473626+0000 mgr.y (mgr.24422) 371 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:10 vm04 bash[28289]: audit 2026-03-10T10:23:08.550959+0000 mgr.y (mgr.24422) 372 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:10 vm04 bash[28289]: audit 2026-03-10T10:23:08.550959+0000 mgr.y (mgr.24422) 372 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:10 vm04 bash[20742]: cluster 2026-03-10T10:23:08.473626+0000 mgr.y (mgr.24422) 371 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:10 vm04 bash[20742]: cluster 2026-03-10T10:23:08.473626+0000 mgr.y (mgr.24422) 371 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:10 vm04 bash[20742]: audit 2026-03-10T10:23:08.550959+0000 mgr.y (mgr.24422) 372 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:10 vm04 bash[20742]: audit 2026-03-10T10:23:08.550959+0000 mgr.y (mgr.24422) 372 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:10 vm07 bash[23367]: cluster 2026-03-10T10:23:08.473626+0000 mgr.y (mgr.24422) 371 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:10 vm07 bash[23367]: cluster 2026-03-10T10:23:08.473626+0000 mgr.y (mgr.24422) 371 : cluster [DBG] pgmap v595: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:10 vm07 bash[23367]: audit 2026-03-10T10:23:08.550959+0000 mgr.y (mgr.24422) 372 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:10 vm07 bash[23367]: audit 2026-03-10T10:23:08.550959+0000 mgr.y (mgr.24422) 372 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:12 vm04 bash[28289]: cluster 2026-03-10T10:23:10.474323+0000 mgr.y (mgr.24422) 373 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:12 vm04 bash[28289]: cluster 
2026-03-10T10:23:10.474323+0000 mgr.y (mgr.24422) 373 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:12 vm04 bash[28289]: audit 2026-03-10T10:23:11.705215+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:12 vm04 bash[28289]: audit 2026-03-10T10:23:11.705215+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:12 vm04 bash[28289]: audit 2026-03-10T10:23:11.705456+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-71"}]: dispatch 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:12 vm04 bash[28289]: audit 2026-03-10T10:23:11.705456+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-71"}]: dispatch 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:12 vm04 bash[20742]: cluster 2026-03-10T10:23:10.474323+0000 mgr.y (mgr.24422) 373 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:12 vm04 bash[20742]: cluster 2026-03-10T10:23:10.474323+0000 mgr.y (mgr.24422) 373 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:12 vm04 bash[20742]: audit 2026-03-10T10:23:11.705215+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:12 vm04 bash[20742]: audit 2026-03-10T10:23:11.705215+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:12 vm04 bash[20742]: audit 2026-03-10T10:23:11.705456+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-71"}]: dispatch 2026-03-10T10:23:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:12 vm04 bash[20742]: audit 2026-03-10T10:23:11.705456+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-71"}]: dispatch 2026-03-10T10:23:12.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:12 vm07 bash[23367]: cluster 2026-03-10T10:23:10.474323+0000 mgr.y (mgr.24422) 373 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:12.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:12 vm07 bash[23367]: cluster 2026-03-10T10:23:10.474323+0000 mgr.y (mgr.24422) 373 : cluster [DBG] pgmap v596: 292 pgs: 292 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s 2026-03-10T10:23:12.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:12 vm07 bash[23367]: audit 2026-03-10T10:23:11.705215+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:12.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:12 vm07 bash[23367]: audit 2026-03-10T10:23:11.705215+0000 mon.a (mon.0) 2642 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:12.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:12 vm07 bash[23367]: audit 2026-03-10T10:23:11.705456+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-71"}]: dispatch 2026-03-10T10:23:12.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:12 vm07 bash[23367]: audit 2026-03-10T10:23:11.705456+0000 mon.a (mon.0) 2643 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-71"}]: dispatch 2026-03-10T10:23:13.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:13 vm04 bash[28289]: cluster 2026-03-10T10:23:12.129785+0000 mon.a (mon.0) 2644 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-10T10:23:13.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:13 vm04 bash[28289]: cluster 2026-03-10T10:23:12.129785+0000 mon.a (mon.0) 2644 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-10T10:23:13.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:13 vm04 bash[28289]: audit 2026-03-10T10:23:12.978279+0000 mon.a (mon.0) 2645 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:13.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:13 vm04 bash[28289]: audit 2026-03-10T10:23:12.978279+0000 mon.a (mon.0) 2645 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:13.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:13 vm04 bash[20742]: cluster 2026-03-10T10:23:12.129785+0000 mon.a (mon.0) 2644 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-10T10:23:13.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:13 vm04 bash[20742]: cluster 2026-03-10T10:23:12.129785+0000 mon.a (mon.0) 2644 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-10T10:23:13.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:13 vm04 bash[20742]: audit 2026-03-10T10:23:12.978279+0000 mon.a (mon.0) 2645 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:13.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:13 vm04 bash[20742]: audit 2026-03-10T10:23:12.978279+0000 mon.a (mon.0) 2645 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:23:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:23:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:23:13.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:13 vm07 bash[23367]: cluster 2026-03-10T10:23:12.129785+0000 mon.a (mon.0) 2644 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-10T10:23:13.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:13 vm07 bash[23367]: cluster 2026-03-10T10:23:12.129785+0000 mon.a (mon.0) 2644 : cluster [DBG] osdmap e404: 8 total, 8 up, 8 in 2026-03-10T10:23:13.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:13 vm07 bash[23367]: audit 2026-03-10T10:23:12.978279+0000 mon.a (mon.0) 2645 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:13.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:13 vm07 bash[23367]: audit 2026-03-10T10:23:12.978279+0000 mon.a (mon.0) 2645 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:14 vm04 bash[28289]: cluster 2026-03-10T10:23:12.474663+0000 mgr.y (mgr.24422) 374 : cluster [DBG] pgmap 
v598: 260 pgs: 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:14 vm04 bash[28289]: cluster 2026-03-10T10:23:12.474663+0000 mgr.y (mgr.24422) 374 : cluster [DBG] pgmap v598: 260 pgs: 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:14 vm04 bash[28289]: cluster 2026-03-10T10:23:13.166764+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:14 vm04 bash[28289]: cluster 2026-03-10T10:23:13.166764+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:14 vm04 bash[28289]: audit 2026-03-10T10:23:13.169360+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:14 vm04 bash[28289]: audit 2026-03-10T10:23:13.169360+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:14 vm04 bash[20742]: cluster 2026-03-10T10:23:12.474663+0000 mgr.y (mgr.24422) 374 : cluster [DBG] pgmap v598: 260 pgs: 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:14 vm04 bash[20742]: cluster 2026-03-10T10:23:12.474663+0000 mgr.y (mgr.24422) 374 : cluster [DBG] pgmap v598: 260 pgs: 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:14 vm04 bash[20742]: cluster 2026-03-10T10:23:13.166764+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:14 vm04 bash[20742]: cluster 2026-03-10T10:23:13.166764+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:14 vm04 bash[20742]: audit 2026-03-10T10:23:13.169360+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:14 vm04 bash[20742]: audit 2026-03-10T10:23:13.169360+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:14 vm07 bash[23367]: cluster 2026-03-10T10:23:12.474663+0000 mgr.y (mgr.24422) 374 : cluster [DBG] pgmap v598: 260 pgs: 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T10:23:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:14 vm07 bash[23367]: cluster 2026-03-10T10:23:12.474663+0000 mgr.y (mgr.24422) 374 : cluster [DBG] pgmap v598: 260 pgs: 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 921 B/s rd, 0 op/s 2026-03-10T10:23:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:14 vm07 bash[23367]: cluster 2026-03-10T10:23:13.166764+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-10T10:23:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:14 vm07 bash[23367]: cluster 2026-03-10T10:23:13.166764+0000 mon.a (mon.0) 2646 : cluster [DBG] osdmap e405: 8 total, 8 up, 8 in 2026-03-10T10:23:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:14 vm07 bash[23367]: audit 2026-03-10T10:23:13.169360+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:14 vm07 bash[23367]: audit 2026-03-10T10:23:13.169360+0000 mon.a (mon.0) 2647 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:15 vm04 bash[28289]: audit 2026-03-10T10:23:14.149917+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:15 vm04 bash[28289]: audit 2026-03-10T10:23:14.149917+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:15 vm04 bash[28289]: cluster 2026-03-10T10:23:14.154123+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:15 vm04 bash[28289]: cluster 2026-03-10T10:23:14.154123+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:15 vm04 bash[28289]: audit 2026-03-10T10:23:14.160380+0000 mon.a (mon.0) 2650 : audit [DBG] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:15 vm04 bash[28289]: audit 2026-03-10T10:23:14.160380+0000 mon.a (mon.0) 2650 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:15 vm04 bash[20742]: audit 2026-03-10T10:23:14.149917+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:15 vm04 bash[20742]: audit 2026-03-10T10:23:14.149917+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:15 vm04 bash[20742]: cluster 2026-03-10T10:23:14.154123+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:15 vm04 bash[20742]: cluster 2026-03-10T10:23:14.154123+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:15 vm04 bash[20742]: audit 2026-03-10T10:23:14.160380+0000 mon.a (mon.0) 2650 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:15 vm04 bash[20742]: audit 2026-03-10T10:23:14.160380+0000 mon.a (mon.0) 2650 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:15.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:15 vm07 bash[23367]: audit 2026-03-10T10:23:14.149917+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:15.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:15 vm07 bash[23367]: audit 2026-03-10T10:23:14.149917+0000 mon.a (mon.0) 2648 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-73","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:15.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:15 vm07 bash[23367]: cluster 2026-03-10T10:23:14.154123+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-10T10:23:15.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:15 vm07 bash[23367]: cluster 2026-03-10T10:23:14.154123+0000 mon.a (mon.0) 2649 : cluster [DBG] osdmap e406: 8 total, 8 up, 8 in 2026-03-10T10:23:15.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:15 vm07 bash[23367]: audit 2026-03-10T10:23:14.160380+0000 mon.a (mon.0) 2650 : audit [DBG] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:15.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:15 vm07 bash[23367]: audit 2026-03-10T10:23:14.160380+0000 mon.a (mon.0) 2650 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:16 vm04 bash[28289]: cluster 2026-03-10T10:23:14.474981+0000 mgr.y (mgr.24422) 375 : cluster [DBG] pgmap v601: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:23:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:16 vm04 bash[28289]: cluster 2026-03-10T10:23:14.474981+0000 mgr.y (mgr.24422) 375 : cluster [DBG] pgmap v601: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:23:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:16 vm04 bash[28289]: cluster 2026-03-10T10:23:15.195677+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-10T10:23:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:16 vm04 bash[28289]: cluster 2026-03-10T10:23:15.195677+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-10T10:23:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:16 vm04 bash[20742]: cluster 2026-03-10T10:23:14.474981+0000 mgr.y (mgr.24422) 375 : cluster [DBG] pgmap v601: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:23:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:16 vm04 bash[20742]: cluster 2026-03-10T10:23:14.474981+0000 mgr.y (mgr.24422) 375 : cluster [DBG] pgmap v601: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:23:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:16 vm04 bash[20742]: cluster 2026-03-10T10:23:15.195677+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-10T10:23:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:16 vm04 bash[20742]: cluster 2026-03-10T10:23:15.195677+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-10T10:23:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:16 vm07 bash[23367]: cluster 2026-03-10T10:23:14.474981+0000 mgr.y (mgr.24422) 375 : cluster [DBG] pgmap v601: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:23:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:16 vm07 bash[23367]: cluster 2026-03-10T10:23:14.474981+0000 mgr.y (mgr.24422) 375 : cluster [DBG] pgmap v601: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:23:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:16 vm07 bash[23367]: cluster 2026-03-10T10:23:15.195677+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-10T10:23:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:16 vm07 bash[23367]: cluster 2026-03-10T10:23:15.195677+0000 mon.a (mon.0) 2651 : cluster [DBG] osdmap e407: 8 total, 8 up, 8 in 2026-03-10T10:23:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:17 vm04 bash[28289]: cluster 2026-03-10T10:23:16.198157+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 
2026-03-10T10:23:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:17 vm04 bash[28289]: cluster 2026-03-10T10:23:16.198157+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-10T10:23:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:17 vm04 bash[20742]: cluster 2026-03-10T10:23:16.198157+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-10T10:23:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:17 vm04 bash[20742]: cluster 2026-03-10T10:23:16.198157+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-10T10:23:17.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:17 vm07 bash[23367]: cluster 2026-03-10T10:23:16.198157+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-10T10:23:17.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:17 vm07 bash[23367]: cluster 2026-03-10T10:23:16.198157+0000 mon.a (mon.0) 2652 : cluster [DBG] osdmap e408: 8 total, 8 up, 8 in 2026-03-10T10:23:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:18 vm07 bash[23367]: cluster 2026-03-10T10:23:16.475309+0000 mgr.y (mgr.24422) 376 : cluster [DBG] pgmap v604: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:23:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:18 vm07 bash[23367]: cluster 2026-03-10T10:23:16.475309+0000 mgr.y (mgr.24422) 376 : cluster [DBG] pgmap v604: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:23:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:18 vm04 bash[28289]: cluster 2026-03-10T10:23:16.475309+0000 mgr.y (mgr.24422) 376 : cluster [DBG] pgmap v604: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:23:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:18 vm04 bash[28289]: cluster 2026-03-10T10:23:16.475309+0000 mgr.y (mgr.24422) 376 : cluster [DBG] pgmap v604: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:23:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:18 vm04 bash[20742]: cluster 2026-03-10T10:23:16.475309+0000 mgr.y (mgr.24422) 376 : cluster [DBG] pgmap v604: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:23:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:18 vm04 bash[20742]: cluster 2026-03-10T10:23:16.475309+0000 mgr.y (mgr.24422) 376 : cluster [DBG] pgmap v604: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:23:19.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:23:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:23:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:20 vm07 bash[23367]: cluster 2026-03-10T10:23:18.475898+0000 mgr.y (mgr.24422) 377 : cluster [DBG] pgmap v605: 292 pgs: 15 creating+peering, 277 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 191 B/s wr, 2 op/s 2026-03-10T10:23:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:20 vm07 bash[23367]: cluster 2026-03-10T10:23:18.475898+0000 mgr.y (mgr.24422) 377 : cluster [DBG] pgmap v605: 292 pgs: 15 
creating+peering, 277 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 191 B/s wr, 2 op/s 2026-03-10T10:23:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:20 vm07 bash[23367]: audit 2026-03-10T10:23:18.552121+0000 mgr.y (mgr.24422) 378 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:20 vm07 bash[23367]: audit 2026-03-10T10:23:18.552121+0000 mgr.y (mgr.24422) 378 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:20 vm04 bash[28289]: cluster 2026-03-10T10:23:18.475898+0000 mgr.y (mgr.24422) 377 : cluster [DBG] pgmap v605: 292 pgs: 15 creating+peering, 277 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 191 B/s wr, 2 op/s 2026-03-10T10:23:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:20 vm04 bash[28289]: cluster 2026-03-10T10:23:18.475898+0000 mgr.y (mgr.24422) 377 : cluster [DBG] pgmap v605: 292 pgs: 15 creating+peering, 277 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 191 B/s wr, 2 op/s 2026-03-10T10:23:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:20 vm04 bash[28289]: audit 2026-03-10T10:23:18.552121+0000 mgr.y (mgr.24422) 378 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:20 vm04 bash[28289]: audit 2026-03-10T10:23:18.552121+0000 mgr.y (mgr.24422) 378 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:20 vm04 bash[20742]: cluster 2026-03-10T10:23:18.475898+0000 mgr.y (mgr.24422) 377 : cluster [DBG] pgmap v605: 292 pgs: 15 creating+peering, 277 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 191 B/s wr, 2 op/s 2026-03-10T10:23:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:20 vm04 bash[20742]: cluster 2026-03-10T10:23:18.475898+0000 mgr.y (mgr.24422) 377 : cluster [DBG] pgmap v605: 292 pgs: 15 creating+peering, 277 active+clean; 8.3 MiB data, 887 MiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 191 B/s wr, 2 op/s 2026-03-10T10:23:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:20 vm04 bash[20742]: audit 2026-03-10T10:23:18.552121+0000 mgr.y (mgr.24422) 378 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:20 vm04 bash[20742]: audit 2026-03-10T10:23:18.552121+0000 mgr.y (mgr.24422) 378 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:23:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:22 vm07 bash[23367]: cluster 2026-03-10T10:23:20.476667+0000 mgr.y (mgr.24422) 379 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T10:23:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:22 vm07 bash[23367]: cluster 
2026-03-10T10:23:20.476667+0000 mgr.y (mgr.24422) 379 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T10:23:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:22 vm04 bash[28289]: cluster 2026-03-10T10:23:20.476667+0000 mgr.y (mgr.24422) 379 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T10:23:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:22 vm04 bash[28289]: cluster 2026-03-10T10:23:20.476667+0000 mgr.y (mgr.24422) 379 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T10:23:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:22 vm04 bash[20742]: cluster 2026-03-10T10:23:20.476667+0000 mgr.y (mgr.24422) 379 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T10:23:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:22 vm04 bash[20742]: cluster 2026-03-10T10:23:20.476667+0000 mgr.y (mgr.24422) 379 : cluster [DBG] pgmap v606: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T10:23:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:23:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:23:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:23:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:24 vm07 bash[23367]: cluster 2026-03-10T10:23:22.477002+0000 mgr.y (mgr.24422) 380 : cluster [DBG] pgmap v607: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-10T10:23:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:24 vm07 bash[23367]: cluster 2026-03-10T10:23:22.477002+0000 mgr.y (mgr.24422) 380 : cluster [DBG] pgmap v607: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-10T10:23:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:24 vm04 bash[28289]: cluster 2026-03-10T10:23:22.477002+0000 mgr.y (mgr.24422) 380 : cluster [DBG] pgmap v607: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-10T10:23:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:24 vm04 bash[28289]: cluster 2026-03-10T10:23:22.477002+0000 mgr.y (mgr.24422) 380 : cluster [DBG] pgmap v607: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-10T10:23:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:24 vm04 bash[20742]: cluster 2026-03-10T10:23:22.477002+0000 mgr.y (mgr.24422) 380 : cluster [DBG] pgmap v607: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-10T10:23:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:24 vm04 bash[20742]: cluster 2026-03-10T10:23:22.477002+0000 mgr.y (mgr.24422) 380 : cluster [DBG] pgmap v607: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 895 B/s wr, 3 op/s 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:26 vm04 bash[28289]: cluster 2026-03-10T10:23:24.477767+0000 mgr.y 
(mgr.24422) 381 : cluster [DBG] pgmap v608: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 771 B/s wr, 3 op/s 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:26 vm04 bash[28289]: cluster 2026-03-10T10:23:24.477767+0000 mgr.y (mgr.24422) 381 : cluster [DBG] pgmap v608: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 771 B/s wr, 3 op/s 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:26 vm04 bash[28289]: audit 2026-03-10T10:23:26.023331+0000 mon.a (mon.0) 2653 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:26 vm04 bash[28289]: audit 2026-03-10T10:23:26.023331+0000 mon.a (mon.0) 2653 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:26 vm04 bash[28289]: audit 2026-03-10T10:23:26.245279+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:26 vm04 bash[28289]: audit 2026-03-10T10:23:26.245279+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:26 vm04 bash[28289]: audit 2026-03-10T10:23:26.245579+0000 mon.a (mon.0) 2655 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-73"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:26 vm04 bash[28289]: audit 2026-03-10T10:23:26.245579+0000 mon.a (mon.0) 2655 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-73"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:26 vm04 bash[20742]: cluster 2026-03-10T10:23:24.477767+0000 mgr.y (mgr.24422) 381 : cluster [DBG] pgmap v608: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 771 B/s wr, 3 op/s 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:26 vm04 bash[20742]: cluster 2026-03-10T10:23:24.477767+0000 mgr.y (mgr.24422) 381 : cluster [DBG] pgmap v608: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 771 B/s wr, 3 op/s 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:26 vm04 bash[20742]: audit 2026-03-10T10:23:26.023331+0000 mon.a (mon.0) 2653 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:26 vm04 bash[20742]: audit 2026-03-10T10:23:26.023331+0000 mon.a (mon.0) 2653 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:26 vm04 bash[20742]: audit 2026-03-10T10:23:26.245279+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:26 vm04 bash[20742]: audit 2026-03-10T10:23:26.245279+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:26 vm04 bash[20742]: audit 2026-03-10T10:23:26.245579+0000 mon.a (mon.0) 2655 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-73"}]: dispatch 2026-03-10T10:23:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:26 vm04 bash[20742]: audit 2026-03-10T10:23:26.245579+0000 mon.a (mon.0) 2655 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-73"}]: dispatch 2026-03-10T10:23:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:26 vm07 bash[23367]: cluster 2026-03-10T10:23:24.477767+0000 mgr.y (mgr.24422) 381 : cluster [DBG] pgmap v608: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 771 B/s wr, 3 op/s 2026-03-10T10:23:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:26 vm07 bash[23367]: cluster 2026-03-10T10:23:24.477767+0000 mgr.y (mgr.24422) 381 : cluster [DBG] pgmap v608: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 771 B/s wr, 3 op/s 2026-03-10T10:23:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:26 vm07 bash[23367]: audit 2026-03-10T10:23:26.023331+0000 mon.a (mon.0) 2653 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:26 vm07 bash[23367]: audit 2026-03-10T10:23:26.023331+0000 mon.a (mon.0) 2653 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:23:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:26 vm07 bash[23367]: audit 2026-03-10T10:23:26.245279+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:26 vm07 bash[23367]: audit 2026-03-10T10:23:26.245279+0000 mon.a (mon.0) 2654 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:23:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:26 vm07 bash[23367]: audit 2026-03-10T10:23:26.245579+0000 mon.a (mon.0) 2655 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-73"}]: dispatch 2026-03-10T10:23:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:26 vm07 bash[23367]: audit 2026-03-10T10:23:26.245579+0000 mon.a (mon.0) 2655 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-73"}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: cluster 2026-03-10T10:23:26.270230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: cluster 2026-03-10T10:23:26.270230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: audit 2026-03-10T10:23:26.380715+0000 mon.a (mon.0) 2657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: audit 2026-03-10T10:23:26.380715+0000 mon.a (mon.0) 2657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: audit 2026-03-10T10:23:26.381228+0000 mon.a (mon.0) 2658 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: audit 2026-03-10T10:23:26.381228+0000 mon.a (mon.0) 2658 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: audit 2026-03-10T10:23:26.385907+0000 mon.a (mon.0) 2659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: audit 2026-03-10T10:23:26.385907+0000 mon.a (mon.0) 2659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: cluster 2026-03-10T10:23:27.273669+0000 mon.a (mon.0) 2660 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: cluster 2026-03-10T10:23:27.273669+0000 mon.a (mon.0) 2660 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: audit 2026-03-10T10:23:27.279165+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:27 vm04 bash[28289]: audit 2026-03-10T10:23:27.279165+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: cluster 2026-03-10T10:23:26.270230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: cluster 2026-03-10T10:23:26.270230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: audit 2026-03-10T10:23:26.380715+0000 mon.a (mon.0) 2657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: audit 2026-03-10T10:23:26.380715+0000 mon.a (mon.0) 2657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: audit 2026-03-10T10:23:26.381228+0000 mon.a (mon.0) 2658 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: audit 2026-03-10T10:23:26.381228+0000 mon.a (mon.0) 2658 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: audit 2026-03-10T10:23:26.385907+0000 mon.a (mon.0) 2659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: audit 2026-03-10T10:23:26.385907+0000 mon.a (mon.0) 2659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: cluster 2026-03-10T10:23:27.273669+0000 mon.a (mon.0) 2660 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: cluster 2026-03-10T10:23:27.273669+0000 mon.a (mon.0) 2660 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: audit 2026-03-10T10:23:27.279165+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:27 vm04 bash[20742]: audit 2026-03-10T10:23:27.279165+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: cluster 2026-03-10T10:23:26.270230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-10T10:23:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: cluster 2026-03-10T10:23:26.270230+0000 mon.a (mon.0) 2656 : cluster [DBG] osdmap e409: 8 total, 8 up, 8 in 2026-03-10T10:23:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: audit 2026-03-10T10:23:26.380715+0000 mon.a (mon.0) 2657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: audit 2026-03-10T10:23:26.380715+0000 mon.a (mon.0) 2657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:23:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: audit 2026-03-10T10:23:26.381228+0000 mon.a (mon.0) 2658 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: audit 2026-03-10T10:23:26.381228+0000 mon.a (mon.0) 2658 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:23:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: audit 2026-03-10T10:23:26.385907+0000 mon.a (mon.0) 2659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:23:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: audit 2026-03-10T10:23:26.385907+0000 mon.a (mon.0) 2659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:23:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: cluster 2026-03-10T10:23:27.273669+0000 mon.a (mon.0) 2660 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-10T10:23:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: cluster 2026-03-10T10:23:27.273669+0000 mon.a (mon.0) 2660 : cluster [DBG] osdmap e410: 8 total, 8 up, 8 in 2026-03-10T10:23:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: audit 2026-03-10T10:23:27.279165+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:27.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:27 vm07 bash[23367]: audit 2026-03-10T10:23:27.279165+0000 mon.a (mon.0) 2661 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:28 vm04 bash[28289]: cluster 2026-03-10T10:23:26.478112+0000 mgr.y (mgr.24422) 382 : cluster [DBG] pgmap v610: 260 pgs: 260 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:28 vm04 bash[28289]: cluster 2026-03-10T10:23:26.478112+0000 mgr.y (mgr.24422) 382 : cluster [DBG] pgmap v610: 260 pgs: 260 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:28 vm04 bash[28289]: audit 2026-03-10T10:23:27.985154+0000 mon.a (mon.0) 2662 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:28 vm04 bash[28289]: audit 2026-03-10T10:23:27.985154+0000 mon.a (mon.0) 2662 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:28 vm04 bash[28289]: audit 2026-03-10T10:23:28.274860+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:28 vm04 bash[28289]: audit 2026-03-10T10:23:28.274860+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:28 vm04 bash[28289]: cluster 2026-03-10T10:23:28.284584+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:28 vm04 bash[28289]: cluster 2026-03-10T10:23:28.284584+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:28 vm04 bash[20742]: cluster 2026-03-10T10:23:26.478112+0000 mgr.y (mgr.24422) 382 : cluster [DBG] pgmap v610: 260 pgs: 260 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:28 vm04 bash[20742]: cluster 2026-03-10T10:23:26.478112+0000 mgr.y (mgr.24422) 382 : cluster [DBG] pgmap v610: 260 pgs: 260 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:28 vm04 bash[20742]: audit 2026-03-10T10:23:27.985154+0000 mon.a (mon.0) 2662 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:28 vm04 bash[20742]: audit 2026-03-10T10:23:27.985154+0000 mon.a (mon.0) 2662 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:28 vm04 bash[20742]: audit 2026-03-10T10:23:28.274860+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:28 vm04 bash[20742]: audit 2026-03-10T10:23:28.274860+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:28 vm04 bash[20742]: cluster 2026-03-10T10:23:28.284584+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-10T10:23:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:28 vm04 bash[20742]: cluster 2026-03-10T10:23:28.284584+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-10T10:23:28.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:23:28 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:23:28.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:28 vm07 bash[23367]: cluster 2026-03-10T10:23:26.478112+0000 mgr.y (mgr.24422) 382 : cluster [DBG] pgmap v610: 260 pgs: 260 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T10:23:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:28 vm07 bash[23367]: cluster 2026-03-10T10:23:26.478112+0000 mgr.y (mgr.24422) 382 : cluster [DBG] pgmap v610: 260 pgs: 260 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 307 B/s wr, 1 op/s 2026-03-10T10:23:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:28 vm07 bash[23367]: audit 2026-03-10T10:23:27.985154+0000 mon.a (mon.0) 2662 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:28 vm07 bash[23367]: audit 2026-03-10T10:23:27.985154+0000 mon.a (mon.0) 2662 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:28 vm07 bash[23367]: audit 2026-03-10T10:23:28.274860+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:28 vm07 bash[23367]: audit 2026-03-10T10:23:28.274860+0000 mon.a (mon.0) 2663 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-75","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:23:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:28 vm07 bash[23367]: cluster 2026-03-10T10:23:28.284584+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-10T10:23:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:28 vm07 bash[23367]: cluster 2026-03-10T10:23:28.284584+0000 mon.a (mon.0) 2664 : cluster [DBG] osdmap e411: 8 total, 8 up, 8 in 2026-03-10T10:23:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:29 vm04 bash[28289]: audit 2026-03-10T10:23:28.312771+0000 mon.a (mon.0) 2665 : audit [DBG] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:29 vm04 bash[28289]: audit 2026-03-10T10:23:28.312771+0000 mon.a (mon.0) 2665 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:29 vm04 bash[28289]: cluster 2026-03-10T10:23:29.282567+0000 mon.a (mon.0) 2666 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-10T10:23:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:29 vm04 bash[28289]: cluster 2026-03-10T10:23:29.282567+0000 mon.a (mon.0) 2666 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-10T10:23:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:29 vm04 bash[20742]: audit 2026-03-10T10:23:28.312771+0000 mon.a (mon.0) 2665 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:29 vm04 bash[20742]: audit 2026-03-10T10:23:28.312771+0000 mon.a (mon.0) 2665 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:29 vm04 bash[20742]: cluster 2026-03-10T10:23:29.282567+0000 mon.a (mon.0) 2666 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-10T10:23:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:29 vm04 bash[20742]: cluster 2026-03-10T10:23:29.282567+0000 mon.a (mon.0) 2666 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in 2026-03-10T10:23:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:29 vm07 bash[23367]: audit 2026-03-10T10:23:28.312771+0000 mon.a (mon.0) 2665 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:23:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:29 vm07 bash[23367]: audit 2026-03-10T10:23:28.312771+0000 mon.a (mon.0) 2665 : audit [DBG] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:23:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:29 vm07 bash[23367]: cluster 2026-03-10T10:23:29.282567+0000 mon.a (mon.0) 2666 : cluster [DBG] osdmap e412: 8 total, 8 up, 8 in
2026-03-10T10:23:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:30 vm04 bash[28289]: cluster 2026-03-10T10:23:28.478773+0000 mgr.y (mgr.24422) 383 : cluster [DBG] pgmap v613: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:23:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:30 vm04 bash[28289]: audit 2026-03-10T10:23:28.560450+0000 mgr.y (mgr.24422) 384 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:23:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:30 vm04 bash[28289]: cluster 2026-03-10T10:23:30.285786+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in
2026-03-10T10:23:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:30 vm04 bash[20742]: cluster 2026-03-10T10:23:28.478773+0000 mgr.y (mgr.24422) 383 : cluster [DBG] pgmap v613: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:23:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:30 vm04 bash[20742]: audit 2026-03-10T10:23:28.560450+0000 mgr.y (mgr.24422) 384 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:23:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:30 vm04 bash[20742]: cluster 2026-03-10T10:23:30.285786+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in
2026-03-10T10:23:30.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:30 vm07 bash[23367]: cluster 2026-03-10T10:23:28.478773+0000 mgr.y (mgr.24422) 383 : cluster [DBG] pgmap v613: 292 pgs: 20 unknown, 272 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:23:30.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:30 vm07 bash[23367]: audit 2026-03-10T10:23:28.560450+0000 mgr.y (mgr.24422) 384 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:23:30.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:30 vm07 bash[23367]: cluster 2026-03-10T10:23:30.285786+0000 mon.a (mon.0) 2667 : cluster [DBG] osdmap e413: 8 total, 8 up, 8 in
2026-03-10T10:23:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:31 vm04 bash[28289]: audit 2026-03-10T10:23:30.349571+0000 mon.a (mon.0) 2668 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:23:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:31 vm04 bash[28289]: audit 2026-03-10T10:23:30.349820+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-75"}]: dispatch
2026-03-10T10:23:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:31 vm04 bash[20742]: audit 2026-03-10T10:23:30.349571+0000 mon.a (mon.0) 2668 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:23:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:31 vm04 bash[20742]: audit 2026-03-10T10:23:30.349820+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-75"}]: dispatch
2026-03-10T10:23:31.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:31 vm07 bash[23367]: audit 2026-03-10T10:23:30.349571+0000 mon.a (mon.0) 2668 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:23:31.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:31 vm07 bash[23367]: audit 2026-03-10T10:23:30.349820+0000 mon.a (mon.0) 2669 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-75"}]: dispatch
2026-03-10T10:23:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:32 vm04 bash[28289]: cluster 2026-03-10T10:23:30.479160+0000 mgr.y (mgr.24422) 385 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:23:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:32 vm04 bash[28289]: cluster 2026-03-10T10:23:31.350690+0000 mon.a (mon.0) 2670 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in
2026-03-10T10:23:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:32 vm04 bash[20742]: cluster 2026-03-10T10:23:30.479160+0000 mgr.y (mgr.24422) 385 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:23:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:32 vm04 bash[20742]: cluster 2026-03-10T10:23:31.350690+0000 mon.a (mon.0) 2670 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in
2026-03-10T10:23:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:32 vm07 bash[23367]: cluster 2026-03-10T10:23:30.479160+0000 mgr.y (mgr.24422) 385 : cluster [DBG] pgmap v616: 292 pgs: 292 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:23:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:32 vm07 bash[23367]: cluster 2026-03-10T10:23:31.350690+0000 mon.a (mon.0) 2670 : cluster [DBG] osdmap e414: 8 total, 8 up, 8 in
2026-03-10T10:23:33.399 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:23:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:23:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:23:33.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:33 vm04 bash[28289]: cluster 2026-03-10T10:23:32.353254+0000 mon.a (mon.0) 2671 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in
2026-03-10T10:23:33.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:33 vm04 bash[28289]: audit 2026-03-10T10:23:32.354216+0000 mon.a (mon.0) 2672 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-77","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:23:33.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:33 vm04 bash[28289]: cluster 2026-03-10T10:23:32.479484+0000 mgr.y (mgr.24422) 386 : cluster [DBG] pgmap v619: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:23:33.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:33 vm04 bash[20742]: cluster 2026-03-10T10:23:32.353254+0000 mon.a (mon.0) 2671 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in
2026-03-10T10:23:33.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:33 vm04 bash[20742]: audit 2026-03-10T10:23:32.354216+0000 mon.a (mon.0) 2672 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-77","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:23:33.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:33 vm04 bash[20742]: cluster 2026-03-10T10:23:32.479484+0000 mgr.y (mgr.24422) 386 : cluster [DBG] pgmap v619: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:23:33.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:33 vm07 bash[23367]: cluster 2026-03-10T10:23:32.353254+0000 mon.a (mon.0) 2671 : cluster [DBG] osdmap e415: 8 total, 8 up, 8 in
2026-03-10T10:23:33.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:33 vm07 bash[23367]: audit 2026-03-10T10:23:32.354216+0000 mon.a (mon.0) 2672 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-77","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:23:33.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:33 vm07 bash[23367]: cluster 2026-03-10T10:23:32.479484+0000 mgr.y (mgr.24422) 386 : cluster [DBG] pgmap v619: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 905 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:23:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:34 vm04 bash[28289]: cluster 2026-03-10T10:23:33.350915+0000 mon.a (mon.0) 2673 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:23:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:34 vm04 bash[28289]: audit 2026-03-10T10:23:33.359461+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-77","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:23:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:34 vm04 bash[28289]: cluster 2026-03-10T10:23:33.362144+0000 mon.a (mon.0) 2675 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in
2026-03-10T10:23:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:34 vm04 bash[28289]: audit 2026-03-10T10:23:33.367534+0000 mon.a (mon.0) 2676 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:23:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:34 vm04 bash[20742]: cluster 2026-03-10T10:23:33.350915+0000 mon.a (mon.0) 2673 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:23:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:34 vm04 bash[20742]: audit 2026-03-10T10:23:33.359461+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-77","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:23:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:34 vm04 bash[20742]: cluster 2026-03-10T10:23:33.362144+0000 mon.a (mon.0) 2675 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in
2026-03-10T10:23:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:34 vm04 bash[20742]: audit 2026-03-10T10:23:33.367534+0000 mon.a (mon.0) 2676 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:23:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:34 vm07 bash[23367]: cluster 2026-03-10T10:23:33.350915+0000 mon.a (mon.0) 2673 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:23:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:34 vm07 bash[23367]: audit 2026-03-10T10:23:33.359461+0000 mon.a (mon.0) 2674 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-77","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:23:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:34 vm07 bash[23367]: cluster 2026-03-10T10:23:33.362144+0000 mon.a (mon.0) 2675 : cluster [DBG] osdmap e416: 8 total, 8 up, 8 in
2026-03-10T10:23:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:34 vm07 bash[23367]: audit 2026-03-10T10:23:33.367534+0000 mon.a (mon.0) 2676 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:23:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:35 vm04 bash[28289]: cluster 2026-03-10T10:23:34.424349+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in
2026-03-10T10:23:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:35 vm04 bash[28289]: audit 2026-03-10T10:23:34.469863+0000 mon.a (mon.0) 2678 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:23:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:35 vm04 bash[28289]: audit 2026-03-10T10:23:34.470077+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-77"}]: dispatch
2026-03-10T10:23:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:35 vm04 bash[28289]: cluster 2026-03-10T10:23:34.479843+0000 mgr.y (mgr.24422) 387 : cluster [DBG] pgmap v622: 292 pgs: 5 creating+activating, 287 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:23:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:35 vm04 bash[20742]: cluster 2026-03-10T10:23:34.424349+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in
2026-03-10T10:23:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:35 vm04 bash[20742]: audit 2026-03-10T10:23:34.469863+0000 mon.a (mon.0) 2678 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:23:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:35 vm04 bash[20742]: audit 2026-03-10T10:23:34.470077+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-77"}]: dispatch
2026-03-10T10:23:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:35 vm04 bash[20742]: cluster 2026-03-10T10:23:34.479843+0000 mgr.y (mgr.24422) 387 : cluster [DBG] pgmap v622: 292 pgs: 5 creating+activating, 287 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:23:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:35 vm07 bash[23367]: cluster 2026-03-10T10:23:34.424349+0000 mon.a (mon.0) 2677 : cluster [DBG] osdmap e417: 8 total, 8 up, 8 in
2026-03-10T10:23:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:35 vm07 bash[23367]: audit 2026-03-10T10:23:34.469863+0000 mon.a (mon.0) 2678 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:23:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:35 vm07 bash[23367]: audit 2026-03-10T10:23:34.470077+0000 mon.a (mon.0) 2679 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-77"}]: dispatch
2026-03-10T10:23:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:35 vm07 bash[23367]: cluster 2026-03-10T10:23:34.479843+0000 mgr.y (mgr.24422) 387 : cluster [DBG] pgmap v622: 292 pgs: 5 creating+activating, 287 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:23:36.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:36 vm07 bash[23367]: cluster 2026-03-10T10:23:35.452808+0000 mon.a (mon.0) 2680 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in
2026-03-10T10:23:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:36 vm04 bash[28289]: cluster 2026-03-10T10:23:35.452808+0000 mon.a (mon.0) 2680 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in
2026-03-10T10:23:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:36 vm04 bash[20742]: cluster 2026-03-10T10:23:35.452808+0000 mon.a (mon.0) 2680 : cluster [DBG] osdmap e418: 8 total, 8 up, 8 in
2026-03-10T10:23:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:37 vm04 bash[28289]: cluster 2026-03-10T10:23:36.480291+0000 mgr.y (mgr.24422) 388 : cluster [DBG] pgmap v624: 260 pgs: 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 743 B/s wr, 2 op/s
2026-03-10T10:23:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:37 vm04 bash[28289]: cluster 2026-03-10T10:23:36.496343+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in
2026-03-10T10:23:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:37 vm04 bash[28289]: audit 2026-03-10T10:23:36.503707+0000 mon.a (mon.0) 2682 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-79","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:23:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:37 vm04 bash[20742]: cluster 2026-03-10T10:23:36.480291+0000 mgr.y (mgr.24422) 388 : cluster [DBG] pgmap v624: 260 pgs: 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 743 B/s wr, 2 op/s
2026-03-10T10:23:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:37 vm04 bash[20742]: cluster 2026-03-10T10:23:36.496343+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in
2026-03-10T10:23:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:37 vm04 bash[20742]: audit 2026-03-10T10:23:36.503707+0000 mon.a (mon.0) 2682 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-79","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:23:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:37 vm07 bash[23367]: cluster 2026-03-10T10:23:36.480291+0000 mgr.y (mgr.24422) 388 : cluster [DBG] pgmap v624: 260 pgs: 260 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 743 B/s wr, 2 op/s
2026-03-10T10:23:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:37 vm07 bash[23367]: cluster 2026-03-10T10:23:36.496343+0000 mon.a (mon.0) 2681 : cluster [DBG] osdmap e419: 8 total, 8 up, 8 in
2026-03-10T10:23:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:37 vm07 bash[23367]: audit 2026-03-10T10:23:36.503707+0000 mon.a (mon.0) 2682 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-79","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:23:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:38 vm04 bash[28289]: audit 2026-03-10T10:23:37.485411+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-79","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:23:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:38 vm04 bash[28289]: cluster 2026-03-10T10:23:37.489602+0000 mon.a (mon.0) 2684 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in
2026-03-10T10:23:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:38 vm04 bash[28289]: audit 2026-03-10T10:23:37.509398+0000 mon.a (mon.0) 2685 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:23:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:38 vm04 bash[28289]: cluster 2026-03-10T10:23:38.492710+0000 mon.a (mon.0) 2686 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in
2026-03-10T10:23:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:38 vm04 bash[20742]: audit 2026-03-10T10:23:37.485411+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-79","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:23:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:38 vm04 bash[20742]: cluster 2026-03-10T10:23:37.489602+0000 mon.a (mon.0) 2684 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in
2026-03-10T10:23:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:38 vm04 bash[20742]: audit 2026-03-10T10:23:37.509398+0000 mon.a (mon.0) 2685 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:23:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:38 vm04 bash[20742]: cluster 2026-03-10T10:23:38.492710+0000 mon.a (mon.0) 2686 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in
2026-03-10T10:23:39.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:23:38 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:23:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:38 vm07 bash[23367]: audit 2026-03-10T10:23:37.485411+0000 mon.a (mon.0) 2683 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-79","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:23:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:38 vm07 bash[23367]: cluster 2026-03-10T10:23:37.489602+0000 mon.a (mon.0) 2684 : cluster [DBG] osdmap e420: 8 total, 8 up, 8 in
2026-03-10T10:23:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:38 vm07 bash[23367]: audit 2026-03-10T10:23:37.509398+0000 mon.a (mon.0) 2685 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:23:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:38 vm07 bash[23367]: cluster 2026-03-10T10:23:38.492710+0000 mon.a (mon.0) 2686 : cluster [DBG] osdmap e421: 8 total, 8 up, 8 in
2026-03-10T10:23:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:39 vm04 bash[28289]: cluster 2026-03-10T10:23:38.480744+0000 mgr.y (mgr.24422) 389 : cluster [DBG] pgmap v627: 292 pgs: 12 unknown, 280 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 252 B/s rd, 0 op/s
2026-03-10T10:23:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:39 vm04 bash[28289]: audit 2026-03-10T10:23:38.570618+0000 mgr.y (mgr.24422) 390 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:23:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:39 vm04 bash[28289]: cluster 2026-03-10T10:23:39.501574+0000 mon.a (mon.0) 2687 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in
2026-03-10T10:23:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:39 vm04 bash[28289]: audit 2026-03-10T10:23:39.526247+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1a"}]: dispatch
2026-03-10T10:23:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:39 vm04 bash[20742]: cluster 2026-03-10T10:23:38.480744+0000 mgr.y (mgr.24422) 389 : cluster [DBG] pgmap v627: 292 pgs: 12 unknown, 280 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 252 B/s rd, 0 op/s
2026-03-10T10:23:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:39 vm04 bash[20742]: audit 2026-03-10T10:23:38.570618+0000 mgr.y (mgr.24422) 390 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:23:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:39 vm04 bash[20742]: cluster 2026-03-10T10:23:39.501574+0000 mon.a (mon.0) 2687 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in
2026-03-10T10:23:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:39 vm04 bash[20742]: audit 2026-03-10T10:23:39.526247+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1a"}]: dispatch
2026-03-10T10:23:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:39 vm07 bash[23367]: cluster 2026-03-10T10:23:38.480744+0000 mgr.y (mgr.24422) 389 : cluster [DBG] pgmap v627: 292 pgs: 12 unknown, 280 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 252 B/s rd, 0 op/s
2026-03-10T10:23:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:39 vm07 bash[23367]: audit 2026-03-10T10:23:38.570618+0000 mgr.y (mgr.24422) 390 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:23:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:39 vm07 bash[23367]: cluster 2026-03-10T10:23:39.501574+0000 mon.a (mon.0) 2687 : cluster [DBG] osdmap e422: 8 total, 8 up, 8 in
2026-03-10T10:23:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:39 vm07 bash[23367]: audit 2026-03-10T10:23:39.526247+0000 mon.a (mon.0) 2688 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1a"}]: dispatch
2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:40 vm04 bash[28289]: audit 2026-03-10T10:23:39.526390+0000 mgr.y (mgr.24422) 391 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1a"}]: dispatch
cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1a"}]: dispatch 2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:40 vm04 bash[28289]: cluster 2026-03-10T10:23:39.582239+0000 osd.4 (osd.4) 11 : cluster [DBG] 297.1a deep-scrub starts 2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:40 vm04 bash[28289]: cluster 2026-03-10T10:23:39.582239+0000 osd.4 (osd.4) 11 : cluster [DBG] 297.1a deep-scrub starts 2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:40 vm04 bash[28289]: cluster 2026-03-10T10:23:39.583340+0000 osd.4 (osd.4) 12 : cluster [DBG] 297.1a deep-scrub ok 2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:40 vm04 bash[28289]: cluster 2026-03-10T10:23:39.583340+0000 osd.4 (osd.4) 12 : cluster [DBG] 297.1a deep-scrub ok 2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:40 vm04 bash[20742]: audit 2026-03-10T10:23:39.526390+0000 mgr.y (mgr.24422) 391 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1a"}]: dispatch 2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:40 vm04 bash[20742]: audit 2026-03-10T10:23:39.526390+0000 mgr.y (mgr.24422) 391 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1a"}]: dispatch 2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:40 vm04 bash[20742]: cluster 2026-03-10T10:23:39.582239+0000 osd.4 (osd.4) 11 : cluster [DBG] 297.1a deep-scrub starts 2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:40 vm04 bash[20742]: cluster 2026-03-10T10:23:39.582239+0000 osd.4 (osd.4) 11 : cluster [DBG] 297.1a deep-scrub starts 2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:40 vm04 bash[20742]: cluster 2026-03-10T10:23:39.583340+0000 osd.4 (osd.4) 12 : cluster [DBG] 297.1a deep-scrub ok 2026-03-10T10:23:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:40 vm04 bash[20742]: cluster 2026-03-10T10:23:39.583340+0000 osd.4 (osd.4) 12 : cluster [DBG] 297.1a deep-scrub ok 2026-03-10T10:23:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:40 vm07 bash[23367]: audit 2026-03-10T10:23:39.526390+0000 mgr.y (mgr.24422) 391 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1a"}]: dispatch 2026-03-10T10:23:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:40 vm07 bash[23367]: audit 2026-03-10T10:23:39.526390+0000 mgr.y (mgr.24422) 391 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "pg deep-scrub", "pgid": "297.1a"}]: dispatch 2026-03-10T10:23:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:40 vm07 bash[23367]: cluster 2026-03-10T10:23:39.582239+0000 osd.4 (osd.4) 11 : cluster [DBG] 297.1a deep-scrub starts 2026-03-10T10:23:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:40 vm07 bash[23367]: cluster 2026-03-10T10:23:39.582239+0000 osd.4 (osd.4) 11 : cluster [DBG] 297.1a deep-scrub starts 2026-03-10T10:23:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:40 vm07 bash[23367]: cluster 2026-03-10T10:23:39.583340+0000 osd.4 (osd.4) 12 : cluster [DBG] 297.1a deep-scrub ok 2026-03-10T10:23:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:40 vm07 bash[23367]: cluster 2026-03-10T10:23:39.583340+0000 osd.4 (osd.4) 12 : cluster [DBG] 297.1a deep-scrub ok 2026-03-10T10:23:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:41 vm04 bash[28289]: cluster 2026-03-10T10:23:40.481035+0000 mgr.y (mgr.24422) 392 : cluster [DBG] pgmap v630: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:23:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:41 vm04 bash[28289]: cluster 2026-03-10T10:23:40.481035+0000 mgr.y (mgr.24422) 392 : cluster [DBG] pgmap v630: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:23:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:41 vm04 bash[28289]: cluster 2026-03-10T10:23:41.504499+0000 mon.a (mon.0) 2689 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:23:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:41 vm04 bash[28289]: cluster 2026-03-10T10:23:41.504499+0000 mon.a (mon.0) 2689 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:23:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:41 vm04 bash[20742]: cluster 2026-03-10T10:23:40.481035+0000 mgr.y (mgr.24422) 392 : cluster [DBG] pgmap v630: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:23:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:41 vm04 bash[20742]: cluster 2026-03-10T10:23:40.481035+0000 mgr.y (mgr.24422) 392 : cluster [DBG] pgmap v630: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:23:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:41 vm04 bash[20742]: cluster 2026-03-10T10:23:41.504499+0000 mon.a (mon.0) 2689 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:23:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:41 vm04 bash[20742]: cluster 2026-03-10T10:23:41.504499+0000 mon.a (mon.0) 2689 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:23:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:41 vm07 bash[23367]: cluster 2026-03-10T10:23:40.481035+0000 mgr.y (mgr.24422) 392 : cluster [DBG] pgmap v630: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:23:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:41 vm07 bash[23367]: cluster 2026-03-10T10:23:40.481035+0000 mgr.y (mgr.24422) 392 : cluster 
[DBG] pgmap v630: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:23:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:41 vm07 bash[23367]: cluster 2026-03-10T10:23:41.504499+0000 mon.a (mon.0) 2689 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:23:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:41 vm07 bash[23367]: cluster 2026-03-10T10:23:41.504499+0000 mon.a (mon.0) 2689 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:23:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:23:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:23:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:23:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:43 vm04 bash[28289]: cluster 2026-03-10T10:23:42.481292+0000 mgr.y (mgr.24422) 393 : cluster [DBG] pgmap v631: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 682 B/s wr, 2 op/s 2026-03-10T10:23:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:43 vm04 bash[28289]: cluster 2026-03-10T10:23:42.481292+0000 mgr.y (mgr.24422) 393 : cluster [DBG] pgmap v631: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 682 B/s wr, 2 op/s 2026-03-10T10:23:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:43 vm04 bash[28289]: audit 2026-03-10T10:23:42.990866+0000 mon.a (mon.0) 2690 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:43 vm04 bash[28289]: audit 2026-03-10T10:23:42.990866+0000 mon.a (mon.0) 2690 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:43 vm04 bash[20742]: cluster 2026-03-10T10:23:42.481292+0000 mgr.y (mgr.24422) 393 : cluster [DBG] pgmap v631: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 682 B/s wr, 2 op/s 2026-03-10T10:23:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:43 vm04 bash[20742]: cluster 2026-03-10T10:23:42.481292+0000 mgr.y (mgr.24422) 393 : cluster [DBG] pgmap v631: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 682 B/s wr, 2 op/s 2026-03-10T10:23:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:43 vm04 bash[20742]: audit 2026-03-10T10:23:42.990866+0000 mon.a (mon.0) 2690 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:43 vm04 bash[20742]: audit 2026-03-10T10:23:42.990866+0000 mon.a (mon.0) 2690 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:43 vm07 bash[23367]: cluster 2026-03-10T10:23:42.481292+0000 mgr.y (mgr.24422) 393 : cluster [DBG] pgmap v631: 292 pgs: 292 active+clean; 8.3 MiB data, 924 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 682 B/s wr, 2 op/s 2026-03-10T10:23:44.016 
2026-03-10T10:23:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:43 vm07 bash[23367]: audit 2026-03-10T10:23:42.990866+0000 mon.a (mon.0) 2690 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:23:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:45 vm04 bash[28289]: cluster 2026-03-10T10:23:44.481995+0000 mgr.y (mgr.24422) 394 : cluster [DBG] pgmap v632: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 878 B/s wr, 3 op/s
2026-03-10T10:23:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:45 vm04 bash[20742]: cluster 2026-03-10T10:23:44.481995+0000 mgr.y (mgr.24422) 394 : cluster [DBG] pgmap v632: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 878 B/s wr, 3 op/s
2026-03-10T10:23:46.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:45 vm07 bash[23367]: cluster 2026-03-10T10:23:44.481995+0000 mgr.y (mgr.24422) 394 : cluster [DBG] pgmap v632: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 878 B/s wr, 3 op/s
2026-03-10T10:23:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:47 vm04 bash[28289]: cluster 2026-03-10T10:23:46.482379+0000 mgr.y (mgr.24422) 395 : cluster [DBG] pgmap v633: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:23:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:47 vm04 bash[20742]: cluster 2026-03-10T10:23:46.482379+0000 mgr.y (mgr.24422) 395 : cluster [DBG] pgmap v633: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:23:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:47 vm07 bash[23367]: cluster 2026-03-10T10:23:46.482379+0000 mgr.y (mgr.24422) 395 : cluster [DBG] pgmap v633: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:23:49.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:23:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:23:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:49 vm04 bash[28289]: cluster 2026-03-10T10:23:48.482927+0000 mgr.y (mgr.24422) 396 : cluster [DBG] pgmap v634: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 409 B/s wr, 1 op/s
2026-03-10T10:23:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:49 vm04 bash[28289]: audit 2026-03-10T10:23:48.579387+0000 mgr.y (mgr.24422) 397 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:23:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:49 vm04 bash[20742]: cluster 2026-03-10T10:23:48.482927+0000 mgr.y (mgr.24422) 396 : cluster [DBG] pgmap v634: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 409 B/s wr, 1 op/s
2026-03-10T10:23:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:49 vm04 bash[20742]: audit 2026-03-10T10:23:48.579387+0000 mgr.y (mgr.24422) 397 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:23:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:49 vm07 bash[23367]: cluster 2026-03-10T10:23:48.482927+0000 mgr.y (mgr.24422) 396 : cluster [DBG] pgmap v634: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1 KiB/s rd, 409 B/s wr, 1 op/s
2026-03-10T10:23:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:49 vm07 bash[23367]: audit 2026-03-10T10:23:48.579387+0000 mgr.y (mgr.24422) 397 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:23:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:51 vm04 bash[28289]: cluster 2026-03-10T10:23:50.483512+0000 mgr.y (mgr.24422) 398 : cluster [DBG] pgmap v635: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 931 B/s rd, 186 B/s wr, 1 op/s
2026-03-10T10:23:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:51 vm04 bash[20742]: cluster 2026-03-10T10:23:50.483512+0000 mgr.y (mgr.24422) 398 : cluster [DBG] pgmap v635: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 931 B/s rd, 186 B/s wr, 1 op/s
2026-03-10T10:23:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:51 vm07 bash[23367]: cluster 2026-03-10T10:23:50.483512+0000 mgr.y (mgr.24422) 398 : cluster [DBG] pgmap v635: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 931 B/s rd, 186 B/s wr, 1 op/s
2026-03-10T10:23:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:23:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:23:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:23:53.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:53 vm04 bash[28289]: cluster 2026-03-10T10:23:52.483842+0000 mgr.y (mgr.24422) 399 : cluster [DBG] pgmap v636: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
2026-03-10T10:23:53.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:53 vm04 bash[28289]: cluster 2026-03-10T10:23:52.483842+0000 mgr.y (mgr.24422) 399 : cluster [DBG] pgmap v636: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160
GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:53.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:53 vm04 bash[20742]: cluster 2026-03-10T10:23:52.483842+0000 mgr.y (mgr.24422) 399 : cluster [DBG] pgmap v636: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:53.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:53 vm04 bash[20742]: cluster 2026-03-10T10:23:52.483842+0000 mgr.y (mgr.24422) 399 : cluster [DBG] pgmap v636: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:54.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:53 vm07 bash[23367]: cluster 2026-03-10T10:23:52.483842+0000 mgr.y (mgr.24422) 399 : cluster [DBG] pgmap v636: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:54.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:53 vm07 bash[23367]: cluster 2026-03-10T10:23:52.483842+0000 mgr.y (mgr.24422) 399 : cluster [DBG] pgmap v636: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:55 vm04 bash[28289]: cluster 2026-03-10T10:23:54.484638+0000 mgr.y (mgr.24422) 400 : cluster [DBG] pgmap v637: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:55 vm04 bash[28289]: cluster 2026-03-10T10:23:54.484638+0000 mgr.y (mgr.24422) 400 : cluster [DBG] pgmap v637: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:55.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:55 vm04 bash[20742]: cluster 2026-03-10T10:23:54.484638+0000 mgr.y (mgr.24422) 400 : cluster [DBG] pgmap v637: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:55.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:55 vm04 bash[20742]: cluster 2026-03-10T10:23:54.484638+0000 mgr.y (mgr.24422) 400 : cluster [DBG] pgmap v637: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:55 vm07 bash[23367]: cluster 2026-03-10T10:23:54.484638+0000 mgr.y (mgr.24422) 400 : cluster [DBG] pgmap v637: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:55 vm07 bash[23367]: cluster 2026-03-10T10:23:54.484638+0000 mgr.y (mgr.24422) 400 : cluster [DBG] pgmap v637: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s 2026-03-10T10:23:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:57 vm04 bash[28289]: cluster 2026-03-10T10:23:56.484993+0000 mgr.y (mgr.24422) 401 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:23:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:57 vm04 bash[28289]: cluster 2026-03-10T10:23:56.484993+0000 mgr.y (mgr.24422) 401 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 925 
MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:23:57.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:57 vm04 bash[20742]: cluster 2026-03-10T10:23:56.484993+0000 mgr.y (mgr.24422) 401 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:23:57.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:57 vm04 bash[20742]: cluster 2026-03-10T10:23:56.484993+0000 mgr.y (mgr.24422) 401 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:23:58.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:57 vm07 bash[23367]: cluster 2026-03-10T10:23:56.484993+0000 mgr.y (mgr.24422) 401 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:23:58.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:57 vm07 bash[23367]: cluster 2026-03-10T10:23:56.484993+0000 mgr.y (mgr.24422) 401 : cluster [DBG] pgmap v638: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:23:58.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:58 vm04 bash[28289]: audit 2026-03-10T10:23:57.996679+0000 mon.a (mon.0) 2691 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:58.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:58 vm04 bash[28289]: audit 2026-03-10T10:23:57.996679+0000 mon.a (mon.0) 2691 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:58.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:58 vm04 bash[20742]: audit 2026-03-10T10:23:57.996679+0000 mon.a (mon.0) 2691 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:58.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:23:58 vm04 bash[20742]: audit 2026-03-10T10:23:57.996679+0000 mon.a (mon.0) 2691 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:59.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:23:58 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:23:59.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:58 vm07 bash[23367]: audit 2026-03-10T10:23:57.996679+0000 mon.a (mon.0) 2691 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:59.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:23:58 vm07 bash[23367]: audit 2026-03-10T10:23:57.996679+0000 mon.a (mon.0) 2691 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:23:59.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:59 vm04 bash[28289]: cluster 2026-03-10T10:23:58.485452+0000 mgr.y (mgr.24422) 402 : cluster [DBG] pgmap v639: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:23:59.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:59 vm04 bash[28289]: cluster 2026-03-10T10:23:58.485452+0000 mgr.y (mgr.24422) 402 : 
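The relay pattern above repeats each mgr/mon message once per monitor, and journalctl emits every line twice, so a single pgmap update can appear six times. A minimal sketch of a dedupe filter for this line format (a hypothetical helper, not part of teuthology; it keys on the payload after the bash[pid] prefix and keeps the first relay of each event):

# Hypothetical filter: collapse duplicated cluster-log relays in a
# teuthology job log. Reads log lines on stdin, writes the first
# occurrence of each unique payload to stdout.
import re
import sys

PAYLOAD = re.compile(r"bash\[\d+\]: (.*)$")

def dedupe(lines):
    seen = set()
    for line in lines:
        m = PAYLOAD.search(line)
        key = m.group(1) if m else line  # non-journalctl lines pass through on full match
        if key in seen:
            continue
        seen.add(key)
        yield line

if __name__ == "__main__":
    for line in dedupe(sys.stdin):
        sys.stdout.write(line)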
2026-03-10T10:23:59.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:59 vm04 bash[28289]: audit 2026-03-10T10:23:58.589867+0000 mgr.y (mgr.24422) 403 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:23:59.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:59 vm04 bash[28289]: audit 2026-03-10T10:23:59.536636+0000 mon.a (mon.0) 2692 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:23:59.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:23:59 vm04 bash[28289]: audit 2026-03-10T10:23:59.536979+0000 mon.a (mon.0) 2693 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-79"}]: dispatch
2026-03-10T10:23:59.953 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:23:59 vm04 bash[31174]: debug 2026-03-10T10:23:59.530+0000 7f7b52f4f640 -1 snap_mapper.add_oid found existing snaps mapped on 297:5ce9f01d:test-rados-api-vm04-59491-80::foo:2, removing
2026-03-10T10:23:59.953 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:23:59 vm04 bash[49304]: debug 2026-03-10T10:23:59.534+0000 7f60c6979640 -1 snap_mapper.add_oid found existing snaps mapped on 297:5ce9f01d:test-rados-api-vm04-59491-80::foo:2, removing
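The two audit entries above (seq 2692 and 2693) record the RADOS API test tearing down a cache tier: the overlay is removed first, then the tier pool is detached from its base. A minimal sketch of the equivalent CLI calls, assuming client.admin access to this cluster; the pool names are the generated test pools from the log:

# Sketch of the cache-tier teardown the audit trail records, issued
# through the ceph CLI via subprocess (assumes a reachable cluster).
import subprocess

BASE = "test-rados-api-vm04-59491-6"
TIER = "test-rados-api-vm04-59491-79"

def ceph(*args):
    # Raises CalledProcessError if the mon rejects the command.
    subprocess.run(["ceph", *args], check=True)

ceph("osd", "tier", "remove-overlay", BASE)   # audit seq 2692
ceph("osd", "tier", "remove", BASE, TIER)     # audit seq 2693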
2026-03-10T10:24:00.017 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:23:59 vm07 bash[26644]: debug 2026-03-10T10:23:59.529+0000 7f460cf4b640 -1 snap_mapper.add_oid found existing snaps mapped on 297:5ce9f01d:test-rados-api-vm04-59491-80::foo:2, removing
2026-03-10T10:24:01.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:00 vm07 bash[23367]: cluster 2026-03-10T10:23:59.698549+0000 mon.a (mon.0) 2694 : cluster [DBG] osdmap e423: 8 total, 8 up, 8 in
2026-03-10T10:24:01.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:00 vm07 bash[23367]: cluster 2026-03-10T10:24:00.700187+0000 mon.a (mon.0) 2695 : cluster [DBG] osdmap e424: 8 total, 8 up, 8 in
2026-03-10T10:24:01.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:00 vm07 bash[23367]: audit 2026-03-10T10:24:00.702461+0000 mon.a (mon.0) 2696 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-81","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:02.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:01 vm07 bash[23367]: cluster 2026-03-10T10:24:00.485844+0000 mgr.y (mgr.24422) 404 : cluster [DBG] pgmap v641: 260 pgs: 260 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:24:02.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:01 vm07 bash[23367]: audit 2026-03-10T10:24:01.700846+0000 mon.a (mon.0) 2697 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-81","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:02.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:01 vm07 bash[23367]: cluster 2026-03-10T10:24:01.705063+0000 mon.a (mon.0) 2698 : cluster [DBG] osdmap e425: 8 total, 8 up, 8 in
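The audit entries carry the monitor command verbatim as JSON (dispatch, then finished on the next osdmap epoch), so the same call can be replayed directly rather than through the CLI. A minimal sketch using the python-rados mon_command binding, assuming python3-rados is installed and a local ceph.conf plus admin keyring are available:

# Replays the "osd pool application enable" mon command seen at audit
# seq 2696/2697, using the JSON payload from the log as-is.
import json
import rados

cmd = {
    "prefix": "osd pool application enable",
    "pool": "test-rados-api-vm04-59491-81",
    "app": "rados",
    "yes_i_really_mean_it": True,
}

with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    if ret != 0:
        raise RuntimeError(outs)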
2026-03-10T10:24:03.008 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:02 vm04 bash[20742]: audit 2026-03-10T10:24:01.731519+0000 mon.a (mon.0) 2699 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:03.008 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:02 vm04 bash[20742]: audit 2026-03-10T10:24:01.733819+0000 mon.a (mon.0) 2700 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:24:03.008 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:02 vm04 bash[20742]: audit 2026-03-10T10:24:02.703956+0000 mon.a (mon.0) 2701 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:24:03.008 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:02 vm04 bash[20742]: cluster 2026-03-10T10:24:02.707315+0000 mon.a (mon.0) 2702 : cluster [DBG] osdmap e426: 8 total, 8 up, 8 in
2026-03-10T10:24:03.008 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:02 vm04 bash[20742]: audit 2026-03-10T10:24:02.708950+0000 mon.a (mon.0) 2703 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:24:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:24:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:24:04.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:03 vm07 bash[23367]: cluster 2026-03-10T10:24:02.486130+0000 mgr.y (mgr.24422) 405 : cluster [DBG] pgmap v644: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:24:04.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:03 vm07 bash[23367]: audit 2026-03-10T10:24:03.707716+0000 mon.a (mon.0) 2704 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:24:04.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:03 vm07 bash[23367]: cluster 2026-03-10T10:24:03.710169+0000 mon.a (mon.0) 2705 : cluster [DBG] osdmap e427: 8 total, 8 up, 8 in
2026-03-10T10:24:04.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:03 vm07 bash[23367]: audit 2026-03-10T10:24:03.710610+0000 mon.a (mon.0) 2706 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
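The audit trail here and just below shows the test preparing pool test-rados-api-vm04-59491-81 for dedup: a fingerprint algorithm, a dedup tier, a chunking algorithm, then two CDC chunk sizes in succession. A sketch of the same sequence through the ceph CLI, assumed equivalent to the mon commands in the log; values are taken from the audit entries, not from defaults:

# Dedup pool configuration matching audit seqs 2700-2712, issued via
# "ceph osd pool set" through subprocess (assumes client.admin access).
import subprocess

POOL = "test-rados-api-vm04-59491-81"

def pool_set(var, val):
    subprocess.run(["ceph", "osd", "pool", "set", POOL, var, val], check=True)

pool_set("fingerprint_algorithm", "sha1")                 # audit 2700/2701
pool_set("dedup_tier", "test-rados-api-vm04-59491-6")     # audit 2703/2704
pool_set("dedup_chunk_algorithm", "fastcdc")              # audit 2706/2707
pool_set("dedup_cdc_chunk_size", "1024")                  # audit 2709/2710
pool_set("dedup_cdc_chunk_size", "512")                   # audit 2712/2713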
2026-03-10T10:24:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:05 vm07 bash[23367]: cluster 2026-03-10T10:24:04.486686+0000 mgr.y (mgr.24422) 406 : cluster [DBG] pgmap v647: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T10:24:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:05 vm07 bash[23367]: audit 2026-03-10T10:24:04.711213+0000 mon.a (mon.0) 2707 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T10:24:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:05 vm07 bash[23367]: cluster 2026-03-10T10:24:04.713697+0000 mon.a (mon.0) 2708 : cluster [DBG] osdmap e428: 8 total, 8 up, 8 in
2026-03-10T10:24:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:05 vm07 bash[23367]: audit 2026-03-10T10:24:04.714026+0000 mon.a (mon.0) 2709 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T10:24:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:06 vm04 bash[28289]: audit 2026-03-10T10:24:05.747254+0000 mon.a (mon.0) 2710 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T10:24:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:06 vm04 bash[28289]: cluster 2026-03-10T10:24:05.751009+0000 mon.a (mon.0) 2711 : cluster [DBG] osdmap e429: 8 total, 8 up, 8 in
2026-03-10T10:24:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:06 vm04 bash[28289]: audit 2026-03-10T10:24:05.831815+0000 mon.a (mon.0) 2712 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "512"}]: dispatch
2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:07 vm04 bash[28289]: cluster 2026-03-10T10:24:06.487002+0000 mgr.y (mgr.24422) 407 : cluster [DBG] pgmap v650: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:07 vm04 bash[28289]: audit 2026-03-10T10:24:06.810521+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished
2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:07 vm04 bash[28289]: audit 2026-03-10T10:24:06.810521+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:07 vm04 bash[28289]: cluster 2026-03-10T10:24:06.815120+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:07 vm04 bash[28289]: cluster 2026-03-10T10:24:06.815120+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:07 vm04 bash[28289]: audit 2026-03-10T10:24:06.846962+0000 mon.a (mon.0) 2715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:07 vm04 bash[28289]: audit 2026-03-10T10:24:06.846962+0000 mon.a (mon.0) 2715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:07 vm04 bash[20742]: cluster 2026-03-10T10:24:06.487002+0000 mgr.y (mgr.24422) 407 : cluster [DBG] pgmap v650: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:07 vm04 bash[20742]: cluster 2026-03-10T10:24:06.487002+0000 mgr.y (mgr.24422) 407 : cluster [DBG] pgmap v650: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:07 vm04 bash[20742]: audit 2026-03-10T10:24:06.810521+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:07 vm04 bash[20742]: audit 2026-03-10T10:24:06.810521+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:07 vm04 bash[20742]: cluster 2026-03-10T10:24:06.815120+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:07 vm04 bash[20742]: cluster 2026-03-10T10:24:06.815120+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:07 vm04 bash[20742]: audit 2026-03-10T10:24:06.846962+0000 mon.a (mon.0) 2715 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T10:24:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:07 vm04 bash[20742]: audit 2026-03-10T10:24:06.846962+0000 mon.a (mon.0) 2715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T10:24:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:07 vm07 bash[23367]: cluster 2026-03-10T10:24:06.487002+0000 mgr.y (mgr.24422) 407 : cluster [DBG] pgmap v650: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:24:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:07 vm07 bash[23367]: cluster 2026-03-10T10:24:06.487002+0000 mgr.y (mgr.24422) 407 : cluster [DBG] pgmap v650: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:24:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:07 vm07 bash[23367]: audit 2026-03-10T10:24:06.810521+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T10:24:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:07 vm07 bash[23367]: audit 2026-03-10T10:24:06.810521+0000 mon.a (mon.0) 2713 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "512"}]': finished 2026-03-10T10:24:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:07 vm07 bash[23367]: cluster 2026-03-10T10:24:06.815120+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-10T10:24:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:07 vm07 bash[23367]: cluster 2026-03-10T10:24:06.815120+0000 mon.a (mon.0) 2714 : cluster [DBG] osdmap e430: 8 total, 8 up, 8 in 2026-03-10T10:24:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:07 vm07 bash[23367]: audit 2026-03-10T10:24:06.846962+0000 mon.a (mon.0) 2715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T10:24:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:07 vm07 bash[23367]: audit 2026-03-10T10:24:06.846962+0000 mon.a (mon.0) 2715 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]: dispatch 2026-03-10T10:24:09.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:24:08 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:24:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:08 vm07 bash[23367]: audit 2026-03-10T10:24:07.813867+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T10:24:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:08 vm07 bash[23367]: audit 2026-03-10T10:24:07.813867+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T10:24:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:08 vm07 bash[23367]: cluster 2026-03-10T10:24:07.820153+0000 mon.a (mon.0) 2717 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T10:24:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:08 vm07 bash[23367]: cluster 2026-03-10T10:24:07.820153+0000 mon.a (mon.0) 2717 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T10:24:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:08 vm07 bash[23367]: audit 2026-03-10T10:24:07.840255+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:08 vm07 bash[23367]: audit 2026-03-10T10:24:07.840255+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:08 vm04 bash[28289]: audit 2026-03-10T10:24:07.813867+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:08 vm04 bash[28289]: audit 2026-03-10T10:24:07.813867+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:08 vm04 bash[28289]: cluster 2026-03-10T10:24:07.820153+0000 mon.a (mon.0) 2717 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:08 vm04 bash[28289]: cluster 2026-03-10T10:24:07.820153+0000 mon.a (mon.0) 2717 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:08 vm04 bash[28289]: audit 2026-03-10T10:24:07.840255+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:08 vm04 bash[28289]: audit 2026-03-10T10:24:07.840255+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:08 vm04 bash[20742]: audit 2026-03-10T10:24:07.813867+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:08 vm04 bash[20742]: audit 2026-03-10T10:24:07.813867+0000 mon.a (mon.0) 2716 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "16384"}]': finished 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:08 vm04 bash[20742]: cluster 2026-03-10T10:24:07.820153+0000 mon.a (mon.0) 2717 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:08 vm04 bash[20742]: cluster 2026-03-10T10:24:07.820153+0000 mon.a (mon.0) 2717 : cluster [DBG] osdmap e431: 8 total, 8 up, 8 in 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:08 vm04 bash[20742]: audit 2026-03-10T10:24:07.840255+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:09.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:08 vm04 bash[20742]: audit 2026-03-10T10:24:07.840255+0000 mon.a (mon.0) 2718 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: cluster 2026-03-10T10:24:08.487603+0000 mgr.y (mgr.24422) 408 : cluster [DBG] pgmap v653: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 7.0 KiB/s wr, 15 op/s 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: cluster 2026-03-10T10:24:08.487603+0000 mgr.y (mgr.24422) 408 : cluster [DBG] pgmap v653: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 7.0 KiB/s wr, 15 op/s 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: audit 2026-03-10T10:24:08.599012+0000 mgr.y (mgr.24422) 409 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: audit 2026-03-10T10:24:08.599012+0000 mgr.y (mgr.24422) 409 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: audit 2026-03-10T10:24:08.826187+0000 mon.a (mon.0) 2719 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: audit 2026-03-10T10:24:08.826187+0000 mon.a (mon.0) 2719 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: cluster 2026-03-10T10:24:08.830249+0000 mon.a (mon.0) 2720 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: cluster 2026-03-10T10:24:08.830249+0000 mon.a (mon.0) 2720 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: audit 2026-03-10T10:24:08.881771+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: audit 2026-03-10T10:24:08.881771+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: audit 2026-03-10T10:24:08.881992+0000 mon.a (mon.0) 2722 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-81"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:09 vm04 bash[28289]: audit 2026-03-10T10:24:08.881992+0000 mon.a (mon.0) 2722 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-81"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: cluster 2026-03-10T10:24:08.487603+0000 mgr.y (mgr.24422) 408 : cluster [DBG] pgmap v653: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 7.0 KiB/s wr, 15 op/s 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: cluster 2026-03-10T10:24:08.487603+0000 mgr.y (mgr.24422) 408 : cluster [DBG] pgmap v653: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 7.0 KiB/s wr, 15 op/s 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: audit 2026-03-10T10:24:08.599012+0000 mgr.y (mgr.24422) 409 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: audit 2026-03-10T10:24:08.599012+0000 mgr.y (mgr.24422) 409 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: audit 2026-03-10T10:24:08.826187+0000 mon.a (mon.0) 2719 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: audit 2026-03-10T10:24:08.826187+0000 mon.a (mon.0) 2719 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: cluster 2026-03-10T10:24:08.830249+0000 mon.a (mon.0) 2720 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: cluster 2026-03-10T10:24:08.830249+0000 mon.a (mon.0) 2720 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: audit 2026-03-10T10:24:08.881771+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: audit 2026-03-10T10:24:08.881771+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: audit 2026-03-10T10:24:08.881992+0000 mon.a (mon.0) 2722 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-81"}]: dispatch 2026-03-10T10:24:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:09 vm04 bash[20742]: audit 2026-03-10T10:24:08.881992+0000 mon.a (mon.0) 2722 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-81"}]: dispatch 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: cluster 2026-03-10T10:24:08.487603+0000 mgr.y (mgr.24422) 408 : cluster [DBG] pgmap v653: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 7.0 KiB/s wr, 15 op/s 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: cluster 2026-03-10T10:24:08.487603+0000 mgr.y (mgr.24422) 408 : cluster [DBG] pgmap v653: 292 pgs: 292 active+clean; 8.3 MiB data, 925 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 7.0 KiB/s wr, 15 op/s 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: audit 2026-03-10T10:24:08.599012+0000 mgr.y (mgr.24422) 409 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: audit 2026-03-10T10:24:08.599012+0000 mgr.y (mgr.24422) 409 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: audit 2026-03-10T10:24:08.826187+0000 mon.a (mon.0) 2719 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: audit 2026-03-10T10:24:08.826187+0000 mon.a (mon.0) 2719 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-81","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: cluster 2026-03-10T10:24:08.830249+0000 mon.a (mon.0) 2720 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: cluster 2026-03-10T10:24:08.830249+0000 mon.a (mon.0) 2720 : cluster [DBG] osdmap e432: 8 total, 8 up, 8 in 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: audit 2026-03-10T10:24:08.881771+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: audit 2026-03-10T10:24:08.881771+0000 mon.a (mon.0) 2721 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: audit 2026-03-10T10:24:08.881992+0000 mon.a (mon.0) 2722 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-81"}]: dispatch 2026-03-10T10:24:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:09 vm07 bash[23367]: audit 2026-03-10T10:24:08.881992+0000 mon.a (mon.0) 2722 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-81"}]: dispatch 2026-03-10T10:24:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:10 vm04 bash[28289]: cluster 2026-03-10T10:24:09.852513+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T10:24:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:10 vm04 bash[28289]: cluster 2026-03-10T10:24:09.852513+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T10:24:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:10 vm04 bash[20742]: cluster 2026-03-10T10:24:09.852513+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T10:24:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:10 vm04 bash[20742]: cluster 2026-03-10T10:24:09.852513+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T10:24:11.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:10 vm07 bash[23367]: cluster 2026-03-10T10:24:09.852513+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T10:24:11.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:10 vm07 bash[23367]: cluster 2026-03-10T10:24:09.852513+0000 mon.a (mon.0) 2723 : cluster [DBG] osdmap e433: 8 total, 8 up, 8 in 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: cluster 2026-03-10T10:24:10.487929+0000 mgr.y (mgr.24422) 410 : cluster [DBG] pgmap v656: 260 pgs: 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 9.0 KiB/s wr, 28 op/s 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: cluster 2026-03-10T10:24:10.487929+0000 mgr.y (mgr.24422) 410 : cluster [DBG] pgmap v656: 260 pgs: 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 9.0 KiB/s wr, 28 op/s 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: cluster 2026-03-10T10:24:10.871531+0000 mon.a (mon.0) 2724 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: cluster 2026-03-10T10:24:10.871531+0000 mon.a (mon.0) 2724 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: audit 2026-03-10T10:24:10.873325+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: audit 2026-03-10T10:24:10.873325+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: audit 2026-03-10T10:24:11.871923+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: audit 2026-03-10T10:24:11.871923+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: cluster 2026-03-10T10:24:11.874955+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: cluster 2026-03-10T10:24:11.874955+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: audit 2026-03-10T10:24:11.875973+0000 mon.a (mon.0) 2728 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: audit 2026-03-10T10:24:11.875973+0000 mon.a (mon.0) 2728 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: audit 2026-03-10T10:24:11.878829+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:11 vm04 bash[28289]: audit 2026-03-10T10:24:11.878829+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: cluster 2026-03-10T10:24:10.487929+0000 mgr.y (mgr.24422) 410 : cluster [DBG] pgmap v656: 260 pgs: 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 9.0 KiB/s wr, 28 op/s 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: cluster 2026-03-10T10:24:10.487929+0000 mgr.y (mgr.24422) 410 : cluster [DBG] pgmap v656: 260 pgs: 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 9.0 KiB/s wr, 28 op/s 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: cluster 2026-03-10T10:24:10.871531+0000 mon.a (mon.0) 2724 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: cluster 2026-03-10T10:24:10.871531+0000 mon.a (mon.0) 2724 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: audit 2026-03-10T10:24:10.873325+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: audit 2026-03-10T10:24:10.873325+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: audit 2026-03-10T10:24:11.871923+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: audit 2026-03-10T10:24:11.871923+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: cluster 2026-03-10T10:24:11.874955+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: cluster 2026-03-10T10:24:11.874955+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: audit 2026-03-10T10:24:11.875973+0000 mon.a (mon.0) 2728 : audit [DBG] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: audit 2026-03-10T10:24:11.875973+0000 mon.a (mon.0) 2728 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: audit 2026-03-10T10:24:11.878829+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:11 vm04 bash[20742]: audit 2026-03-10T10:24:11.878829+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: cluster 2026-03-10T10:24:10.487929+0000 mgr.y (mgr.24422) 410 : cluster [DBG] pgmap v656: 260 pgs: 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 9.0 KiB/s wr, 28 op/s 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: cluster 2026-03-10T10:24:10.487929+0000 mgr.y (mgr.24422) 410 : cluster [DBG] pgmap v656: 260 pgs: 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 7.5 KiB/s rd, 9.0 KiB/s wr, 28 op/s 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: cluster 2026-03-10T10:24:10.871531+0000 mon.a (mon.0) 2724 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: cluster 2026-03-10T10:24:10.871531+0000 mon.a (mon.0) 2724 : cluster [DBG] osdmap e434: 8 total, 8 up, 8 in 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: audit 2026-03-10T10:24:10.873325+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: audit 2026-03-10T10:24:10.873325+0000 mon.a (mon.0) 2725 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: audit 2026-03-10T10:24:11.871923+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: audit 2026-03-10T10:24:11.871923+0000 mon.a (mon.0) 2726 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-83","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: cluster 2026-03-10T10:24:11.874955+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: cluster 2026-03-10T10:24:11.874955+0000 mon.a (mon.0) 2727 : cluster [DBG] osdmap e435: 8 total, 8 up, 8 in 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: audit 2026-03-10T10:24:11.875973+0000 mon.a (mon.0) 2728 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: audit 2026-03-10T10:24:11.875973+0000 mon.a (mon.0) 2728 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: audit 2026-03-10T10:24:11.878829+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:11 vm07 bash[23367]: audit 2026-03-10T10:24:11.878829+0000 mon.a (mon.0) 2729 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:24:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:24:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:13 vm04 bash[28289]: cluster 2026-03-10T10:24:12.488320+0000 mgr.y (mgr.24422) 411 : cluster [DBG] pgmap v659: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 2.0 KiB/s wr, 12 op/s 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:13 vm04 bash[28289]: cluster 2026-03-10T10:24:12.488320+0000 mgr.y (mgr.24422) 411 : cluster [DBG] pgmap v659: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 2.0 KiB/s wr, 12 op/s 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:13 vm04 bash[28289]: audit 2026-03-10T10:24:12.874984+0000 mon.a (mon.0) 2730 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:13 vm04 bash[28289]: audit 2026-03-10T10:24:12.874984+0000 mon.a (mon.0) 2730 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:13 vm04 bash[28289]: cluster 2026-03-10T10:24:12.878624+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:13 vm04 bash[28289]: cluster 2026-03-10T10:24:12.878624+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:13 vm04 bash[28289]: audit 2026-03-10T10:24:12.879256+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:13 vm04 bash[28289]: audit 2026-03-10T10:24:12.879256+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:13 vm04 bash[28289]: audit 2026-03-10T10:24:13.009496+0000 mon.a (mon.0) 2733 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:13 vm04 bash[28289]: audit 2026-03-10T10:24:13.009496+0000 mon.a (mon.0) 2733 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:13 vm04 bash[20742]: cluster 2026-03-10T10:24:12.488320+0000 mgr.y (mgr.24422) 411 : cluster [DBG] pgmap v659: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 2.0 KiB/s wr, 12 op/s 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:13 vm04 bash[20742]: cluster 2026-03-10T10:24:12.488320+0000 mgr.y (mgr.24422) 411 : cluster [DBG] pgmap v659: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 2.0 KiB/s wr, 12 op/s 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:13 vm04 bash[20742]: audit 2026-03-10T10:24:12.874984+0000 mon.a (mon.0) 2730 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:13 vm04 bash[20742]: audit 2026-03-10T10:24:12.874984+0000 mon.a (mon.0) 2730 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:13 vm04 bash[20742]: cluster 2026-03-10T10:24:12.878624+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:13 vm04 bash[20742]: cluster 2026-03-10T10:24:12.878624+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:13 vm04 bash[20742]: audit 2026-03-10T10:24:12.879256+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:13 vm04 bash[20742]: audit 2026-03-10T10:24:12.879256+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:13 vm04 bash[20742]: audit 2026-03-10T10:24:13.009496+0000 mon.a (mon.0) 2733 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:13 vm04 bash[20742]: audit 2026-03-10T10:24:13.009496+0000 mon.a (mon.0) 2733 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:24:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:13 vm07 bash[23367]: cluster 2026-03-10T10:24:12.488320+0000 mgr.y (mgr.24422) 411 : cluster [DBG] pgmap v659: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 2.0 KiB/s wr, 12 op/s 2026-03-10T10:24:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:13 vm07 bash[23367]: cluster 2026-03-10T10:24:12.488320+0000 mgr.y (mgr.24422) 411 : cluster [DBG] pgmap v659: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 2.0 KiB/s wr, 12 op/s 2026-03-10T10:24:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:13 vm07 bash[23367]: audit 2026-03-10T10:24:12.874984+0000 mon.a (mon.0) 2730 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:13 vm07 bash[23367]: audit 2026-03-10T10:24:12.874984+0000 mon.a (mon.0) 2730 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:24:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:13 vm07 bash[23367]: cluster 2026-03-10T10:24:12.878624+0000 mon.a (mon.0) 2731 : cluster [DBG] osdmap e436: 8 total, 8 up, 8 in
2026-03-10T10:24:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:13 vm07 bash[23367]: audit 2026-03-10T10:24:12.879256+0000 mon.a (mon.0) 2732 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:13 vm07 bash[23367]: audit 2026-03-10T10:24:13.009496+0000 mon.a (mon.0) 2733 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:14 vm04 bash[28289]: audit 2026-03-10T10:24:13.878315+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:14 vm04 bash[28289]: cluster 2026-03-10T10:24:13.880956+0000 mon.a (mon.0) 2735 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:14 vm04 bash[28289]: audit 2026-03-10T10:24:13.882255+0000 mon.a (mon.0) 2736 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:14 vm04 bash[28289]: audit 2026-03-10T10:24:14.881434+0000 mon.a (mon.0) 2737 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:14 vm04 bash[28289]: cluster 2026-03-10T10:24:14.884367+0000 mon.a (mon.0) 2738 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:14 vm04 bash[28289]: audit 2026-03-10T10:24:14.885011+0000 mon.a (mon.0) 2739 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:14 vm04 bash[20742]: audit 2026-03-10T10:24:13.878315+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:14 vm04 bash[20742]: cluster 2026-03-10T10:24:13.880956+0000 mon.a (mon.0) 2735 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:14 vm04 bash[20742]: audit 2026-03-10T10:24:13.882255+0000 mon.a (mon.0) 2736 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:14 vm04 bash[20742]: audit 2026-03-10T10:24:14.881434+0000 mon.a (mon.0) 2737 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:14 vm04 bash[20742]: cluster 2026-03-10T10:24:14.884367+0000 mon.a (mon.0) 2738 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in
2026-03-10T10:24:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:14 vm04 bash[20742]: audit 2026-03-10T10:24:14.885011+0000 mon.a (mon.0) 2739 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T10:24:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:14 vm07 bash[23367]: audit 2026-03-10T10:24:13.878315+0000 mon.a (mon.0) 2734 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:24:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:14 vm07 bash[23367]: cluster 2026-03-10T10:24:13.880956+0000 mon.a (mon.0) 2735 : cluster [DBG] osdmap e437: 8 total, 8 up, 8 in
2026-03-10T10:24:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:14 vm07 bash[23367]: audit 2026-03-10T10:24:13.882255+0000 mon.a (mon.0) 2736 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T10:24:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:14 vm07 bash[23367]: audit 2026-03-10T10:24:14.881434+0000 mon.a (mon.0) 2737 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T10:24:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:14 vm07 bash[23367]: cluster 2026-03-10T10:24:14.884367+0000 mon.a (mon.0) 2738 : cluster [DBG] osdmap e438: 8 total, 8 up, 8 in
2026-03-10T10:24:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:14 vm07 bash[23367]: audit 2026-03-10T10:24:14.885011+0000 mon.a (mon.0) 2739 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T10:24:16.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:15 vm04 bash[28289]: cluster 2026-03-10T10:24:14.488664+0000 mgr.y (mgr.24422) 412 : cluster [DBG] pgmap v662: 292 pgs: 292 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s
2026-03-10T10:24:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:15 vm04 bash[28289]: audit 2026-03-10T10:24:15.885080+0000 mon.a (mon.0) 2740 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T10:24:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:15 vm04 bash[28289]: cluster 2026-03-10T10:24:15.887775+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in
2026-03-10T10:24:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:15 vm04 bash[20742]: cluster 2026-03-10T10:24:14.488664+0000 mgr.y (mgr.24422) 412 : cluster [DBG] pgmap v662: 292 pgs: 292 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s
2026-03-10T10:24:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:15 vm04 bash[20742]: audit 2026-03-10T10:24:15.885080+0000 mon.a (mon.0) 2740 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T10:24:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:15 vm04 bash[20742]: cluster 2026-03-10T10:24:15.887775+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in
2026-03-10T10:24:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:15 vm07 bash[23367]: cluster 2026-03-10T10:24:14.488664+0000 mgr.y (mgr.24422) 412 : cluster [DBG] pgmap v662: 292 pgs: 292 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s
2026-03-10T10:24:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:15 vm07 bash[23367]: audit 2026-03-10T10:24:15.885080+0000 mon.a (mon.0) 2740 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-83","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T10:24:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:15 vm07 bash[23367]: cluster 2026-03-10T10:24:15.887775+0000 mon.a (mon.0) 2741 : cluster [DBG] osdmap e439: 8 total, 8 up, 8 in
2026-03-10T10:24:18.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:17 vm07 bash[23367]: cluster 2026-03-10T10:24:16.489026+0000 mgr.y (mgr.24422) 413 : cluster [DBG] pgmap v665: 292 pgs: 292 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s
2026-03-10T10:24:18.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:17 vm07 bash[23367]: cluster 2026-03-10T10:24:16.918030+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in
2026-03-10T10:24:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:17 vm04 bash[28289]: cluster 2026-03-10T10:24:16.489026+0000 mgr.y (mgr.24422) 413 : cluster [DBG] pgmap v665: 292 pgs: 292 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s
2026-03-10T10:24:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:17 vm04 bash[28289]: cluster 2026-03-10T10:24:16.918030+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in
2026-03-10T10:24:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:17 vm04 bash[20742]: cluster 2026-03-10T10:24:16.489026+0000 mgr.y (mgr.24422) 413 : cluster [DBG] pgmap v665: 292 pgs: 292 active+clean; 8.3 MiB data, 926 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s rd, 3.2 KiB/s wr, 11 op/s
2026-03-10T10:24:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:17 vm04 bash[20742]: cluster 2026-03-10T10:24:16.918030+0000 mon.a (mon.0) 2742 : cluster [DBG] osdmap e440: 8 total, 8 up, 8 in
2026-03-10T10:24:18.987 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:24:18 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:24:19.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:18 vm07 bash[23367]: cluster 2026-03-10T10:24:17.979611+0000 mon.a (mon.0) 2743 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in
2026-03-10T10:24:19.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:18 vm07 bash[23367]: audit 2026-03-10T10:24:18.015526+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:19.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:18 vm07 bash[23367]: audit 2026-03-10T10:24:18.015761+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-83"}]: dispatch
2026-03-10T10:24:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:18 vm04 bash[28289]: cluster 2026-03-10T10:24:17.979611+0000 mon.a (mon.0) 2743 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in
2026-03-10T10:24:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:18 vm04 bash[28289]: audit 2026-03-10T10:24:18.015526+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:18 vm04 bash[28289]: audit 2026-03-10T10:24:18.015761+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-83"}]: dispatch
2026-03-10T10:24:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:18 vm04 bash[20742]: cluster 2026-03-10T10:24:17.979611+0000 mon.a (mon.0) 2743 : cluster [DBG] osdmap e441: 8 total, 8 up, 8 in
2026-03-10T10:24:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:18 vm04 bash[20742]: audit 2026-03-10T10:24:18.015526+0000 mon.a (mon.0) 2744 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:18 vm04 bash[20742]: audit 2026-03-10T10:24:18.015761+0000 mon.a (mon.0) 2745 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-83"}]: dispatch
2026-03-10T10:24:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:19 vm07 bash[23367]: cluster 2026-03-10T10:24:18.489615+0000 mgr.y (mgr.24422) 414 : cluster [DBG] pgmap v668: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:24:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:19 vm07 bash[23367]: audit 2026-03-10T10:24:18.609442+0000 mgr.y (mgr.24422) 415 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:19 vm07 bash[23367]: cluster 2026-03-10T10:24:19.000097+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in
2026-03-10T10:24:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:19 vm04 bash[28289]: cluster 2026-03-10T10:24:18.489615+0000 mgr.y (mgr.24422) 414 : cluster [DBG] pgmap v668: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:24:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:19 vm04 bash[28289]: audit 2026-03-10T10:24:18.609442+0000 mgr.y (mgr.24422) 415 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:19 vm04 bash[28289]: cluster 2026-03-10T10:24:19.000097+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in
2026-03-10T10:24:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:19 vm04 bash[20742]: cluster 2026-03-10T10:24:18.489615+0000 mgr.y (mgr.24422) 414 : cluster [DBG] pgmap v668: 292 pgs: 292 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:24:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:19 vm04 bash[20742]: audit 2026-03-10T10:24:18.609442+0000 mgr.y (mgr.24422) 415 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:19 vm04 bash[20742]: cluster 2026-03-10T10:24:19.000097+0000 mon.a (mon.0) 2746 : cluster [DBG] osdmap e442: 8 total, 8 up, 8 in
2026-03-10T10:24:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:21 vm04 bash[28289]: cluster 2026-03-10T10:24:19.999336+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in
2026-03-10T10:24:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:21 vm04 bash[28289]: audit 2026-03-10T10:24:20.019105+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-85","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:21 vm04 bash[20742]: cluster 2026-03-10T10:24:19.999336+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in
2026-03-10T10:24:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:21 vm04 bash[20742]: audit 2026-03-10T10:24:20.019105+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-85","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:21 vm07 bash[23367]: cluster 2026-03-10T10:24:19.999336+0000 mon.a (mon.0) 2747 : cluster [DBG] osdmap e443: 8 total, 8 up, 8 in
2026-03-10T10:24:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:21 vm07 bash[23367]: audit 2026-03-10T10:24:20.019105+0000 mon.a (mon.0) 2748 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-85","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:22 vm04 bash[28289]: cluster 2026-03-10T10:24:20.489890+0000 mgr.y (mgr.24422) 416 : cluster [DBG] pgmap v671: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:22 vm04 bash[28289]: cluster 2026-03-10T10:24:20.996889+0000 mon.a (mon.0) 2749 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:22 vm04 bash[28289]: audit 2026-03-10T10:24:21.037355+0000 mon.a (mon.0) 2750 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-85","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:22 vm04 bash[28289]: cluster 2026-03-10T10:24:21.044342+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:22 vm04 bash[28289]: audit 2026-03-10T10:24:21.045587+0000 mon.a (mon.0) 2752 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:22 vm04 bash[28289]: audit 2026-03-10T10:24:21.048288+0000 mon.a (mon.0) 2753 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:22 vm04 bash[20742]: cluster 2026-03-10T10:24:20.489890+0000 mgr.y (mgr.24422) 416 : cluster [DBG] pgmap v671: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:22 vm04 bash[20742]: cluster 2026-03-10T10:24:20.996889+0000 mon.a (mon.0) 2749 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:22 vm04 bash[20742]: audit 2026-03-10T10:24:21.037355+0000 mon.a (mon.0) 2750 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-85","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:22 vm04 bash[20742]: cluster 2026-03-10T10:24:21.044342+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:22 vm04 bash[20742]: audit 2026-03-10T10:24:21.045587+0000 mon.a (mon.0) 2752 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:22 vm04 bash[20742]: audit 2026-03-10T10:24:21.048288+0000 mon.a (mon.0) 2753 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:24:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:22 vm07 bash[23367]: cluster 2026-03-10T10:24:20.489890+0000 mgr.y (mgr.24422) 416 : cluster [DBG] pgmap v671: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.2 KiB/s wr, 10 op/s
2026-03-10T10:24:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:22 vm07 bash[23367]: cluster 2026-03-10T10:24:20.996889+0000 mon.a (mon.0) 2749 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:22 vm07 bash[23367]: audit 2026-03-10T10:24:21.037355+0000 mon.a (mon.0) 2750 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-85","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:22 vm07 bash[23367]: cluster 2026-03-10T10:24:21.044342+0000 mon.a (mon.0) 2751 : cluster [DBG] osdmap e444: 8 total, 8 up, 8 in
2026-03-10T10:24:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:22 vm07 bash[23367]: audit 2026-03-10T10:24:21.045587+0000 mon.a (mon.0) 2752 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:22 vm07 bash[23367]: audit 2026-03-10T10:24:21.048288+0000 mon.a (mon.0) 2753 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:23 vm04 bash[28289]: audit 2026-03-10T10:24:22.039922+0000 mon.a (mon.0) 2754 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:23 vm04 bash[28289]: cluster 2026-03-10T10:24:22.046743+0000 mon.a (mon.0) 2755 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:23 vm04 bash[28289]: audit 2026-03-10T10:24:22.047882+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:23 vm04 bash[28289]: audit 2026-03-10T10:24:23.042666+0000 mon.a (mon.0) 2757 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:23 vm04 bash[28289]: cluster 2026-03-10T10:24:23.045165+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:23 vm04 bash[28289]: audit 2026-03-10T10:24:23.045563+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:23 vm04 bash[20742]: audit 2026-03-10T10:24:22.039922+0000 mon.a (mon.0) 2754 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:23 vm04 bash[20742]: cluster 2026-03-10T10:24:22.046743+0000 mon.a (mon.0) 2755 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:23 vm04 bash[20742]: audit 2026-03-10T10:24:22.047882+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:23 vm04 bash[20742]: audit 2026-03-10T10:24:23.042666+0000 mon.a (mon.0) 2757 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]': finished
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:23 vm04 bash[20742]: cluster 2026-03-10T10:24:23.045165+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:23 vm04 bash[20742]: audit 2026-03-10T10:24:23.045563+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T10:24:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:24:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:24:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: audit 2026-03-10T10:24:22.039922+0000 mon.a (mon.0) 2754 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "fingerprint_algorithm","val": "sha1"}]': finished
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: cluster 2026-03-10T10:24:22.046743+0000 mon.a (mon.0) 2755 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: cluster 2026-03-10T10:24:22.046743+0000 mon.a (mon.0) 2755 : cluster [DBG] osdmap e445: 8 total, 8 up, 8 in 2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: audit 2026-03-10T10:24:22.047882+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: audit 2026-03-10T10:24:22.047882+0000 mon.a (mon.0) 2756 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: audit 2026-03-10T10:24:23.042666+0000 mon.a (mon.0) 2757 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: audit 2026-03-10T10:24:23.042666+0000 mon.a (mon.0) 2757 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_tier","val": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: cluster 2026-03-10T10:24:23.045165+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: cluster 2026-03-10T10:24:23.045165+0000 mon.a (mon.0) 2758 : cluster [DBG] osdmap e446: 8 total, 8 up, 8 in 2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: audit 2026-03-10T10:24:23.045563+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T10:24:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:23 vm07 bash[23367]: audit 2026-03-10T10:24:23.045563+0000 mon.a (mon.0) 2759 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:24 vm04 bash[28289]: cluster 2026-03-10T10:24:22.490170+0000 mgr.y (mgr.24422) 417 : cluster [DBG] pgmap v674: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 9 op/s 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:24 vm04 bash[28289]: cluster 2026-03-10T10:24:22.490170+0000 mgr.y (mgr.24422) 417 : cluster [DBG] pgmap v674: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 9 op/s 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:24 vm04 bash[28289]: audit 2026-03-10T10:24:24.046381+0000 mon.a (mon.0) 2760 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:24 vm04 bash[28289]: audit 2026-03-10T10:24:24.046381+0000 mon.a (mon.0) 2760 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:24 vm04 bash[28289]: cluster 2026-03-10T10:24:24.051294+0000 mon.a (mon.0) 2761 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:24 vm04 bash[28289]: cluster 2026-03-10T10:24:24.051294+0000 mon.a (mon.0) 2761 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:24 vm04 bash[28289]: audit 2026-03-10T10:24:24.051804+0000 mon.a (mon.0) 2762 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:24 vm04 bash[28289]: audit 2026-03-10T10:24:24.051804+0000 mon.a (mon.0) 2762 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:24 vm04 bash[20742]: cluster 2026-03-10T10:24:22.490170+0000 mgr.y (mgr.24422) 417 : cluster [DBG] pgmap v674: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 9 op/s 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:24 vm04 bash[20742]: cluster 2026-03-10T10:24:22.490170+0000 mgr.y (mgr.24422) 417 : cluster [DBG] pgmap v674: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 9 op/s 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:24 vm04 bash[20742]: audit 2026-03-10T10:24:24.046381+0000 mon.a (mon.0) 2760 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:24 vm04 bash[20742]: audit 2026-03-10T10:24:24.046381+0000 mon.a (mon.0) 2760 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:24 vm04 bash[20742]: cluster 2026-03-10T10:24:24.051294+0000 mon.a (mon.0) 2761 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:24 vm04 bash[20742]: cluster 2026-03-10T10:24:24.051294+0000 mon.a (mon.0) 2761 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:24 vm04 bash[20742]: audit 2026-03-10T10:24:24.051804+0000 mon.a (mon.0) 2762 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:24 vm04 bash[20742]: audit 2026-03-10T10:24:24.051804+0000 mon.a (mon.0) 2762 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:24 vm07 bash[23367]: cluster 2026-03-10T10:24:22.490170+0000 mgr.y (mgr.24422) 417 : cluster [DBG] pgmap v674: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 9 op/s 2026-03-10T10:24:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:24 vm07 bash[23367]: cluster 2026-03-10T10:24:22.490170+0000 mgr.y (mgr.24422) 417 : cluster [DBG] pgmap v674: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 927 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 3.7 KiB/s wr, 9 op/s 2026-03-10T10:24:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:24 vm07 bash[23367]: audit 2026-03-10T10:24:24.046381+0000 mon.a (mon.0) 2760 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:24 vm07 bash[23367]: audit 2026-03-10T10:24:24.046381+0000 mon.a (mon.0) 2760 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:24 vm07 bash[23367]: cluster 2026-03-10T10:24:24.051294+0000 mon.a (mon.0) 2761 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T10:24:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:24 vm07 bash[23367]: cluster 2026-03-10T10:24:24.051294+0000 mon.a (mon.0) 2761 : cluster [DBG] osdmap e447: 8 total, 8 up, 8 in 2026-03-10T10:24:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:24 vm07 bash[23367]: audit 2026-03-10T10:24:24.051804+0000 mon.a (mon.0) 2762 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:24 vm07 bash[23367]: audit 2026-03-10T10:24:24.051804+0000 mon.a (mon.0) 2762 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:26 vm04 bash[20742]: cluster 2026-03-10T10:24:24.490503+0000 mgr.y (mgr.24422) 418 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:26 vm04 bash[20742]: cluster 2026-03-10T10:24:24.490503+0000 mgr.y (mgr.24422) 418 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:26 vm04 bash[20742]: audit 2026-03-10T10:24:25.048800+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:26 vm04 bash[20742]: audit 2026-03-10T10:24:25.048800+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:26 vm04 bash[20742]: cluster 2026-03-10T10:24:25.050920+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:26 vm04 bash[20742]: cluster 2026-03-10T10:24:25.050920+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:26 vm04 bash[28289]: cluster 2026-03-10T10:24:24.490503+0000 mgr.y (mgr.24422) 418 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:26 vm04 bash[28289]: cluster 2026-03-10T10:24:24.490503+0000 mgr.y (mgr.24422) 418 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:26 vm04 bash[28289]: audit 2026-03-10T10:24:25.048800+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:26 vm04 bash[28289]: audit 2026-03-10T10:24:25.048800+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:26 vm04 bash[28289]: cluster 2026-03-10T10:24:25.050920+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T10:24:26.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:26 vm04 bash[28289]: cluster 2026-03-10T10:24:25.050920+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T10:24:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:26 vm07 bash[23367]: cluster 2026-03-10T10:24:24.490503+0000 mgr.y (mgr.24422) 418 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s 2026-03-10T10:24:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:26 vm07 bash[23367]: cluster 2026-03-10T10:24:24.490503+0000 mgr.y (mgr.24422) 418 : cluster [DBG] pgmap v677: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s 2026-03-10T10:24:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:26 vm07 bash[23367]: audit 2026-03-10T10:24:25.048800+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:26 vm07 bash[23367]: audit 2026-03-10T10:24:25.048800+0000 mon.a (mon.0) 2763 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-85","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:26 vm07 bash[23367]: cluster 2026-03-10T10:24:25.050920+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T10:24:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:26 vm07 bash[23367]: cluster 2026-03-10T10:24:25.050920+0000 mon.a (mon.0) 2764 : cluster [DBG] osdmap e448: 8 total, 8 up, 8 in 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: cluster 2026-03-10T10:24:26.063813+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: cluster 2026-03-10T10:24:26.063813+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: audit 2026-03-10T10:24:26.425713+0000 mon.a (mon.0) 2766 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: audit 2026-03-10T10:24:26.425713+0000 mon.a (mon.0) 2766 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: cluster 2026-03-10T10:24:26.510600+0000 mon.a (mon.0) 2767 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: cluster 2026-03-10T10:24:26.510600+0000 mon.a (mon.0) 2767 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: audit 2026-03-10T10:24:26.731264+0000 mon.a (mon.0) 2768 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: audit 2026-03-10T10:24:26.731264+0000 mon.a (mon.0) 2768 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: audit 2026-03-10T10:24:26.731868+0000 mon.a (mon.0) 2769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: audit 2026-03-10T10:24:26.731868+0000 mon.a (mon.0) 2769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:27 vm04 bash[28289]: audit 2026-03-10T10:24:26.736683+0000 mon.a (mon.0) 2770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 
10:24:27 vm04 bash[28289]: audit 2026-03-10T10:24:26.736683+0000 mon.a (mon.0) 2770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: cluster 2026-03-10T10:24:26.063813+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: cluster 2026-03-10T10:24:26.063813+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: audit 2026-03-10T10:24:26.425713+0000 mon.a (mon.0) 2766 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: audit 2026-03-10T10:24:26.425713+0000 mon.a (mon.0) 2766 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: cluster 2026-03-10T10:24:26.510600+0000 mon.a (mon.0) 2767 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: cluster 2026-03-10T10:24:26.510600+0000 mon.a (mon.0) 2767 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: audit 2026-03-10T10:24:26.731264+0000 mon.a (mon.0) 2768 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: audit 2026-03-10T10:24:26.731264+0000 mon.a (mon.0) 2768 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: audit 2026-03-10T10:24:26.731868+0000 mon.a (mon.0) 2769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: audit 2026-03-10T10:24:26.731868+0000 mon.a (mon.0) 2769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: audit 2026-03-10T10:24:26.736683+0000 mon.a (mon.0) 2770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:24:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:27 vm04 bash[20742]: audit 2026-03-10T10:24:26.736683+0000 mon.a (mon.0) 2770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: cluster 2026-03-10T10:24:26.063813+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 
10 10:24:27 vm07 bash[23367]: cluster 2026-03-10T10:24:26.063813+0000 mon.a (mon.0) 2765 : cluster [DBG] osdmap e449: 8 total, 8 up, 8 in 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: audit 2026-03-10T10:24:26.425713+0000 mon.a (mon.0) 2766 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: audit 2026-03-10T10:24:26.425713+0000 mon.a (mon.0) 2766 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: cluster 2026-03-10T10:24:26.510600+0000 mon.a (mon.0) 2767 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: cluster 2026-03-10T10:24:26.510600+0000 mon.a (mon.0) 2767 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: audit 2026-03-10T10:24:26.731264+0000 mon.a (mon.0) 2768 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: audit 2026-03-10T10:24:26.731264+0000 mon.a (mon.0) 2768 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: audit 2026-03-10T10:24:26.731868+0000 mon.a (mon.0) 2769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: audit 2026-03-10T10:24:26.731868+0000 mon.a (mon.0) 2769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: audit 2026-03-10T10:24:26.736683+0000 mon.a (mon.0) 2770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:24:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:27 vm07 bash[23367]: audit 2026-03-10T10:24:26.736683+0000 mon.a (mon.0) 2770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:28 vm04 bash[28289]: cluster 2026-03-10T10:24:26.490837+0000 mgr.y (mgr.24422) 419 : cluster [DBG] pgmap v680: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s 2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:28 vm04 bash[28289]: cluster 2026-03-10T10:24:26.490837+0000 mgr.y (mgr.24422) 419 : cluster [DBG] pgmap v680: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s 2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:28 vm04 bash[28289]: 
2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:28 vm04 bash[28289]: audit 2026-03-10T10:24:27.179772+0000 mon.a (mon.0) 2772 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:28 vm04 bash[28289]: audit 2026-03-10T10:24:27.180000+0000 mon.a (mon.0) 2773 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-85"}]: dispatch
2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:28 vm04 bash[28289]: audit 2026-03-10T10:24:28.015894+0000 mon.a (mon.0) 2774 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:28 vm04 bash[20742]: cluster 2026-03-10T10:24:26.490837+0000 mgr.y (mgr.24422) 419 : cluster [DBG] pgmap v680: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:28 vm04 bash[20742]: cluster 2026-03-10T10:24:27.138296+0000 mon.a (mon.0) 2771 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in
2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:28 vm04 bash[20742]: audit 2026-03-10T10:24:27.179772+0000 mon.a (mon.0) 2772 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:28 vm04 bash[20742]: audit 2026-03-10T10:24:27.180000+0000 mon.a (mon.0) 2773 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-85"}]: dispatch
2026-03-10T10:24:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:28 vm04 bash[20742]: audit 2026-03-10T10:24:28.015894+0000 mon.a (mon.0) 2774 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:24:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:28 vm07 bash[23367]: cluster 2026-03-10T10:24:26.490837+0000 mgr.y (mgr.24422) 419 : cluster [DBG] pgmap v680: 292 pgs: 292 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 108 KiB/s rd, 0 B/s wr, 179 op/s
2026-03-10T10:24:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:28 vm07 bash[23367]: cluster 2026-03-10T10:24:27.138296+0000 mon.a (mon.0) 2771 : cluster [DBG] osdmap e450: 8 total, 8 up, 8 in
2026-03-10T10:24:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:28 vm07 bash[23367]: audit 2026-03-10T10:24:27.179772+0000 mon.a (mon.0) 2772 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:28 vm07 bash[23367]: audit 2026-03-10T10:24:27.180000+0000 mon.a (mon.0) 2773 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-85"}]: dispatch
2026-03-10T10:24:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:28 vm07 bash[23367]: audit 2026-03-10T10:24:28.015894+0000 mon.a (mon.0) 2774 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:24:29.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:24:28 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:24:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:29 vm04 bash[28289]: cluster 2026-03-10T10:24:28.144134+0000 mon.a (mon.0) 2775 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in
2026-03-10T10:24:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:29 vm04 bash[20742]: cluster 2026-03-10T10:24:28.144134+0000 mon.a (mon.0) 2775 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in
2026-03-10T10:24:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:29 vm07 bash[23367]: cluster 2026-03-10T10:24:28.144134+0000 mon.a (mon.0) 2775 : cluster [DBG] osdmap e451: 8 total, 8 up, 8 in
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: OK ] LibRadosTwoPoolsPP.ProxyRead (17828 ms)
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.CachePin
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.CachePin (23265 ms)
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.SetRedirectRead
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.SetRedirectRead (3017 ms)
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestPromoteRead
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestPromoteRead (3462 ms)
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRefRead
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRefRead (3338 ms)
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestUnset
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestUnset (2974 ms)
2026-03-10T10:24:30.175 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestDedupRefRead
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestDedupRefRead (4671 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount (37852 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapRefcount2
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapRefcount2 (16752 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestTestSnapCreate
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestTestSnapCreate (3555 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRedirectAfterPromote (3007 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestCheckRefcountWhenModification (24566 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapIncCount
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapIncCount (14143 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvict
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvict (5079 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictPromote
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictPromote (4101 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: waiting for scrubs...
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: done waiting
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapSizeMismatch (24247 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.DedupFlushRead
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.DedupFlushRead (10153 ms)
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushSnap
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.176 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushSnap (9147 ms)
2026-03-10T10:24:30.177 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestFlushDupCount
2026-03-10T10:24:30.177 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid
2026-03-10T10:24:30.177 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestFlushDupCount (9145 ms)
2026-03-10T10:24:30.177 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringFlush
2026-03-10T10:24:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:30 vm04 bash[28289]: cluster 2026-03-10T10:24:28.491444+0000 mgr.y (mgr.24422) 420 : cluster [DBG] pgmap v683: 260 pgs: 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 2.2 KiB/s wr, 4 op/s
2026-03-10T10:24:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:30 vm04 bash[28289]: audit 2026-03-10T10:24:28.620009+0000 mgr.y (mgr.24422) 421 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
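The api_tier_pp lines are googletest output from the rados API tier suite; rados/test.sh prefixes each line with the test binary's short name. To rerun a single case against a live cluster, a filter of the following shape should work (a sketch; it assumes the ceph_test_rados_api_tier_pp binary from the ceph-test package is on the PATH and the client.admin keyring is readable):

    ceph_test_rados_api_tier_pp --gtest_filter='LibRadosTwoPoolsPP.ManifestEvict'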
2026-03-10T10:24:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:30 vm04 bash[28289]: cluster 2026-03-10T10:24:29.151255+0000 mon.a (mon.0) 2776 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in
2026-03-10T10:24:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:30 vm04 bash[28289]: audit 2026-03-10T10:24:29.152186+0000 mon.a (mon.0) 2777 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-87","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:30 vm04 bash[20742]: cluster 2026-03-10T10:24:28.491444+0000 mgr.y (mgr.24422) 420 : cluster [DBG] pgmap v683: 260 pgs: 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 2.2 KiB/s wr, 4 op/s
2026-03-10T10:24:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:30 vm04 bash[20742]: audit 2026-03-10T10:24:28.620009+0000 mgr.y (mgr.24422) 421 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:30 vm04 bash[20742]: cluster 2026-03-10T10:24:29.151255+0000 mon.a (mon.0) 2776 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in
2026-03-10T10:24:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:30 vm04 bash[20742]: audit 2026-03-10T10:24:29.152186+0000 mon.a (mon.0) 2777 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-87","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:30 vm07 bash[23367]: cluster 2026-03-10T10:24:28.491444+0000 mgr.y (mgr.24422) 420 : cluster [DBG] pgmap v683: 260 pgs: 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 511 B/s rd, 2.2 KiB/s wr, 4 op/s
2026-03-10T10:24:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:30 vm07 bash[23367]: audit 2026-03-10T10:24:28.620009+0000 mgr.y (mgr.24422) 421 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:30 vm07 bash[23367]: cluster 2026-03-10T10:24:29.151255+0000 mon.a (mon.0) 2776 : cluster [DBG] osdmap e452: 8 total, 8 up, 8 in
2026-03-10T10:24:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:30 vm07 bash[23367]: audit 2026-03-10T10:24:29.152186+0000 mon.a (mon.0) 2777 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-87","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:31 vm07 bash[23367]: audit 2026-03-10T10:24:30.150928+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-87","app": "rados","yes_i_really_mean_it": true}]': finished
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:31 vm07 bash[23367]: cluster 2026-03-10T10:24:30.161794+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-10T10:24:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:31 vm07 bash[23367]: cluster 2026-03-10T10:24:30.161794+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-10T10:24:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:31 vm07 bash[23367]: audit 2026-03-10T10:24:30.174999+0000 mon.a (mon.0) 2780 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:31 vm07 bash[23367]: audit 2026-03-10T10:24:30.174999+0000 mon.a (mon.0) 2780 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:31 vm07 bash[23367]: cluster 2026-03-10T10:24:31.169839+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-10T10:24:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:31 vm07 bash[23367]: cluster 2026-03-10T10:24:31.169839+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-10T10:24:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:31 vm07 bash[23367]: audit 2026-03-10T10:24:31.179447+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:31 vm07 bash[23367]: audit 2026-03-10T10:24:31.179447+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:31 vm04 bash[28289]: audit 2026-03-10T10:24:30.150928+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:31 vm04 bash[28289]: audit 2026-03-10T10:24:30.150928+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:31 vm04 bash[28289]: cluster 2026-03-10T10:24:30.161794+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:31 vm04 bash[28289]: cluster 2026-03-10T10:24:30.161794+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:31 vm04 bash[28289]: audit 2026-03-10T10:24:30.174999+0000 mon.a (mon.0) 2780 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:31 vm04 bash[28289]: audit 2026-03-10T10:24:30.174999+0000 mon.a (mon.0) 2780 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:31 vm04 bash[28289]: cluster 2026-03-10T10:24:31.169839+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:31 vm04 bash[28289]: cluster 2026-03-10T10:24:31.169839+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:31 vm04 bash[28289]: audit 2026-03-10T10:24:31.179447+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:31 vm04 bash[28289]: audit 2026-03-10T10:24:31.179447+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:31 vm04 bash[20742]: audit 2026-03-10T10:24:30.150928+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:31 vm04 bash[20742]: audit 2026-03-10T10:24:30.150928+0000 mon.a (mon.0) 2778 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-87","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:31 vm04 bash[20742]: cluster 2026-03-10T10:24:30.161794+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:31 vm04 bash[20742]: cluster 2026-03-10T10:24:30.161794+0000 mon.a (mon.0) 2779 : cluster [DBG] osdmap e453: 8 total, 8 up, 8 in 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:31 vm04 bash[20742]: audit 2026-03-10T10:24:30.174999+0000 mon.a (mon.0) 2780 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:31 vm04 bash[20742]: audit 2026-03-10T10:24:30.174999+0000 mon.a (mon.0) 2780 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:31 vm04 bash[20742]: cluster 2026-03-10T10:24:31.169839+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:31 vm04 bash[20742]: cluster 2026-03-10T10:24:31.169839+0000 mon.a (mon.0) 2781 : cluster [DBG] osdmap e454: 8 total, 8 up, 8 in 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:31 vm04 bash[20742]: audit 2026-03-10T10:24:31.179447+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:31 vm04 bash[20742]: audit 2026-03-10T10:24:31.179447+0000 mon.a (mon.0) 2782 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:24:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:32 vm07 bash[23367]: cluster 2026-03-10T10:24:30.491760+0000 mgr.y (mgr.24422) 422 : cluster [DBG] pgmap v686: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T10:24:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:32 vm07 bash[23367]: cluster 2026-03-10T10:24:30.491760+0000 mgr.y (mgr.24422) 422 : cluster [DBG] pgmap v686: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T10:24:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:32 vm07 bash[23367]: audit 2026-03-10T10:24:32.185664+0000 mon.a (mon.0) 2783 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:32 vm07 bash[23367]: audit 2026-03-10T10:24:32.185664+0000 mon.a (mon.0) 2783 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:32 vm07 bash[23367]: cluster 2026-03-10T10:24:32.188220+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-10T10:24:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:32 vm07 bash[23367]: cluster 2026-03-10T10:24:32.188220+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-10T10:24:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:32 vm07 bash[23367]: audit 2026-03-10T10:24:32.188719+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]: dispatch 2026-03-10T10:24:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:32 vm07 bash[23367]: audit 2026-03-10T10:24:32.188719+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]: dispatch 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:32 vm04 bash[28289]: cluster 2026-03-10T10:24:30.491760+0000 mgr.y (mgr.24422) 422 : cluster [DBG] pgmap v686: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:32 vm04 bash[28289]: cluster 2026-03-10T10:24:30.491760+0000 mgr.y (mgr.24422) 422 : cluster [DBG] pgmap v686: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:32 vm04 bash[28289]: audit 2026-03-10T10:24:32.185664+0000 mon.a (mon.0) 2783 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:32 vm04 bash[28289]: audit 2026-03-10T10:24:32.185664+0000 mon.a (mon.0) 2783 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:32 vm04 bash[28289]: cluster 2026-03-10T10:24:32.188220+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:32 vm04 bash[28289]: cluster 2026-03-10T10:24:32.188220+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:32 vm04 bash[28289]: audit 2026-03-10T10:24:32.188719+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]: dispatch 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:32 vm04 bash[28289]: audit 2026-03-10T10:24:32.188719+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]: dispatch 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:32 vm04 bash[20742]: cluster 2026-03-10T10:24:30.491760+0000 mgr.y (mgr.24422) 422 : cluster [DBG] pgmap v686: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:32 vm04 bash[20742]: cluster 2026-03-10T10:24:30.491760+0000 mgr.y (mgr.24422) 422 : cluster [DBG] pgmap v686: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.5 KiB/s rd, 4.7 KiB/s wr, 11 op/s 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:32 vm04 bash[20742]: audit 2026-03-10T10:24:32.185664+0000 mon.a (mon.0) 2783 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:32 vm04 bash[20742]: audit 2026-03-10T10:24:32.185664+0000 mon.a (mon.0) 2783 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:32 vm04 bash[20742]: cluster 2026-03-10T10:24:32.188220+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:32 vm04 bash[20742]: cluster 2026-03-10T10:24:32.188220+0000 mon.a (mon.0) 2784 : cluster [DBG] osdmap e455: 8 total, 8 up, 8 in 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:32 vm04 bash[20742]: audit 2026-03-10T10:24:32.188719+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]: dispatch 2026-03-10T10:24:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:32 vm04 bash[20742]: audit 2026-03-10T10:24:32.188719+0000 mon.a (mon.0) 2785 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]: dispatch 2026-03-10T10:24:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:24:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:24:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:24:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:34 vm07 bash[23367]: cluster 2026-03-10T10:24:32.492224+0000 mgr.y (mgr.24422) 423 : cluster [DBG] pgmap v689: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-10T10:24:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:34 vm07 bash[23367]: cluster 2026-03-10T10:24:32.492224+0000 mgr.y (mgr.24422) 423 : cluster [DBG] pgmap v689: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-10T10:24:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:34 vm07 bash[23367]: audit 2026-03-10T10:24:33.194785+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]': finished 2026-03-10T10:24:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:34 vm07 bash[23367]: audit 2026-03-10T10:24:33.194785+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]': finished 2026-03-10T10:24:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:34 vm07 bash[23367]: cluster 2026-03-10T10:24:33.199443+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-10T10:24:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:34 vm07 bash[23367]: cluster 2026-03-10T10:24:33.199443+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-10T10:24:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:34 vm07 bash[23367]: audit 2026-03-10T10:24:33.199814+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T10:24:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:34 vm07 bash[23367]: audit 2026-03-10T10:24:33.199814+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T10:24:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:34 vm07 bash[23367]: cluster 2026-03-10T10:24:33.236683+0000 mon.a (mon.0) 2789 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:34 vm07 bash[23367]: cluster 2026-03-10T10:24:33.236683+0000 mon.a (mon.0) 2789 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:34 vm04 bash[28289]: cluster 2026-03-10T10:24:32.492224+0000 mgr.y (mgr.24422) 423 : cluster [DBG] pgmap v689: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:34 vm04 bash[28289]: cluster 2026-03-10T10:24:32.492224+0000 mgr.y (mgr.24422) 423 : cluster [DBG] pgmap v689: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:34 vm04 bash[28289]: audit 2026-03-10T10:24:33.194785+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]': finished 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:34 vm04 bash[28289]: audit 2026-03-10T10:24:33.194785+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]': finished 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:34 vm04 bash[28289]: cluster 2026-03-10T10:24:33.199443+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:34 vm04 bash[28289]: cluster 2026-03-10T10:24:33.199443+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:34 vm04 bash[28289]: audit 2026-03-10T10:24:33.199814+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:34 vm04 bash[28289]: audit 2026-03-10T10:24:33.199814+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:34 vm04 bash[28289]: cluster 2026-03-10T10:24:33.236683+0000 mon.a (mon.0) 2789 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:34 vm04 bash[28289]: cluster 2026-03-10T10:24:33.236683+0000 mon.a (mon.0) 2789 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:34 vm04 bash[20742]: cluster 2026-03-10T10:24:32.492224+0000 mgr.y (mgr.24422) 423 : cluster [DBG] pgmap v689: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:34 vm04 bash[20742]: cluster 2026-03-10T10:24:32.492224+0000 mgr.y (mgr.24422) 423 : cluster [DBG] pgmap v689: 324 pgs: 64 unknown, 260 active+clean; 8.3 MiB data, 928 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 2.5 KiB/s wr, 6 op/s 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:34 vm04 bash[20742]: audit 2026-03-10T10:24:33.194785+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]': finished 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:34 vm04 bash[20742]: audit 2026-03-10T10:24:33.194785+0000 mon.a (mon.0) 2786 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_tier","val": "test-rados-api-vm04-59491-89-test-flush"}]': finished 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:34 vm04 bash[20742]: cluster 2026-03-10T10:24:33.199443+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:34 vm04 bash[20742]: cluster 2026-03-10T10:24:33.199443+0000 mon.a (mon.0) 2787 : cluster [DBG] osdmap e456: 8 total, 8 up, 8 in 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:34 vm04 bash[20742]: audit 2026-03-10T10:24:33.199814+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:34 vm04 bash[20742]: audit 2026-03-10T10:24:33.199814+0000 mon.a (mon.0) 2788 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:34 vm04 bash[20742]: cluster 2026-03-10T10:24:33.236683+0000 mon.a (mon.0) 2789 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:34 vm04 bash[20742]: cluster 2026-03-10T10:24:33.236683+0000 mon.a (mon.0) 2789 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:24:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:35 vm07 bash[23367]: audit 2026-03-10T10:24:34.198018+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:35 vm07 bash[23367]: audit 2026-03-10T10:24:34.198018+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:35 vm07 bash[23367]: cluster 2026-03-10T10:24:34.205716+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-10T10:24:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:35 vm07 bash[23367]: cluster 2026-03-10T10:24:34.205716+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-10T10:24:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:35 vm07 bash[23367]: audit 2026-03-10T10:24:34.208618+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:35 vm07 bash[23367]: audit 2026-03-10T10:24:34.208618+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:35 vm07 bash[23367]: audit 2026-03-10T10:24:35.201178+0000 mon.a (mon.0) 2793 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:35 vm07 bash[23367]: audit 2026-03-10T10:24:35.201178+0000 mon.a (mon.0) 2793 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:35 vm07 bash[23367]: cluster 2026-03-10T10:24:35.203975+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-10T10:24:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:35 vm07 bash[23367]: cluster 2026-03-10T10:24:35.203975+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:35 vm04 bash[20742]: audit 2026-03-10T10:24:34.198018+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:35 vm04 bash[20742]: audit 2026-03-10T10:24:34.198018+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:35 vm04 bash[20742]: cluster 2026-03-10T10:24:34.205716+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:35 vm04 bash[20742]: cluster 2026-03-10T10:24:34.205716+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:35 vm04 bash[20742]: audit 2026-03-10T10:24:34.208618+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:35 vm04 bash[20742]: audit 2026-03-10T10:24:34.208618+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:35 vm04 bash[20742]: audit 2026-03-10T10:24:35.201178+0000 mon.a (mon.0) 2793 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:35 vm04 bash[20742]: audit 2026-03-10T10:24:35.201178+0000 mon.a (mon.0) 2793 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:35 vm04 bash[20742]: cluster 2026-03-10T10:24:35.203975+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:35 vm04 bash[20742]: cluster 2026-03-10T10:24:35.203975+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:35 vm04 bash[28289]: audit 2026-03-10T10:24:34.198018+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:35 vm04 bash[28289]: audit 2026-03-10T10:24:34.198018+0000 mon.a (mon.0) 2790 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:35 vm04 bash[28289]: cluster 2026-03-10T10:24:34.205716+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:35 vm04 bash[28289]: cluster 2026-03-10T10:24:34.205716+0000 mon.a (mon.0) 2791 : cluster [DBG] osdmap e457: 8 total, 8 up, 8 in 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:35 vm04 bash[28289]: audit 2026-03-10T10:24:34.208618+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:35 vm04 bash[28289]: audit 2026-03-10T10:24:34.208618+0000 mon.a (mon.0) 2792 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:35 vm04 bash[28289]: audit 2026-03-10T10:24:35.201178+0000 mon.a (mon.0) 2793 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-87","var": "dedup_cdc_chunk_size","val": "1024"}]': finished 2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:35 vm04 bash[28289]: audit 2026-03-10T10:24:35.201178+0000 mon.a (mon.0) 2793 : audit [INF] from='client.? 
2026-03-10T10:24:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:35 vm04 bash[28289]: cluster 2026-03-10T10:24:35.203975+0000 mon.a (mon.0) 2794 : cluster [DBG] osdmap e458: 8 total, 8 up, 8 in
2026-03-10T10:24:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:36 vm07 bash[23367]: cluster 2026-03-10T10:24:34.492611+0000 mgr.y (mgr.24422) 424 : cluster [DBG] pgmap v692: 324 pgs: 324 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:24:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:36 vm04 bash[20742]: cluster 2026-03-10T10:24:34.492611+0000 mgr.y (mgr.24422) 424 : cluster [DBG] pgmap v692: 324 pgs: 324 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:24:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:36 vm04 bash[28289]: cluster 2026-03-10T10:24:34.492611+0000 mgr.y (mgr.24422) 424 : cluster [DBG] pgmap v692: 324 pgs: 324 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:24:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:37 vm07 bash[23367]: cluster 2026-03-10T10:24:36.233887+0000 mon.a (mon.0) 2795 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in
2026-03-10T10:24:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:37 vm07 bash[23367]: audit 2026-03-10T10:24:36.287065+0000 mon.a (mon.0) 2796 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:37 vm07 bash[23367]: audit 2026-03-10T10:24:36.287310+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-87"}]: dispatch
2026-03-10T10:24:37.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:37 vm04 bash[28289]: cluster 2026-03-10T10:24:36.233887+0000 mon.a (mon.0) 2795 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in
2026-03-10T10:24:37.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:37 vm04 bash[28289]: audit 2026-03-10T10:24:36.287065+0000 mon.a (mon.0) 2796 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:37.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:37 vm04 bash[28289]: audit 2026-03-10T10:24:36.287310+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-87"}]: dispatch
2026-03-10T10:24:37.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:37 vm04 bash[20742]: cluster 2026-03-10T10:24:36.233887+0000 mon.a (mon.0) 2795 : cluster [DBG] osdmap e459: 8 total, 8 up, 8 in
2026-03-10T10:24:37.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:37 vm04 bash[20742]: audit 2026-03-10T10:24:36.287065+0000 mon.a (mon.0) 2796 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:37.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:37 vm04 bash[20742]: audit 2026-03-10T10:24:36.287310+0000 mon.a (mon.0) 2797 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-87"}]: dispatch
2026-03-10T10:24:38.627 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:38 vm07 bash[23367]: cluster 2026-03-10T10:24:36.492941+0000 mgr.y (mgr.24422) 425 : cluster [DBG] pgmap v695: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:24:38.628 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:38 vm07 bash[23367]: cluster 2026-03-10T10:24:37.285957+0000 mon.a (mon.0) 2798 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in
2026-03-10T10:24:38.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:38 vm04 bash[28289]: cluster 2026-03-10T10:24:36.492941+0000 mgr.y (mgr.24422) 425 : cluster [DBG] pgmap v695: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:24:38.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:38 vm04 bash[28289]: cluster 2026-03-10T10:24:37.285957+0000 mon.a (mon.0) 2798 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in
2026-03-10T10:24:38.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:38 vm04 bash[20742]: cluster 2026-03-10T10:24:36.492941+0000 mgr.y (mgr.24422) 425 : cluster [DBG] pgmap v695: 292 pgs: 292 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:24:38.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:38 vm04 bash[20742]: cluster 2026-03-10T10:24:37.285957+0000 mon.a (mon.0) 2798 : cluster [DBG] osdmap e460: 8 total, 8 up, 8 in
2026-03-10T10:24:39.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:24:38 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:24:39.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:39 vm04 bash[28289]: cluster 2026-03-10T10:24:38.350516+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in
2026-03-10T10:24:39.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:39 vm04 bash[28289]: audit 2026-03-10T10:24:38.369067+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-90","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:39.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:39 vm04 bash[20742]: cluster 2026-03-10T10:24:38.350516+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in
2026-03-10T10:24:39.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:39 vm04 bash[20742]: audit 2026-03-10T10:24:38.369067+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-90","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:39 vm07 bash[23367]: cluster 2026-03-10T10:24:38.350516+0000 mon.a (mon.0) 2799 : cluster [DBG] osdmap e461: 8 total, 8 up, 8 in
2026-03-10T10:24:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:39 vm07 bash[23367]: audit 2026-03-10T10:24:38.369067+0000 mon.a (mon.0) 2800 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-90","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:40 vm04 bash[20742]: cluster 2026-03-10T10:24:38.493535+0000 mgr.y (mgr.24422) 426 : cluster [DBG] pgmap v698: 292 pgs: 17 creating+peering, 15 unknown, 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:40 vm04 bash[20742]: audit 2026-03-10T10:24:38.628627+0000 mgr.y (mgr.24422) 427 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:40 vm04 bash[20742]: cluster 2026-03-10T10:24:39.348961+0000 mon.a (mon.0) 2801 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:40 vm04 bash[20742]: audit 2026-03-10T10:24:39.351210+0000 mon.a (mon.0) 2802 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-90","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:40 vm04 bash[20742]: cluster 2026-03-10T10:24:39.354769+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:40 vm04 bash[20742]: audit 2026-03-10T10:24:39.368158+0000 mon.a (mon.0) 2804 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:40 vm04 bash[20742]: audit 2026-03-10T10:24:39.372975+0000 mon.a (mon.0) 2805 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:40 vm04 bash[20742]: audit 2026-03-10T10:24:40.354664+0000 mon.a (mon.0) 2806 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:40 vm04 bash[20742]: cluster 2026-03-10T10:24:40.360631+0000 mon.a (mon.0) 2807 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:40 vm04 bash[28289]: cluster 2026-03-10T10:24:38.493535+0000 mgr.y (mgr.24422) 426 : cluster [DBG] pgmap v698: 292 pgs: 17 creating+peering, 15 unknown, 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:40 vm04 bash[28289]: audit 2026-03-10T10:24:38.628627+0000 mgr.y (mgr.24422) 427 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:40 vm04 bash[28289]: cluster 2026-03-10T10:24:39.348961+0000 mon.a (mon.0) 2801 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:40 vm04 bash[28289]: audit 2026-03-10T10:24:39.351210+0000 mon.a (mon.0) 2802 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-90","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:40 vm04 bash[28289]: cluster 2026-03-10T10:24:39.354769+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:40 vm04 bash[28289]: audit 2026-03-10T10:24:39.368158+0000 mon.a (mon.0) 2804 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:40 vm04 bash[28289]: audit 2026-03-10T10:24:39.372975+0000 mon.a (mon.0) 2805 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:24:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:40 vm04 bash[28289]: audit 2026-03-10T10:24:40.354664+0000 mon.a (mon.0) 2806 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:24:40.704 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:40 vm04 bash[28289]: cluster 2026-03-10T10:24:40.360631+0000 mon.a (mon.0) 2807 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in
2026-03-10T10:24:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:40 vm07 bash[23367]: cluster 2026-03-10T10:24:38.493535+0000 mgr.y (mgr.24422) 426 : cluster [DBG] pgmap v698: 292 pgs: 17 creating+peering, 15 unknown, 260 active+clean; 8.3 MiB data, 937 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:24:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:40 vm07 bash[23367]: audit 2026-03-10T10:24:38.628627+0000 mgr.y (mgr.24422) 427 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:40 vm07 bash[23367]: cluster 2026-03-10T10:24:39.348961+0000 mon.a (mon.0) 2801 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:40 vm07 bash[23367]: audit 2026-03-10T10:24:39.351210+0000 mon.a (mon.0) 2802 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-90","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:40 vm07 bash[23367]: cluster 2026-03-10T10:24:39.354769+0000 mon.a (mon.0) 2803 : cluster [DBG] osdmap e462: 8 total, 8 up, 8 in
2026-03-10T10:24:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:40 vm07 bash[23367]: audit 2026-03-10T10:24:39.368158+0000 mon.a (mon.0) 2804 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:40 vm07 bash[23367]: audit 2026-03-10T10:24:39.372975+0000 mon.a (mon.0) 2805 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]: dispatch
2026-03-10T10:24:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:40 vm07 bash[23367]: audit 2026-03-10T10:24:40.354664+0000 mon.a (mon.0) 2806 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-6","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:24:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:40 vm07 bash[23367]: cluster 2026-03-10T10:24:40.360631+0000 mon.a (mon.0) 2807 : cluster [DBG] osdmap e463: 8 total, 8 up, 8 in
2026-03-10T10:24:41.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:41 vm04 bash[28289]: cluster 2026-03-10T10:24:40.493928+0000 mgr.y (mgr.24422) 428 : cluster [DBG] pgmap v701: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T10:24:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:41 vm04 bash[20742]: cluster 2026-03-10T10:24:40.493928+0000 mgr.y (mgr.24422) 428 : cluster [DBG] pgmap v701: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T10:24:41.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:41 vm07 bash[23367]: cluster 2026-03-10T10:24:40.493928+0000 mgr.y (mgr.24422) 428 : cluster [DBG] pgmap v701: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T10:24:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:42 vm04 bash[28289]: cluster 2026-03-10T10:24:41.391841+0000 mon.a (mon.0) 2808 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in
2026-03-10T10:24:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:42 vm04 bash[20742]: cluster 2026-03-10T10:24:41.391841+0000 mon.a (mon.0) 2808 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in
2026-03-10T10:24:42.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:42 vm07 bash[23367]: cluster 2026-03-10T10:24:41.391841+0000 mon.a (mon.0) 2808 : cluster [DBG] osdmap e464: 8 total, 8 up, 8 in
2026-03-10T10:24:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:24:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:24:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:24:43.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:43 vm07 bash[23367]: cluster 2026-03-10T10:24:42.399199+0000 mon.a (mon.0) 2809 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in
2026-03-10T10:24:43.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:43 vm07 bash[23367]: audit 2026-03-10T10:24:42.443765+0000 mon.a (mon.0) 2810 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:43.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:43 vm07 bash[23367]: audit 2026-03-10T10:24:42.443973+0000 mon.a (mon.0) 2811 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-90"}]: dispatch
2026-03-10T10:24:43.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:43 vm07 bash[23367]: cluster 2026-03-10T10:24:42.494212+0000 mgr.y (mgr.24422) 429 : cluster [DBG] pgmap v704: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T10:24:43.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:43 vm07 bash[23367]: audit 2026-03-10T10:24:43.022271+0000 mon.a (mon.0) 2812 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:24:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:43 vm04 bash[28289]: cluster 2026-03-10T10:24:42.399199+0000 mon.a (mon.0) 2809 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in
2026-03-10T10:24:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:43 vm04 bash[28289]: audit 2026-03-10T10:24:42.443765+0000 mon.a (mon.0) 2810 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:43 vm04 bash[28289]: audit 2026-03-10T10:24:42.443973+0000 mon.a (mon.0) 2811 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-90"}]: dispatch
2026-03-10T10:24:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:43 vm04 bash[28289]: cluster 2026-03-10T10:24:42.494212+0000 mgr.y (mgr.24422) 429 : cluster [DBG] pgmap v704: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T10:24:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:43 vm04 bash[28289]: audit 2026-03-10T10:24:43.022271+0000 mon.a (mon.0) 2812 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:24:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:43 vm04 bash[20742]: cluster 2026-03-10T10:24:42.399199+0000 mon.a (mon.0) 2809 : cluster [DBG] osdmap e465: 8 total, 8 up, 8 in
2026-03-10T10:24:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:43 vm04 bash[20742]: audit 2026-03-10T10:24:42.443765+0000 mon.a (mon.0) 2810 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:43 vm04 bash[20742]: audit 2026-03-10T10:24:42.443973+0000 mon.a (mon.0) 2811 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-90"}]: dispatch
2026-03-10T10:24:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:43 vm04 bash[20742]: cluster 2026-03-10T10:24:42.494212+0000 mgr.y (mgr.24422) 429 : cluster [DBG] pgmap v704: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 938 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T10:24:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:43 vm04 bash[20742]: audit 2026-03-10T10:24:43.022271+0000 mon.a (mon.0) 2812 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:24:44.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:44 vm07 bash[23367]: cluster 2026-03-10T10:24:43.476721+0000 mon.a (mon.0) 2813 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in
2026-03-10T10:24:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:44 vm04 bash[28289]: cluster 2026-03-10T10:24:43.476721+0000 mon.a (mon.0) 2813 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in
2026-03-10T10:24:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:44 vm04 bash[20742]: cluster 2026-03-10T10:24:43.476721+0000 mon.a (mon.0) 2813 : cluster [DBG] osdmap e466: 8 total, 8 up, 8 in
2026-03-10T10:24:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:45 vm07 bash[23367]: cluster 2026-03-10T10:24:44.483395+0000 mon.a (mon.0) 2814 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in
2026-03-10T10:24:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:45 vm07 bash[23367]: audit 2026-03-10T10:24:44.488772+0000 mon.a (mon.0) 2815 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-92","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:45.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:45 vm07 bash[23367]: cluster 2026-03-10T10:24:44.494529+0000 mgr.y (mgr.24422) 430 : cluster [DBG] pgmap v707: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:24:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:45 vm04 bash[28289]: cluster 2026-03-10T10:24:44.483395+0000 mon.a (mon.0) 2814 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in
2026-03-10T10:24:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:45 vm04 bash[28289]: audit 2026-03-10T10:24:44.488772+0000 mon.a (mon.0) 2815 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-92","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:45 vm04 bash[28289]: cluster 2026-03-10T10:24:44.494529+0000 mgr.y (mgr.24422) 430 : cluster [DBG] pgmap v707: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:24:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:45 vm04 bash[20742]: cluster 2026-03-10T10:24:44.483395+0000 mon.a (mon.0) 2814 : cluster [DBG] osdmap e467: 8 total, 8 up, 8 in
2026-03-10T10:24:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:45 vm04 bash[20742]: audit 2026-03-10T10:24:44.488772+0000 mon.a (mon.0) 2815 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-92","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:45 vm04 bash[20742]: cluster 2026-03-10T10:24:44.494529+0000 mgr.y (mgr.24422) 430 : cluster [DBG] pgmap v707: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:24:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:46 vm07 bash[23367]: cluster 2026-03-10T10:24:45.479016+0000 mon.a (mon.0) 2816 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:46 vm07 bash[23367]: audit 2026-03-10T10:24:45.487184+0000 mon.a (mon.0) 2817 : audit [INF] from='client.? 
2026-03-10T10:24:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:46 vm07 bash[23367]: cluster 2026-03-10T10:24:45.496752+0000 mon.a (mon.0) 2818 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in
2026-03-10T10:24:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:46 vm07 bash[23367]: audit 2026-03-10T10:24:45.499704+0000 mon.a (mon.0) 2819 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:46 vm04 bash[28289]: cluster 2026-03-10T10:24:45.479016+0000 mon.a (mon.0) 2816 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:46 vm04 bash[28289]: audit 2026-03-10T10:24:45.487184+0000 mon.a (mon.0) 2817 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-92","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:46 vm04 bash[28289]: cluster 2026-03-10T10:24:45.496752+0000 mon.a (mon.0) 2818 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in
2026-03-10T10:24:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:46 vm04 bash[28289]: audit 2026-03-10T10:24:45.499704+0000 mon.a (mon.0) 2819 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:46 vm04 bash[20742]: cluster 2026-03-10T10:24:45.479016+0000 mon.a (mon.0) 2816 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:46 vm04 bash[20742]: audit 2026-03-10T10:24:45.487184+0000 mon.a (mon.0) 2817 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-92","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:46 vm04 bash[20742]: cluster 2026-03-10T10:24:45.496752+0000 mon.a (mon.0) 2818 : cluster [DBG] osdmap e468: 8 total, 8 up, 8 in
2026-03-10T10:24:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:46 vm04 bash[20742]: audit 2026-03-10T10:24:45.499704+0000 mon.a (mon.0) 2819 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:47 vm04 bash[28289]: cluster 2026-03-10T10:24:46.493766+0000 mon.a (mon.0) 2820 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in
2026-03-10T10:24:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:47 vm04 bash[28289]: cluster 2026-03-10T10:24:46.494787+0000 mgr.y (mgr.24422) 431 : cluster [DBG] pgmap v710: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:24:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:47 vm04 bash[20742]: cluster 2026-03-10T10:24:46.493766+0000 mon.a (mon.0) 2820 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in
2026-03-10T10:24:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:47 vm04 bash[20742]: cluster 2026-03-10T10:24:46.494787+0000 mgr.y (mgr.24422) 431 : cluster [DBG] pgmap v710: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:24:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:47 vm07 bash[23367]: cluster 2026-03-10T10:24:46.493766+0000 mon.a (mon.0) 2820 : cluster [DBG] osdmap e469: 8 total, 8 up, 8 in
2026-03-10T10:24:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:47 vm07 bash[23367]: cluster 2026-03-10T10:24:46.494787+0000 mgr.y (mgr.24422) 431 : cluster [DBG] pgmap v710: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 2 op/s
2026-03-10T10:24:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:48 vm04 bash[28289]: cluster 2026-03-10T10:24:47.531418+0000 mon.a (mon.0) 2821 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in
2026-03-10T10:24:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:48 vm04 bash[28289]: audit 2026-03-10T10:24:47.576723+0000 mon.a (mon.0) 2822 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:48 vm04 bash[28289]: audit 2026-03-10T10:24:47.576967+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-92"}]: dispatch
2026-03-10T10:24:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:48 vm04 bash[20742]: cluster 2026-03-10T10:24:47.531418+0000 mon.a (mon.0) 2821 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in
2026-03-10T10:24:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:48 vm04 bash[20742]: audit 2026-03-10T10:24:47.576723+0000 mon.a (mon.0) 2822 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:48 vm04 bash[20742]: audit 2026-03-10T10:24:47.576967+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-92"}]: dispatch
2026-03-10T10:24:49.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:24:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:24:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:48 vm07 bash[23367]: cluster 2026-03-10T10:24:47.531418+0000 mon.a (mon.0) 2821 : cluster [DBG] osdmap e470: 8 total, 8 up, 8 in
2026-03-10T10:24:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:48 vm07 bash[23367]: audit 2026-03-10T10:24:47.576723+0000 mon.a (mon.0) 2822 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:24:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:48 vm07 bash[23367]: audit 2026-03-10T10:24:47.576967+0000 mon.a (mon.0) 2823 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-92"}]: dispatch
2026-03-10T10:24:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:49 vm04 bash[28289]: cluster 2026-03-10T10:24:48.495302+0000 mgr.y (mgr.24422) 432 : cluster [DBG] pgmap v712: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:24:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:49 vm04 bash[28289]: cluster 2026-03-10T10:24:48.609223+0000 mon.a (mon.0) 2824 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in
2026-03-10T10:24:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:49 vm04 bash[28289]: audit 2026-03-10T10:24:48.639188+0000 mgr.y (mgr.24422) 433 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:49 vm04 bash[20742]: cluster 2026-03-10T10:24:48.495302+0000 mgr.y (mgr.24422) 432 : cluster [DBG] pgmap v712: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:24:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:49 vm04 bash[20742]: cluster 2026-03-10T10:24:48.609223+0000 mon.a (mon.0) 2824 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in
2026-03-10T10:24:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:49 vm04 bash[20742]: audit 2026-03-10T10:24:48.639188+0000 mgr.y (mgr.24422) 433 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:49 vm07 bash[23367]: cluster 2026-03-10T10:24:48.495302+0000 mgr.y (mgr.24422) 432 : cluster [DBG] pgmap v712: 292 pgs: 17 unknown, 275 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:24:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:49 vm07 bash[23367]: cluster 2026-03-10T10:24:48.609223+0000 mon.a (mon.0) 2824 : cluster [DBG] osdmap e471: 8 total, 8 up, 8 in
2026-03-10T10:24:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:49 vm07 bash[23367]: audit 2026-03-10T10:24:48.639188+0000 mgr.y (mgr.24422) 433 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:24:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:50 vm04 bash[28289]: cluster 2026-03-10T10:24:49.621721+0000 mon.a (mon.0) 2825 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in
2026-03-10T10:24:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:50 vm04 bash[28289]: audit 2026-03-10T10:24:49.634559+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-94","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:50.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:50 vm04 bash[20742]: cluster 2026-03-10T10:24:49.621721+0000 mon.a (mon.0) 2825 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in
2026-03-10T10:24:50.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:50 vm04 bash[20742]: audit 2026-03-10T10:24:49.634559+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-94","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:50 vm07 bash[23367]: cluster 2026-03-10T10:24:49.621721+0000 mon.a (mon.0) 2825 : cluster [DBG] osdmap e472: 8 total, 8 up, 8 in
2026-03-10T10:24:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:50 vm07 bash[23367]: audit 2026-03-10T10:24:49.634559+0000 mon.a (mon.0) 2826 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-94","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:24:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:51 vm04 bash[28289]: cluster 2026-03-10T10:24:50.495608+0000 mgr.y (mgr.24422) 434 : cluster [DBG] pgmap v715: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:24:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:51 vm04 bash[28289]: cluster 2026-03-10T10:24:50.618146+0000 mon.a (mon.0) 2827 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:51 vm04 bash[28289]: audit 2026-03-10T10:24:50.619901+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-94","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:51 vm04 bash[28289]: cluster 2026-03-10T10:24:50.627218+0000 mon.a (mon.0) 2829 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in
2026-03-10T10:24:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:51 vm04 bash[28289]: audit 2026-03-10T10:24:50.628919+0000 mon.a (mon.0) 2830 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:51 vm04 bash[20742]: cluster 2026-03-10T10:24:50.495608+0000 mgr.y (mgr.24422) 434 : cluster [DBG] pgmap v715: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:24:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:51 vm04 bash[20742]: cluster 2026-03-10T10:24:50.618146+0000 mon.a (mon.0) 2827 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:51 vm04 bash[20742]: audit 2026-03-10T10:24:50.619901+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-94","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:51 vm04 bash[20742]: cluster 2026-03-10T10:24:50.627218+0000 mon.a (mon.0) 2829 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in
2026-03-10T10:24:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:51 vm04 bash[20742]: audit 2026-03-10T10:24:50.628919+0000 mon.a (mon.0) 2830 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:51 vm07 bash[23367]: cluster 2026-03-10T10:24:50.495608+0000 mgr.y (mgr.24422) 434 : cluster [DBG] pgmap v715: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
2026-03-10T10:24:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:51 vm07 bash[23367]: cluster 2026-03-10T10:24:50.618146+0000 mon.a (mon.0) 2827 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:51 vm07 bash[23367]: audit 2026-03-10T10:24:50.619901+0000 mon.a (mon.0) 2828 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-94","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:24:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:51 vm07 bash[23367]: cluster 2026-03-10T10:24:50.627218+0000 mon.a (mon.0) 2829 : cluster [DBG] osdmap e473: 8 total, 8 up, 8 in
2026-03-10T10:24:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:51 vm07 bash[23367]: audit 2026-03-10T10:24:50.628919+0000 mon.a (mon.0) 2830 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch
2026-03-10T10:24:52.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:52 vm04 bash[28289]: cluster 2026-03-10T10:24:51.655836+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in
2026-03-10T10:24:52.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:52 vm04 bash[28289]: cluster 2026-03-10T10:24:52.658517+0000 mon.a (mon.0) 2832 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in
2026-03-10T10:24:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:52 vm04 bash[20742]: cluster 2026-03-10T10:24:51.655836+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in
2026-03-10T10:24:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:52 vm04 bash[20742]: cluster 2026-03-10T10:24:52.658517+0000 mon.a (mon.0) 2832 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in
2026-03-10T10:24:53.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:52 vm07 bash[23367]: cluster 2026-03-10T10:24:51.655836+0000 mon.a (mon.0) 2831 : cluster [DBG] osdmap e474: 8 total, 8 up, 8 in
2026-03-10T10:24:53.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:52 vm07 bash[23367]: cluster 2026-03-10T10:24:52.658517+0000 mon.a (mon.0) 2832 : cluster [DBG] osdmap e475: 8 total, 8 up, 8 in
2026-03-10T10:24:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:24:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:24:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:24:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:53 vm04 bash[28289]: cluster 2026-03-10T10:24:52.495985+0000 mgr.y (mgr.24422) 435 : cluster [DBG] pgmap v718: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T10:24:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:53 vm04 bash[20742]: cluster 2026-03-10T10:24:52.495985+0000 mgr.y (mgr.24422) 435 : cluster [DBG] pgmap v718: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T10:24:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:53 vm07 bash[23367]: cluster 2026-03-10T10:24:52.495985+0000 mgr.y (mgr.24422) 435 : cluster [DBG] pgmap v718: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 939 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T10:24:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:55 vm04 bash[20742]: cluster 2026-03-10T10:24:54.496790+0000 mgr.y (mgr.24422) 436 : cluster [DBG] pgmap v720: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.6 KiB/s wr, 6 op/s
2026-03-10T10:24:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:55 vm04 bash[28289]: cluster 2026-03-10T10:24:54.496790+0000 mgr.y (mgr.24422) 436 : cluster [DBG] pgmap v720: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.6 KiB/s wr, 6 op/s
2026-03-10T10:24:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:55 vm07 bash[23367]: cluster 2026-03-10T10:24:54.496790+0000 mgr.y (mgr.24422) 436 : cluster [DBG] pgmap v720: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 1.6 KiB/s wr, 6 op/s
2026-03-10T10:24:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:56 vm04 bash[28289]: cluster 2026-03-10T10:24:56.515567+0000 mon.a (mon.0) 2833 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:56 vm04 bash[20742]: cluster 2026-03-10T10:24:56.515567+0000 mon.a (mon.0) 2833 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:56 vm07 bash[23367]: cluster 2026-03-10T10:24:56.515567+0000 mon.a (mon.0) 2833 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:24:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:57 vm04 bash[20742]: cluster 2026-03-10T10:24:56.497095+0000 mgr.y (mgr.24422) 437 : cluster [DBG] pgmap v721: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
2026-03-10T10:24:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:57 vm04 bash[28289]: cluster 2026-03-10T10:24:56.497095+0000 mgr.y (mgr.24422) 437 : cluster [DBG] pgmap v721: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
2026-03-10T10:24:58.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:57 vm07 bash[23367]: cluster 2026-03-10T10:24:56.497095+0000 mgr.y (mgr.24422) 437 : cluster [DBG] pgmap v721: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
2026-03-10T10:24:59.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:24:58 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:24:59.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:58 vm07 bash[23367]: audit 2026-03-10T10:24:58.028221+0000 mon.a (mon.0) 2834 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:24:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:58 vm04 bash[28289]: audit 2026-03-10T10:24:58.028221+0000 mon.a (mon.0) 2834 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:24:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:58 vm04 bash[20742]: audit 2026-03-10T10:24:58.028221+0000 mon.a (mon.0) 2834 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:25:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:59 vm04 bash[28289]: cluster 2026-03-10T10:24:58.497666+0000 mgr.y (mgr.24422) 438 : cluster [DBG] pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.0 KiB/s wr, 3 op/s
2026-03-10T10:25:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:24:59 vm04 bash[28289]: audit 2026-03-10T10:24:58.649517+0000 mgr.y (mgr.24422) 439 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:25:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:59 vm04 bash[20742]: cluster 2026-03-10T10:24:58.497666+0000 mgr.y (mgr.24422) 438 : cluster [DBG] pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.0 KiB/s wr, 3 op/s
2026-03-10T10:25:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:24:59 vm04 bash[20742]: audit 2026-03-10T10:24:58.649517+0000 mgr.y (mgr.24422) 439 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:25:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:59 vm07 bash[23367]: cluster 2026-03-10T10:24:58.497666+0000 mgr.y (mgr.24422) 438 : cluster [DBG] pgmap v722: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.0 KiB/s wr, 3 op/s
2026-03-10T10:25:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:24:59 vm07 bash[23367]: audit 2026-03-10T10:24:58.649517+0000 mgr.y (mgr.24422) 439 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:25:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:01 vm04 bash[28289]: cluster 2026-03-10T10:25:00.498379+0000 mgr.y (mgr.24422) 440 : cluster [DBG] pgmap v723: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 926 B/s wr, 4 op/s
2026-03-10T10:25:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:01 vm04 bash[20742]: cluster 2026-03-10T10:25:00.498379+0000 mgr.y (mgr.24422) 440 : cluster [DBG] pgmap v723: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 926 B/s wr, 4 op/s
2026-03-10T10:25:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:01 vm07 bash[23367]: cluster 2026-03-10T10:25:00.498379+0000 mgr.y (mgr.24422) 440 : cluster [DBG] pgmap v723: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 926 B/s wr, 4 op/s
2026-03-10T10:25:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:25:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:25:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:25:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:04 vm04 bash[20742]: cluster 2026-03-10T10:25:02.498769+0000 mgr.y (mgr.24422) 441 : cluster [DBG] pgmap v724: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 818 B/s wr, 3 op/s
2026-03-10T10:25:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:04 vm04 bash[20742]: cluster 2026-03-10T10:25:02.889753+0000 mon.a (mon.0) 2835 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in
2026-03-10T10:25:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:04 vm04 bash[28289]: cluster 2026-03-10T10:25:02.498769+0000 mgr.y (mgr.24422) 441 : cluster [DBG] pgmap v724: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 818 B/s wr, 3 op/s
2026-03-10T10:25:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:04 vm04 bash[28289]: cluster 2026-03-10T10:25:02.889753+0000 mon.a (mon.0) 2835 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in
2026-03-10T10:25:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:04 vm07 bash[23367]: cluster 2026-03-10T10:25:02.498769+0000 mgr.y (mgr.24422) 441 : cluster [DBG] pgmap v724: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 818 B/s wr, 3 op/s
2026-03-10T10:25:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:04 vm07 bash[23367]: cluster 2026-03-10T10:25:02.889753+0000 mon.a (mon.0) 2835 : cluster [DBG] osdmap e476: 8 total, 8 up, 8 in
2026-03-10T10:25:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:05 vm04 bash[28289]: cluster 2026-03-10T10:25:04.499560+0000 mgr.y (mgr.24422) 442 : cluster [DBG] pgmap v726: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:05 vm04 bash[20742]: cluster 2026-03-10T10:25:04.499560+0000 mgr.y (mgr.24422) 442 : cluster [DBG] pgmap v726: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:05.735 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:25:05 vm07 bash[50688]: logger=cleanup t=2026-03-10T10:25:05.587821397Z level=info msg="Completed cleanup jobs" duration=1.448053ms
2026-03-10T10:25:05.735 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:05 vm07 bash[23367]: cluster 2026-03-10T10:25:04.499560+0000 mgr.y (mgr.24422) 442 : cluster [DBG] pgmap v726: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:06.016 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:25:05 vm07 bash[50688]: logger=plugins.update.checker t=2026-03-10T10:25:05.734066572Z level=info msg="Update check succeeded" duration=54.187094ms
2026-03-10T10:25:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:07 vm04 bash[28289]: cluster 2026-03-10T10:25:06.499956+0000 mgr.y (mgr.24422) 443 : cluster [DBG] pgmap v727: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:07 vm04 bash[28289]: cluster 2026-03-10T10:25:06.528109+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in
2026-03-10T10:25:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:07 vm04 bash[20742]: cluster 2026-03-10T10:25:06.499956+0000 mgr.y (mgr.24422) 443 : cluster [DBG] pgmap v727: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:07 vm04 bash[20742]: cluster 2026-03-10T10:25:06.528109+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in
2026-03-10T10:25:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:07 vm07 bash[23367]: cluster 2026-03-10T10:25:06.499956+0000 mgr.y (mgr.24422) 443 : cluster [DBG] pgmap v727: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:07 vm07 bash[23367]: cluster 2026-03-10T10:25:06.528109+0000 mon.a (mon.0) 2836 : cluster [DBG] osdmap e477: 8 total, 8 up, 8 in
2026-03-10T10:25:09.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:25:08 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:25:09.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:09 vm04 bash[28289]: cluster 2026-03-10T10:25:08.500623+0000 mgr.y (mgr.24422) 444 : cluster [DBG] pgmap v729: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:09.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:09 vm04 bash[28289]: audit 2026-03-10T10:25:08.660166+0000 mgr.y (mgr.24422) 445 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:25:09.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:09 vm04 bash[20742]: cluster 2026-03-10T10:25:08.500623+0000 mgr.y (mgr.24422) 444 : cluster [DBG] pgmap v729: 292 pgs: 292 active+clean; 8.3 MiB data,
940 MiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:09.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:09 vm04 bash[20742]: audit 2026-03-10T10:25:08.660166+0000 mgr.y (mgr.24422) 445 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:09.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:09 vm04 bash[20742]: audit 2026-03-10T10:25:08.660166+0000 mgr.y (mgr.24422) 445 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:09 vm07 bash[23367]: cluster 2026-03-10T10:25:08.500623+0000 mgr.y (mgr.24422) 444 : cluster [DBG] pgmap v729: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:09 vm07 bash[23367]: cluster 2026-03-10T10:25:08.500623+0000 mgr.y (mgr.24422) 444 : cluster [DBG] pgmap v729: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 895 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:09 vm07 bash[23367]: audit 2026-03-10T10:25:08.660166+0000 mgr.y (mgr.24422) 445 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:09 vm07 bash[23367]: audit 2026-03-10T10:25:08.660166+0000 mgr.y (mgr.24422) 445 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:11.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:11 vm04 bash[28289]: cluster 2026-03-10T10:25:10.501231+0000 mgr.y (mgr.24422) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:11.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:11 vm04 bash[28289]: cluster 2026-03-10T10:25:10.501231+0000 mgr.y (mgr.24422) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:11.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:11 vm04 bash[20742]: cluster 2026-03-10T10:25:10.501231+0000 mgr.y (mgr.24422) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:11.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:11 vm04 bash[20742]: cluster 2026-03-10T10:25:10.501231+0000 mgr.y (mgr.24422) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:11 vm07 bash[23367]: cluster 2026-03-10T10:25:10.501231+0000 mgr.y (mgr.24422) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:11 vm07 bash[23367]: cluster 2026-03-10T10:25:10.501231+0000 mgr.y (mgr.24422) 446 : cluster [DBG] pgmap v730: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB 
avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:25:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:25:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:13 vm04 bash[28289]: cluster 2026-03-10T10:25:12.501531+0000 mgr.y (mgr.24422) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:13 vm04 bash[28289]: cluster 2026-03-10T10:25:12.501531+0000 mgr.y (mgr.24422) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:13 vm04 bash[28289]: audit 2026-03-10T10:25:12.908867+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:13 vm04 bash[28289]: audit 2026-03-10T10:25:12.908867+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:13 vm04 bash[28289]: audit 2026-03-10T10:25:12.909182+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-94"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:13 vm04 bash[28289]: audit 2026-03-10T10:25:12.909182+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-94"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:13 vm04 bash[28289]: audit 2026-03-10T10:25:13.033854+0000 mon.a (mon.0) 2839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:13 vm04 bash[28289]: audit 2026-03-10T10:25:13.033854+0000 mon.a (mon.0) 2839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:13 vm04 bash[20742]: cluster 2026-03-10T10:25:12.501531+0000 mgr.y (mgr.24422) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:13 vm04 bash[20742]: cluster 2026-03-10T10:25:12.501531+0000 mgr.y (mgr.24422) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:13 vm04 bash[20742]: audit 2026-03-10T10:25:12.908867+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:13 vm04 bash[20742]: audit 2026-03-10T10:25:12.908867+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:13 vm04 bash[20742]: audit 2026-03-10T10:25:12.909182+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-94"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:13 vm04 bash[20742]: audit 2026-03-10T10:25:12.909182+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-94"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:13 vm04 bash[20742]: audit 2026-03-10T10:25:13.033854+0000 mon.a (mon.0) 2839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:13 vm04 bash[20742]: audit 2026-03-10T10:25:13.033854+0000 mon.a (mon.0) 2839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:13 vm07 bash[23367]: cluster 2026-03-10T10:25:12.501531+0000 mgr.y (mgr.24422) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:13 vm07 bash[23367]: cluster 2026-03-10T10:25:12.501531+0000 mgr.y (mgr.24422) 447 : cluster [DBG] pgmap v731: 292 pgs: 292 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:13 vm07 bash[23367]: audit 2026-03-10T10:25:12.908867+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:13 vm07 bash[23367]: audit 2026-03-10T10:25:12.908867+0000 mon.a (mon.0) 2837 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:13 vm07 bash[23367]: audit 2026-03-10T10:25:12.909182+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-94"}]: dispatch 2026-03-10T10:25:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:13 vm07 bash[23367]: audit 2026-03-10T10:25:12.909182+0000 mon.a (mon.0) 2838 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-94"}]: dispatch 2026-03-10T10:25:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:13 vm07 bash[23367]: audit 2026-03-10T10:25:13.033854+0000 mon.a (mon.0) 2839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:13 vm07 bash[23367]: audit 2026-03-10T10:25:13.033854+0000 mon.a (mon.0) 2839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:14.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:14 vm04 bash[28289]: cluster 2026-03-10T10:25:13.556497+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T10:25:14.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:14 vm04 bash[28289]: cluster 2026-03-10T10:25:13.556497+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T10:25:14.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:14 vm04 bash[28289]: cluster 2026-03-10T10:25:14.578211+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T10:25:14.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:14 vm04 bash[28289]: cluster 2026-03-10T10:25:14.578211+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T10:25:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:14 vm04 bash[20742]: cluster 2026-03-10T10:25:13.556497+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T10:25:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:14 vm04 bash[20742]: cluster 2026-03-10T10:25:13.556497+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T10:25:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:14 vm04 bash[20742]: cluster 2026-03-10T10:25:14.578211+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T10:25:14.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:14 vm04 bash[20742]: cluster 2026-03-10T10:25:14.578211+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T10:25:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:14 vm07 bash[23367]: cluster 2026-03-10T10:25:13.556497+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T10:25:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:14 vm07 bash[23367]: cluster 2026-03-10T10:25:13.556497+0000 mon.a (mon.0) 2840 : cluster [DBG] osdmap e478: 8 total, 8 up, 8 in 2026-03-10T10:25:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:14 vm07 bash[23367]: cluster 2026-03-10T10:25:14.578211+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T10:25:15.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:14 vm07 bash[23367]: cluster 2026-03-10T10:25:14.578211+0000 mon.a (mon.0) 2841 : cluster [DBG] osdmap e479: 8 total, 8 up, 8 in 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:15 vm04 bash[28289]: cluster 2026-03-10T10:25:14.502068+0000 mgr.y (mgr.24422) 448 : cluster [DBG] pgmap v733: 260 pgs: 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:15.953 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:15 vm04 bash[28289]: cluster 2026-03-10T10:25:14.502068+0000 mgr.y (mgr.24422) 448 : cluster [DBG] pgmap v733: 260 pgs: 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:15 vm04 bash[28289]: audit 2026-03-10T10:25:14.583531+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:15 vm04 bash[28289]: audit 2026-03-10T10:25:14.583531+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:15 vm04 bash[28289]: audit 2026-03-10T10:25:15.555025+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:15 vm04 bash[28289]: audit 2026-03-10T10:25:15.555025+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:15 vm04 bash[28289]: cluster 2026-03-10T10:25:15.566226+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:15 vm04 bash[28289]: cluster 2026-03-10T10:25:15.566226+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:15 vm04 bash[28289]: audit 2026-03-10T10:25:15.569302+0000 mon.a (mon.0) 2845 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:15 vm04 bash[28289]: audit 2026-03-10T10:25:15.569302+0000 mon.a (mon.0) 2845 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:15 vm04 bash[20742]: cluster 2026-03-10T10:25:14.502068+0000 mgr.y (mgr.24422) 448 : cluster [DBG] pgmap v733: 260 pgs: 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:15 vm04 bash[20742]: cluster 2026-03-10T10:25:14.502068+0000 mgr.y (mgr.24422) 448 : cluster [DBG] pgmap v733: 260 pgs: 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:15 vm04 bash[20742]: audit 2026-03-10T10:25:14.583531+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:15 vm04 bash[20742]: audit 2026-03-10T10:25:14.583531+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:15 vm04 bash[20742]: audit 2026-03-10T10:25:15.555025+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:15 vm04 bash[20742]: audit 2026-03-10T10:25:15.555025+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:15 vm04 bash[20742]: cluster 2026-03-10T10:25:15.566226+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:15 vm04 bash[20742]: cluster 2026-03-10T10:25:15.566226+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:15 vm04 bash[20742]: audit 2026-03-10T10:25:15.569302+0000 mon.a (mon.0) 2845 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:15 vm04 bash[20742]: audit 2026-03-10T10:25:15.569302+0000 mon.a (mon.0) 2845 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:15 vm07 bash[23367]: cluster 2026-03-10T10:25:14.502068+0000 mgr.y (mgr.24422) 448 : cluster [DBG] pgmap v733: 260 pgs: 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:15 vm07 bash[23367]: cluster 2026-03-10T10:25:14.502068+0000 mgr.y (mgr.24422) 448 : cluster [DBG] pgmap v733: 260 pgs: 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:15 vm07 bash[23367]: audit 2026-03-10T10:25:14.583531+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:15 vm07 bash[23367]: audit 2026-03-10T10:25:14.583531+0000 mon.a (mon.0) 2842 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:15 vm07 bash[23367]: audit 2026-03-10T10:25:15.555025+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:15 vm07 bash[23367]: audit 2026-03-10T10:25:15.555025+0000 mon.a (mon.0) 2843 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-96","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:15 vm07 bash[23367]: cluster 2026-03-10T10:25:15.566226+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T10:25:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:15 vm07 bash[23367]: cluster 2026-03-10T10:25:15.566226+0000 mon.a (mon.0) 2844 : cluster [DBG] osdmap e480: 8 total, 8 up, 8 in 2026-03-10T10:25:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:15 vm07 bash[23367]: audit 2026-03-10T10:25:15.569302+0000 mon.a (mon.0) 2845 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:15 vm07 bash[23367]: audit 2026-03-10T10:25:15.569302+0000 mon.a (mon.0) 2845 : audit [DBG] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:17 vm04 bash[28289]: cluster 2026-03-10T10:25:16.502373+0000 mgr.y (mgr.24422) 449 : cluster [DBG] pgmap v736: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:17 vm04 bash[28289]: cluster 2026-03-10T10:25:16.502373+0000 mgr.y (mgr.24422) 449 : cluster [DBG] pgmap v736: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:17 vm04 bash[28289]: cluster 2026-03-10T10:25:16.561178+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T10:25:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:17 vm04 bash[28289]: cluster 2026-03-10T10:25:16.561178+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T10:25:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:17 vm04 bash[20742]: cluster 2026-03-10T10:25:16.502373+0000 mgr.y (mgr.24422) 449 : cluster [DBG] pgmap v736: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:17 vm04 bash[20742]: cluster 2026-03-10T10:25:16.502373+0000 mgr.y (mgr.24422) 449 : cluster [DBG] pgmap v736: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:17 vm04 bash[20742]: cluster 2026-03-10T10:25:16.561178+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T10:25:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:17 vm04 bash[20742]: cluster 2026-03-10T10:25:16.561178+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T10:25:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:17 vm07 bash[23367]: cluster 2026-03-10T10:25:16.502373+0000 mgr.y (mgr.24422) 449 : cluster [DBG] pgmap v736: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:17 vm07 bash[23367]: cluster 2026-03-10T10:25:16.502373+0000 mgr.y (mgr.24422) 449 : cluster [DBG] pgmap v736: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:17 vm07 bash[23367]: cluster 2026-03-10T10:25:16.561178+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T10:25:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:17 vm07 bash[23367]: cluster 2026-03-10T10:25:16.561178+0000 mon.a (mon.0) 2846 : cluster [DBG] osdmap e481: 8 total, 8 up, 8 in 2026-03-10T10:25:19.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:25:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:25:19.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:19 vm04 bash[28289]: cluster 2026-03-10T10:25:18.503045+0000 mgr.y (mgr.24422) 450 : cluster [DBG] pgmap v738: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 827 B/s wr, 3 op/s 2026-03-10T10:25:19.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:19 vm04 
bash[28289]: cluster 2026-03-10T10:25:18.503045+0000 mgr.y (mgr.24422) 450 : cluster [DBG] pgmap v738: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 827 B/s wr, 3 op/s 2026-03-10T10:25:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:19 vm04 bash[28289]: audit 2026-03-10T10:25:18.668009+0000 mgr.y (mgr.24422) 451 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:19 vm04 bash[28289]: audit 2026-03-10T10:25:18.668009+0000 mgr.y (mgr.24422) 451 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:19 vm04 bash[20742]: cluster 2026-03-10T10:25:18.503045+0000 mgr.y (mgr.24422) 450 : cluster [DBG] pgmap v738: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 827 B/s wr, 3 op/s 2026-03-10T10:25:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:19 vm04 bash[20742]: cluster 2026-03-10T10:25:18.503045+0000 mgr.y (mgr.24422) 450 : cluster [DBG] pgmap v738: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 827 B/s wr, 3 op/s 2026-03-10T10:25:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:19 vm04 bash[20742]: audit 2026-03-10T10:25:18.668009+0000 mgr.y (mgr.24422) 451 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:19 vm04 bash[20742]: audit 2026-03-10T10:25:18.668009+0000 mgr.y (mgr.24422) 451 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:20.015 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:19 vm07 bash[23367]: cluster 2026-03-10T10:25:18.503045+0000 mgr.y (mgr.24422) 450 : cluster [DBG] pgmap v738: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 827 B/s wr, 3 op/s 2026-03-10T10:25:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:19 vm07 bash[23367]: cluster 2026-03-10T10:25:18.503045+0000 mgr.y (mgr.24422) 450 : cluster [DBG] pgmap v738: 292 pgs: 13 unknown, 279 active+clean; 8.3 MiB data, 940 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 827 B/s wr, 3 op/s 2026-03-10T10:25:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:19 vm07 bash[23367]: audit 2026-03-10T10:25:18.668009+0000 mgr.y (mgr.24422) 451 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:19 vm07 bash[23367]: audit 2026-03-10T10:25:18.668009+0000 mgr.y (mgr.24422) 451 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:21 vm04 bash[28289]: cluster 2026-03-10T10:25:20.503719+0000 mgr.y (mgr.24422) 452 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T10:25:21.953 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:21 vm04 bash[28289]: cluster 2026-03-10T10:25:20.503719+0000 mgr.y (mgr.24422) 452 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T10:25:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:21 vm04 bash[20742]: cluster 2026-03-10T10:25:20.503719+0000 mgr.y (mgr.24422) 452 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T10:25:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:21 vm04 bash[20742]: cluster 2026-03-10T10:25:20.503719+0000 mgr.y (mgr.24422) 452 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T10:25:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:21 vm07 bash[23367]: cluster 2026-03-10T10:25:20.503719+0000 mgr.y (mgr.24422) 452 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T10:25:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:21 vm07 bash[23367]: cluster 2026-03-10T10:25:20.503719+0000 mgr.y (mgr.24422) 452 : cluster [DBG] pgmap v739: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 853 B/s wr, 2 op/s 2026-03-10T10:25:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:25:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:25:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:25:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:23 vm04 bash[28289]: cluster 2026-03-10T10:25:22.504092+0000 mgr.y (mgr.24422) 453 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 643 B/s rd, 643 B/s wr, 2 op/s 2026-03-10T10:25:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:23 vm04 bash[28289]: cluster 2026-03-10T10:25:22.504092+0000 mgr.y (mgr.24422) 453 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 643 B/s rd, 643 B/s wr, 2 op/s 2026-03-10T10:25:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:23 vm04 bash[20742]: cluster 2026-03-10T10:25:22.504092+0000 mgr.y (mgr.24422) 453 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 643 B/s rd, 643 B/s wr, 2 op/s 2026-03-10T10:25:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:23 vm04 bash[20742]: cluster 2026-03-10T10:25:22.504092+0000 mgr.y (mgr.24422) 453 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 643 B/s rd, 643 B/s wr, 2 op/s 2026-03-10T10:25:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:23 vm07 bash[23367]: cluster 2026-03-10T10:25:22.504092+0000 mgr.y (mgr.24422) 453 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 643 B/s rd, 643 B/s wr, 2 op/s 2026-03-10T10:25:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:23 vm07 bash[23367]: cluster 2026-03-10T10:25:22.504092+0000 mgr.y (mgr.24422) 453 : cluster [DBG] pgmap v740: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 643 B/s rd, 643 B/s wr, 2 op/s 2026-03-10T10:25:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:25 vm04 bash[28289]: 
cluster 2026-03-10T10:25:24.504831+0000 mgr.y (mgr.24422) 454 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 572 B/s wr, 2 op/s 2026-03-10T10:25:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:25 vm04 bash[28289]: cluster 2026-03-10T10:25:24.504831+0000 mgr.y (mgr.24422) 454 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 572 B/s wr, 2 op/s 2026-03-10T10:25:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:25 vm04 bash[20742]: cluster 2026-03-10T10:25:24.504831+0000 mgr.y (mgr.24422) 454 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 572 B/s wr, 2 op/s 2026-03-10T10:25:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:25 vm04 bash[20742]: cluster 2026-03-10T10:25:24.504831+0000 mgr.y (mgr.24422) 454 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 572 B/s wr, 2 op/s 2026-03-10T10:25:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:25 vm07 bash[23367]: cluster 2026-03-10T10:25:24.504831+0000 mgr.y (mgr.24422) 454 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 572 B/s wr, 2 op/s 2026-03-10T10:25:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:25 vm07 bash[23367]: cluster 2026-03-10T10:25:24.504831+0000 mgr.y (mgr.24422) 454 : cluster [DBG] pgmap v741: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 572 B/s wr, 2 op/s 2026-03-10T10:25:26.953 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:25:26 vm04 bash[43416]: debug 2026-03-10T10:25:26.654+0000 7f03a59f3640 -1 snap_mapper.add_oid found existing snaps mapped on 102:7876060b:test-rados-api-vm04-59491-97::foo:21, removing 2026-03-10T10:25:26.953 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:25:26 vm04 bash[49304]: debug 2026-03-10T10:25:26.654+0000 7f60c6178640 -1 snap_mapper.add_oid found existing snaps mapped on 102:7876060b:test-rados-api-vm04-59491-97::foo:21, removing 2026-03-10T10:25:27.016 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:25:26 vm07 bash[26644]: debug 2026-03-10T10:25:26.649+0000 7f4609744640 -1 snap_mapper.add_oid found existing snaps mapped on 102:7876060b:test-rados-api-vm04-59491-97::foo:21, removing 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: cluster 2026-03-10T10:25:26.505118+0000 mgr.y (mgr.24422) 455 : cluster [DBG] pgmap v742: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: cluster 2026-03-10T10:25:26.505118+0000 mgr.y (mgr.24422) 455 : cluster [DBG] pgmap v742: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:26.776003+0000 mon.a (mon.0) 2847 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:26.776003+0000 mon.a (mon.0) 2847 : audit [DBG] from='mgr.24422 
192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:26.813712+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:26.813712+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:26.814037+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-96"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:26.814037+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-96"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:27.113293+0000 mon.a (mon.0) 2850 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:27.113293+0000 mon.a (mon.0) 2850 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:27.113400+0000 mon.a (mon.0) 2851 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:27.113400+0000 mon.a (mon.0) 2851 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:27.114128+0000 mon.a (mon.0) 2852 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:27.114128+0000 mon.a (mon.0) 2852 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:27.114577+0000 mon.a (mon.0) 2853 : audit [INF] from='mgr.24422 
192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:27.114577+0000 mon.a (mon.0) 2853 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:27.119402+0000 mon.a (mon.0) 2854 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:25:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:27 vm07 bash[23367]: audit 2026-03-10T10:25:27.119402+0000 mon.a (mon.0) 2854 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: cluster 2026-03-10T10:25:26.505118+0000 mgr.y (mgr.24422) 455 : cluster [DBG] pgmap v742: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: cluster 2026-03-10T10:25:26.505118+0000 mgr.y (mgr.24422) 455 : cluster [DBG] pgmap v742: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:26.776003+0000 mon.a (mon.0) 2847 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:26.776003+0000 mon.a (mon.0) 2847 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:26.813712+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:26.813712+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:26.814037+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-96"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:26.814037+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-96"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:27.113293+0000 mon.a (mon.0) 2850 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:27.113293+0000 mon.a (mon.0) 2850 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:27.113400+0000 mon.a (mon.0) 2851 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:27.113400+0000 mon.a (mon.0) 2851 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:27.114128+0000 mon.a (mon.0) 2852 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:27.114128+0000 mon.a (mon.0) 2852 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:27.114577+0000 mon.a (mon.0) 2853 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:27.114577+0000 mon.a (mon.0) 2853 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:27.119402+0000 mon.a (mon.0) 2854 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:27 vm04 bash[28289]: audit 2026-03-10T10:25:27.119402+0000 mon.a (mon.0) 2854 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: cluster 2026-03-10T10:25:26.505118+0000 mgr.y (mgr.24422) 455 : cluster [DBG] pgmap v742: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: cluster 2026-03-10T10:25:26.505118+0000 mgr.y (mgr.24422) 455 : cluster [DBG] 
pgmap v742: 292 pgs: 292 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:26.776003+0000 mon.a (mon.0) 2847 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:26.776003+0000 mon.a (mon.0) 2847 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:26.813712+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:26.813712+0000 mon.a (mon.0) 2848 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:26.814037+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-96"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:26.814037+0000 mon.a (mon.0) 2849 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-96"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:27.113293+0000 mon.a (mon.0) 2850 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:27.113293+0000 mon.a (mon.0) 2850 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:27.113400+0000 mon.a (mon.0) 2851 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:27.113400+0000 mon.a (mon.0) 2851 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:27.114128+0000 mon.a (mon.0) 2852 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:27.114128+0000 mon.a (mon.0) 2852 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:27.114577+0000 mon.a (mon.0) 2853 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:27.114577+0000 mon.a (mon.0) 2853 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:27.119402+0000 mon.a (mon.0) 2854 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:25:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:27 vm04 bash[20742]: audit 2026-03-10T10:25:27.119402+0000 mon.a (mon.0) 2854 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:25:29.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:25:28 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:25:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:28 vm07 bash[23367]: cluster 2026-03-10T10:25:27.775377+0000 mon.a (mon.0) 2855 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T10:25:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:28 vm07 bash[23367]: cluster 
2026-03-10T10:25:27.775377+0000 mon.a (mon.0) 2855 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T10:25:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:28 vm07 bash[23367]: audit 2026-03-10T10:25:28.039587+0000 mon.a (mon.0) 2856 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:28 vm07 bash[23367]: audit 2026-03-10T10:25:28.039587+0000 mon.a (mon.0) 2856 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:28 vm04 bash[28289]: cluster 2026-03-10T10:25:27.775377+0000 mon.a (mon.0) 2855 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T10:25:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:28 vm04 bash[28289]: cluster 2026-03-10T10:25:27.775377+0000 mon.a (mon.0) 2855 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T10:25:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:28 vm04 bash[28289]: audit 2026-03-10T10:25:28.039587+0000 mon.a (mon.0) 2856 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:28 vm04 bash[28289]: audit 2026-03-10T10:25:28.039587+0000 mon.a (mon.0) 2856 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:28 vm04 bash[20742]: cluster 2026-03-10T10:25:27.775377+0000 mon.a (mon.0) 2855 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T10:25:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:28 vm04 bash[20742]: cluster 2026-03-10T10:25:27.775377+0000 mon.a (mon.0) 2855 : cluster [DBG] osdmap e482: 8 total, 8 up, 8 in 2026-03-10T10:25:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:28 vm04 bash[20742]: audit 2026-03-10T10:25:28.039587+0000 mon.a (mon.0) 2856 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:28 vm04 bash[20742]: audit 2026-03-10T10:25:28.039587+0000 mon.a (mon.0) 2856 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:29 vm04 bash[28289]: cluster 2026-03-10T10:25:28.505547+0000 mgr.y (mgr.24422) 456 : cluster [DBG] pgmap v744: 260 pgs: 260 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:29 vm04 bash[28289]: cluster 2026-03-10T10:25:28.505547+0000 mgr.y (mgr.24422) 456 : cluster [DBG] pgmap v744: 260 pgs: 260 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:29 vm04 bash[28289]: audit 2026-03-10T10:25:28.673155+0000 mgr.y (mgr.24422) 457 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:29 vm04 bash[28289]: audit 2026-03-10T10:25:28.673155+0000 mgr.y (mgr.24422) 457 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:29 vm04 bash[28289]: cluster 2026-03-10T10:25:28.795928+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:29 vm04 bash[28289]: cluster 2026-03-10T10:25:28.795928+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:29 vm04 bash[28289]: audit 2026-03-10T10:25:28.800389+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:29 vm04 bash[28289]: audit 2026-03-10T10:25:28.800389+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:29 vm04 bash[20742]: cluster 2026-03-10T10:25:28.505547+0000 mgr.y (mgr.24422) 456 : cluster [DBG] pgmap v744: 260 pgs: 260 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:29 vm04 bash[20742]: cluster 2026-03-10T10:25:28.505547+0000 mgr.y (mgr.24422) 456 : cluster [DBG] pgmap v744: 260 pgs: 260 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:29 vm04 bash[20742]: audit 2026-03-10T10:25:28.673155+0000 mgr.y (mgr.24422) 457 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:29 vm04 bash[20742]: audit 2026-03-10T10:25:28.673155+0000 mgr.y (mgr.24422) 457 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:29 vm04 bash[20742]: cluster 2026-03-10T10:25:28.795928+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:29 vm04 bash[20742]: cluster 2026-03-10T10:25:28.795928+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:29 vm04 bash[20742]: audit 2026-03-10T10:25:28.800389+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:29 vm04 bash[20742]: audit 2026-03-10T10:25:28.800389+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:29 vm07 bash[23367]: cluster 2026-03-10T10:25:28.505547+0000 mgr.y (mgr.24422) 456 : cluster [DBG] pgmap v744: 260 pgs: 260 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:29 vm07 bash[23367]: cluster 2026-03-10T10:25:28.505547+0000 mgr.y (mgr.24422) 456 : cluster [DBG] pgmap v744: 260 pgs: 260 active+clean; 8.3 MiB data, 941 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:29 vm07 bash[23367]: audit 2026-03-10T10:25:28.673155+0000 mgr.y (mgr.24422) 457 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:29 vm07 bash[23367]: audit 2026-03-10T10:25:28.673155+0000 mgr.y (mgr.24422) 457 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:29 vm07 bash[23367]: cluster 2026-03-10T10:25:28.795928+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T10:25:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:29 vm07 bash[23367]: cluster 2026-03-10T10:25:28.795928+0000 mon.a (mon.0) 2857 : cluster [DBG] osdmap e483: 8 total, 8 up, 8 in 2026-03-10T10:25:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:29 vm07 bash[23367]: audit 2026-03-10T10:25:28.800389+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:29 vm07 bash[23367]: audit 2026-03-10T10:25:28.800389+0000 mon.a (mon.0) 2858 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:30 vm04 bash[28289]: audit 2026-03-10T10:25:29.793318+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:30 vm04 bash[28289]: audit 2026-03-10T10:25:29.793318+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:30 vm04 bash[28289]: cluster 2026-03-10T10:25:29.803714+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:30 vm04 bash[28289]: cluster 2026-03-10T10:25:29.803714+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:30 vm04 bash[28289]: audit 2026-03-10T10:25:29.823462+0000 mon.a (mon.0) 2861 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:30 vm04 bash[28289]: audit 2026-03-10T10:25:29.823462+0000 mon.a (mon.0) 2861 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:30 vm04 bash[20742]: audit 2026-03-10T10:25:29.793318+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:30 vm04 bash[20742]: audit 2026-03-10T10:25:29.793318+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:30 vm04 bash[20742]: cluster 2026-03-10T10:25:29.803714+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:30 vm04 bash[20742]: cluster 2026-03-10T10:25:29.803714+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:30 vm04 bash[20742]: audit 2026-03-10T10:25:29.823462+0000 mon.a (mon.0) 2861 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:31.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:30 vm04 bash[20742]: audit 2026-03-10T10:25:29.823462+0000 mon.a (mon.0) 2861 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:31.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:30 vm07 bash[23367]: audit 2026-03-10T10:25:29.793318+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:31.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:30 vm07 bash[23367]: audit 2026-03-10T10:25:29.793318+0000 mon.a (mon.0) 2859 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-98","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:31.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:30 vm07 bash[23367]: cluster 2026-03-10T10:25:29.803714+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T10:25:31.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:30 vm07 bash[23367]: cluster 2026-03-10T10:25:29.803714+0000 mon.a (mon.0) 2860 : cluster [DBG] osdmap e484: 8 total, 8 up, 8 in 2026-03-10T10:25:31.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:30 vm07 bash[23367]: audit 2026-03-10T10:25:29.823462+0000 mon.a (mon.0) 2861 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:31.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:30 vm07 bash[23367]: audit 2026-03-10T10:25:29.823462+0000 mon.a (mon.0) 2861 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:32 vm04 bash[28289]: cluster 2026-03-10T10:25:30.505867+0000 mgr.y (mgr.24422) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:32 vm04 bash[28289]: cluster 2026-03-10T10:25:30.505867+0000 mgr.y (mgr.24422) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:32 vm04 bash[28289]: audit 2026-03-10T10:25:30.803997+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:32 vm04 bash[28289]: audit 2026-03-10T10:25:30.803997+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:32 vm04 bash[28289]: cluster 2026-03-10T10:25:30.815385+0000 mon.a (mon.0) 2863 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:32 vm04 bash[28289]: cluster 2026-03-10T10:25:30.815385+0000 mon.a (mon.0) 2863 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:32 vm04 bash[28289]: audit 2026-03-10T10:25:30.815984+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]: dispatch 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:32 vm04 bash[28289]: audit 2026-03-10T10:25:30.815984+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]: dispatch 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:32 vm04 bash[20742]: cluster 2026-03-10T10:25:30.505867+0000 mgr.y (mgr.24422) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:32 vm04 bash[20742]: cluster 2026-03-10T10:25:30.505867+0000 mgr.y (mgr.24422) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:32 vm04 bash[20742]: audit 2026-03-10T10:25:30.803997+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:32 vm04 bash[20742]: audit 2026-03-10T10:25:30.803997+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:32 vm04 bash[20742]: cluster 2026-03-10T10:25:30.815385+0000 mon.a (mon.0) 2863 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:32 vm04 bash[20742]: cluster 2026-03-10T10:25:30.815385+0000 mon.a (mon.0) 2863 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:32 vm04 bash[20742]: audit 2026-03-10T10:25:30.815984+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]: dispatch 2026-03-10T10:25:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:32 vm04 bash[20742]: audit 2026-03-10T10:25:30.815984+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]: dispatch 2026-03-10T10:25:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:32 vm07 bash[23367]: cluster 2026-03-10T10:25:30.505867+0000 mgr.y (mgr.24422) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:32 vm07 bash[23367]: cluster 2026-03-10T10:25:30.505867+0000 mgr.y (mgr.24422) 458 : cluster [DBG] pgmap v747: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:25:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:32 vm07 bash[23367]: audit 2026-03-10T10:25:30.803997+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:32 vm07 bash[23367]: audit 2026-03-10T10:25:30.803997+0000 mon.a (mon.0) 2862 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:32 vm07 bash[23367]: cluster 2026-03-10T10:25:30.815385+0000 mon.a (mon.0) 2863 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T10:25:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:32 vm07 bash[23367]: cluster 2026-03-10T10:25:30.815385+0000 mon.a (mon.0) 2863 : cluster [DBG] osdmap e485: 8 total, 8 up, 8 in 2026-03-10T10:25:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:32 vm07 bash[23367]: audit 2026-03-10T10:25:30.815984+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]: dispatch 2026-03-10T10:25:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:32 vm07 bash[23367]: audit 2026-03-10T10:25:30.815984+0000 mon.a (mon.0) 2864 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]: dispatch 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: cluster 2026-03-10T10:25:31.803909+0000 mon.a (mon.0) 2865 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: cluster 2026-03-10T10:25:31.803909+0000 mon.a (mon.0) 2865 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: audit 2026-03-10T10:25:32.042908+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]': finished 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: audit 2026-03-10T10:25:32.042908+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]': finished 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: cluster 2026-03-10T10:25:32.049267+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: cluster 2026-03-10T10:25:32.049267+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: audit 2026-03-10T10:25:32.052080+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: audit 2026-03-10T10:25:32.052080+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: audit 2026-03-10T10:25:33.045941+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: audit 2026-03-10T10:25:33.045941+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: cluster 2026-03-10T10:25:33.049719+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: cluster 2026-03-10T10:25:33.049719+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: audit 2026-03-10T10:25:33.050204+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:33 vm04 bash[28289]: audit 2026-03-10T10:25:33.050204+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: cluster 2026-03-10T10:25:31.803909+0000 mon.a (mon.0) 2865 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: cluster 2026-03-10T10:25:31.803909+0000 mon.a (mon.0) 2865 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: audit 2026-03-10T10:25:32.042908+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]': finished 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: audit 2026-03-10T10:25:32.042908+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]': finished 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: cluster 2026-03-10T10:25:32.049267+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: cluster 2026-03-10T10:25:32.049267+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: audit 2026-03-10T10:25:32.052080+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: audit 2026-03-10T10:25:32.052080+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: audit 2026-03-10T10:25:33.045941+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: audit 2026-03-10T10:25:33.045941+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: cluster 2026-03-10T10:25:33.049719+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: cluster 2026-03-10T10:25:33.049719+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: audit 2026-03-10T10:25:33.050204+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:25:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:33 vm04 bash[20742]: audit 2026-03-10T10:25:33.050204+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:25:33.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:25:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:25:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: cluster 2026-03-10T10:25:31.803909+0000 mon.a (mon.0) 2865 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: cluster 2026-03-10T10:25:31.803909+0000 mon.a (mon.0) 2865 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: audit 2026-03-10T10:25:32.042908+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]': finished 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: audit 2026-03-10T10:25:32.042908+0000 mon.a (mon.0) 2866 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-98", "mode": "writeback"}]': finished 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: cluster 2026-03-10T10:25:32.049267+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: cluster 2026-03-10T10:25:32.049267+0000 mon.a (mon.0) 2867 : cluster [DBG] osdmap e486: 8 total, 8 up, 8 in 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: audit 2026-03-10T10:25:32.052080+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: audit 2026-03-10T10:25:32.052080+0000 mon.a (mon.0) 2868 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: audit 2026-03-10T10:25:33.045941+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: audit 2026-03-10T10:25:33.045941+0000 mon.a (mon.0) 2869 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: cluster 2026-03-10T10:25:33.049719+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: cluster 2026-03-10T10:25:33.049719+0000 mon.a (mon.0) 2870 : cluster [DBG] osdmap e487: 8 total, 8 up, 8 in 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: audit 2026-03-10T10:25:33.050204+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:25:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:33 vm07 bash[23367]: audit 2026-03-10T10:25:33.050204+0000 mon.a (mon.0) 2871 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:34 vm04 bash[28289]: cluster 2026-03-10T10:25:32.506187+0000 mgr.y (mgr.24422) 459 : cluster [DBG] pgmap v750: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:34 vm04 bash[28289]: cluster 2026-03-10T10:25:32.506187+0000 mgr.y (mgr.24422) 459 : cluster [DBG] pgmap v750: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:34 vm04 bash[28289]: cluster 2026-03-10T10:25:34.046199+0000 mon.a (mon.0) 2872 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:34 vm04 bash[28289]: cluster 2026-03-10T10:25:34.046199+0000 mon.a (mon.0) 2872 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:34 vm04 bash[28289]: audit 2026-03-10T10:25:34.049529+0000 mon.a (mon.0) 2873 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:34 vm04 bash[28289]: audit 2026-03-10T10:25:34.049529+0000 mon.a (mon.0) 2873 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:34 vm04 bash[28289]: cluster 2026-03-10T10:25:34.052410+0000 mon.a (mon.0) 2874 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:34 vm04 bash[28289]: cluster 2026-03-10T10:25:34.052410+0000 mon.a (mon.0) 2874 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:34 vm04 bash[28289]: audit 2026-03-10T10:25:34.052722+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:34 vm04 bash[28289]: audit 2026-03-10T10:25:34.052722+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:34 vm04 bash[20742]: cluster 2026-03-10T10:25:32.506187+0000 mgr.y (mgr.24422) 459 : cluster [DBG] pgmap v750: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:34 vm04 bash[20742]: cluster 2026-03-10T10:25:32.506187+0000 mgr.y (mgr.24422) 459 : cluster [DBG] pgmap v750: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:34 vm04 bash[20742]: cluster 2026-03-10T10:25:34.046199+0000 mon.a (mon.0) 2872 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:34 vm04 bash[20742]: cluster 2026-03-10T10:25:34.046199+0000 mon.a (mon.0) 2872 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:34 vm04 bash[20742]: audit 2026-03-10T10:25:34.049529+0000 mon.a (mon.0) 2873 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:34 vm04 bash[20742]: audit 2026-03-10T10:25:34.049529+0000 mon.a (mon.0) 2873 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:34 vm04 bash[20742]: cluster 2026-03-10T10:25:34.052410+0000 mon.a (mon.0) 2874 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:34 vm04 bash[20742]: cluster 2026-03-10T10:25:34.052410+0000 mon.a (mon.0) 2874 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:34 vm04 bash[20742]: audit 2026-03-10T10:25:34.052722+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T10:25:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:34 vm04 bash[20742]: audit 2026-03-10T10:25:34.052722+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T10:25:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:34 vm07 bash[23367]: cluster 2026-03-10T10:25:32.506187+0000 mgr.y (mgr.24422) 459 : cluster [DBG] pgmap v750: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:34 vm07 bash[23367]: cluster 2026-03-10T10:25:32.506187+0000 mgr.y (mgr.24422) 459 : cluster [DBG] pgmap v750: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 949 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:34 vm07 bash[23367]: cluster 2026-03-10T10:25:34.046199+0000 mon.a (mon.0) 2872 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:34 vm07 bash[23367]: cluster 2026-03-10T10:25:34.046199+0000 mon.a (mon.0) 2872 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:34 vm07 bash[23367]: audit 2026-03-10T10:25:34.049529+0000 mon.a (mon.0) 2873 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:25:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:34 vm07 bash[23367]: audit 2026-03-10T10:25:34.049529+0000 mon.a (mon.0) 2873 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:25:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:34 vm07 bash[23367]: cluster 2026-03-10T10:25:34.052410+0000 mon.a (mon.0) 2874 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-10T10:25:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:34 vm07 bash[23367]: cluster 2026-03-10T10:25:34.052410+0000 mon.a (mon.0) 2874 : cluster [DBG] osdmap e488: 8 total, 8 up, 8 in 2026-03-10T10:25:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:34 vm07 bash[23367]: audit 2026-03-10T10:25:34.052722+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T10:25:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:34 vm07 bash[23367]: audit 2026-03-10T10:25:34.052722+0000 mon.a (mon.0) 2875 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]: dispatch 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:36 vm04 bash[28289]: cluster 2026-03-10T10:25:34.506537+0000 mgr.y (mgr.24422) 460 : cluster [DBG] pgmap v753: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:36 vm04 bash[28289]: cluster 2026-03-10T10:25:34.506537+0000 mgr.y (mgr.24422) 460 : cluster [DBG] pgmap v753: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:36 vm04 bash[28289]: audit 2026-03-10T10:25:35.052877+0000 mon.a (mon.0) 2876 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:36 vm04 bash[28289]: audit 2026-03-10T10:25:35.052877+0000 mon.a (mon.0) 2876 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:36 vm04 bash[28289]: cluster 2026-03-10T10:25:35.056508+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:36 vm04 bash[28289]: cluster 2026-03-10T10:25:35.056508+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:36 vm04 bash[28289]: audit 2026-03-10T10:25:35.056888+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:36 vm04 bash[28289]: audit 2026-03-10T10:25:35.056888+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:36 vm04 bash[20742]: cluster 2026-03-10T10:25:34.506537+0000 mgr.y (mgr.24422) 460 : cluster [DBG] pgmap v753: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:36 vm04 bash[20742]: cluster 2026-03-10T10:25:34.506537+0000 mgr.y (mgr.24422) 460 : cluster [DBG] pgmap v753: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:36 vm04 bash[20742]: audit 2026-03-10T10:25:35.052877+0000 mon.a (mon.0) 2876 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:36 vm04 bash[20742]: audit 2026-03-10T10:25:35.052877+0000 mon.a (mon.0) 2876 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:36 vm04 bash[20742]: cluster 2026-03-10T10:25:35.056508+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:36 vm04 bash[20742]: cluster 2026-03-10T10:25:35.056508+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:36 vm04 bash[20742]: audit 2026-03-10T10:25:35.056888+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:36 vm04 bash[20742]: audit 2026-03-10T10:25:35.056888+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:36 vm07 bash[23367]: cluster 2026-03-10T10:25:34.506537+0000 mgr.y (mgr.24422) 460 : cluster [DBG] pgmap v753: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:36 vm07 bash[23367]: cluster 2026-03-10T10:25:34.506537+0000 mgr.y (mgr.24422) 460 : cluster [DBG] pgmap v753: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:36 vm07 bash[23367]: audit 2026-03-10T10:25:35.052877+0000 mon.a (mon.0) 2876 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T10:25:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:36 vm07 bash[23367]: audit 2026-03-10T10:25:35.052877+0000 mon.a (mon.0) 2876 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_count","val": "1"}]': finished 2026-03-10T10:25:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:36 vm07 bash[23367]: cluster 2026-03-10T10:25:35.056508+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-10T10:25:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:36 vm07 bash[23367]: cluster 2026-03-10T10:25:35.056508+0000 mon.a (mon.0) 2877 : cluster [DBG] osdmap e489: 8 total, 8 up, 8 in 2026-03-10T10:25:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:36 vm07 bash[23367]: audit 2026-03-10T10:25:35.056888+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:36 vm07 bash[23367]: audit 2026-03-10T10:25:35.056888+0000 mon.a (mon.0) 2878 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:37 vm04 bash[28289]: audit 2026-03-10T10:25:36.056074+0000 mon.a (mon.0) 2879 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:37 vm04 bash[28289]: audit 2026-03-10T10:25:36.056074+0000 mon.a (mon.0) 2879 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:37 vm04 bash[28289]: cluster 2026-03-10T10:25:36.060647+0000 mon.a (mon.0) 2880 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:37 vm04 bash[28289]: cluster 2026-03-10T10:25:36.060647+0000 mon.a (mon.0) 2880 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:37 vm04 bash[28289]: audit 2026-03-10T10:25:36.061363+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:37 vm04 bash[28289]: audit 2026-03-10T10:25:36.061363+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:37 vm04 bash[28289]: audit 2026-03-10T10:25:37.059521+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:37 vm04 bash[28289]: audit 2026-03-10T10:25:37.059521+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:37 vm04 bash[28289]: cluster 2026-03-10T10:25:37.062465+0000 mon.a (mon.0) 2883 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:37 vm04 bash[28289]: cluster 2026-03-10T10:25:37.062465+0000 mon.a (mon.0) 2883 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:37 vm04 bash[20742]: audit 2026-03-10T10:25:36.056074+0000 mon.a (mon.0) 2879 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:37 vm04 bash[20742]: audit 2026-03-10T10:25:36.056074+0000 mon.a (mon.0) 2879 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:37 vm04 bash[20742]: cluster 2026-03-10T10:25:36.060647+0000 mon.a (mon.0) 2880 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:37 vm04 bash[20742]: cluster 2026-03-10T10:25:36.060647+0000 mon.a (mon.0) 2880 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:37 vm04 bash[20742]: audit 2026-03-10T10:25:36.061363+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:37 vm04 bash[20742]: audit 2026-03-10T10:25:36.061363+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:37 vm04 bash[20742]: audit 2026-03-10T10:25:37.059521+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:37 vm04 bash[20742]: audit 2026-03-10T10:25:37.059521+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:37 vm04 bash[20742]: cluster 2026-03-10T10:25:37.062465+0000 mon.a (mon.0) 2883 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T10:25:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:37 vm04 bash[20742]: cluster 2026-03-10T10:25:37.062465+0000 mon.a (mon.0) 2883 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T10:25:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:37 vm07 bash[23367]: audit 2026-03-10T10:25:36.056074+0000 mon.a (mon.0) 2879 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:37 vm07 bash[23367]: audit 2026-03-10T10:25:36.056074+0000 mon.a (mon.0) 2879 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:37 vm07 bash[23367]: cluster 2026-03-10T10:25:36.060647+0000 mon.a (mon.0) 2880 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-10T10:25:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:37 vm07 bash[23367]: cluster 2026-03-10T10:25:36.060647+0000 mon.a (mon.0) 2880 : cluster [DBG] osdmap e490: 8 total, 8 up, 8 in 2026-03-10T10:25:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:37 vm07 bash[23367]: audit 2026-03-10T10:25:36.061363+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T10:25:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:37 vm07 bash[23367]: audit 2026-03-10T10:25:36.061363+0000 mon.a (mon.0) 2881 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]: dispatch 2026-03-10T10:25:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:37 vm07 bash[23367]: audit 2026-03-10T10:25:37.059521+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T10:25:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:37 vm07 bash[23367]: audit 2026-03-10T10:25:37.059521+0000 mon.a (mon.0) 2882 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-98","var": "target_max_objects","val": "250"}]': finished 2026-03-10T10:25:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:37 vm07 bash[23367]: cluster 2026-03-10T10:25:37.062465+0000 mon.a (mon.0) 2883 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T10:25:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:37 vm07 bash[23367]: cluster 2026-03-10T10:25:37.062465+0000 mon.a (mon.0) 2883 : cluster [DBG] osdmap e491: 8 total, 8 up, 8 in 2026-03-10T10:25:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:38 vm04 bash[28289]: cluster 2026-03-10T10:25:36.506872+0000 mgr.y (mgr.24422) 461 : cluster [DBG] pgmap v756: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:38 vm04 bash[28289]: cluster 2026-03-10T10:25:36.506872+0000 mgr.y (mgr.24422) 461 : cluster [DBG] pgmap v756: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:38 vm04 bash[28289]: audit 2026-03-10T10:25:37.113301+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:38 vm04 bash[28289]: audit 2026-03-10T10:25:37.113301+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:38.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:38 vm04 bash[20742]: cluster 2026-03-10T10:25:36.506872+0000 mgr.y (mgr.24422) 461 : cluster [DBG] pgmap v756: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:38.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:38 vm04 bash[20742]: cluster 2026-03-10T10:25:36.506872+0000 mgr.y (mgr.24422) 461 : cluster [DBG] pgmap v756: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:38.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:38 vm04 bash[20742]: audit 2026-03-10T10:25:37.113301+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:38.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:38 vm04 bash[20742]: audit 2026-03-10T10:25:37.113301+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:38.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:38 vm07 bash[23367]: cluster 2026-03-10T10:25:36.506872+0000 mgr.y (mgr.24422) 461 : cluster [DBG] pgmap v756: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:38.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:38 vm07 bash[23367]: cluster 2026-03-10T10:25:36.506872+0000 mgr.y (mgr.24422) 461 : cluster [DBG] pgmap v756: 292 pgs: 292 active+clean; 8.3 MiB data, 953 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:25:38.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:38 vm07 bash[23367]: audit 2026-03-10T10:25:37.113301+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:38.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:38 vm07 bash[23367]: audit 2026-03-10T10:25:37.113301+0000 mon.a (mon.0) 2884 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:39.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:25:38 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:39 vm04 bash[28289]: audit 2026-03-10T10:25:38.104865+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:39 vm04 bash[28289]: audit 2026-03-10T10:25:38.104865+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:39 vm04 bash[28289]: cluster 2026-03-10T10:25:38.107714+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:39 vm04 bash[28289]: cluster 2026-03-10T10:25:38.107714+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:39 vm04 bash[28289]: audit 2026-03-10T10:25:38.108413+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:39 vm04 bash[28289]: audit 2026-03-10T10:25:38.108413+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:39 vm04 bash[20742]: audit 2026-03-10T10:25:38.104865+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:39 vm04 bash[20742]: audit 2026-03-10T10:25:38.104865+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:39 vm04 bash[20742]: cluster 2026-03-10T10:25:38.107714+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:39 vm04 bash[20742]: cluster 2026-03-10T10:25:38.107714+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:39 vm04 bash[20742]: audit 2026-03-10T10:25:38.108413+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:39.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:39 vm04 bash[20742]: audit 2026-03-10T10:25:38.108413+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:39.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:39 vm07 bash[23367]: audit 2026-03-10T10:25:38.104865+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:39.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:39 vm07 bash[23367]: audit 2026-03-10T10:25:38.104865+0000 mon.a (mon.0) 2885 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:39.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:39 vm07 bash[23367]: cluster 2026-03-10T10:25:38.107714+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T10:25:39.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:39 vm07 bash[23367]: cluster 2026-03-10T10:25:38.107714+0000 mon.a (mon.0) 2886 : cluster [DBG] osdmap e492: 8 total, 8 up, 8 in 2026-03-10T10:25:39.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:39 vm07 bash[23367]: audit 2026-03-10T10:25:38.108413+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:39.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:39 vm07 bash[23367]: audit 2026-03-10T10:25:38.108413+0000 mon.a (mon.0) 2887 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]: dispatch 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:40 vm04 bash[28289]: cluster 2026-03-10T10:25:38.507480+0000 mgr.y (mgr.24422) 462 : cluster [DBG] pgmap v759: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:40 vm04 bash[28289]: cluster 2026-03-10T10:25:38.507480+0000 mgr.y (mgr.24422) 462 : cluster [DBG] pgmap v759: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:40 vm04 bash[28289]: audit 2026-03-10T10:25:38.683902+0000 mgr.y (mgr.24422) 463 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:40 vm04 bash[28289]: audit 2026-03-10T10:25:38.683902+0000 mgr.y (mgr.24422) 463 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:40 vm04 bash[28289]: audit 2026-03-10T10:25:39.108732+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:40 vm04 bash[28289]: audit 2026-03-10T10:25:39.108732+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:40 vm04 bash[28289]: cluster 2026-03-10T10:25:39.112547+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:40 vm04 bash[28289]: cluster 2026-03-10T10:25:39.112547+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:40 vm04 bash[20742]: cluster 2026-03-10T10:25:38.507480+0000 mgr.y (mgr.24422) 462 : cluster [DBG] pgmap v759: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:40 vm04 bash[20742]: cluster 2026-03-10T10:25:38.507480+0000 mgr.y (mgr.24422) 462 : cluster [DBG] pgmap v759: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:40 vm04 bash[20742]: audit 2026-03-10T10:25:38.683902+0000 mgr.y (mgr.24422) 463 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:40 vm04 bash[20742]: audit 2026-03-10T10:25:38.683902+0000 mgr.y (mgr.24422) 463 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:40 vm04 bash[20742]: audit 2026-03-10T10:25:39.108732+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:40 vm04 bash[20742]: audit 2026-03-10T10:25:39.108732+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:40 vm04 bash[20742]: cluster 2026-03-10T10:25:39.112547+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T10:25:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:40 vm04 bash[20742]: cluster 2026-03-10T10:25:39.112547+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T10:25:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:40 vm07 bash[23367]: cluster 2026-03-10T10:25:38.507480+0000 mgr.y (mgr.24422) 462 : cluster [DBG] pgmap v759: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:40 vm07 bash[23367]: cluster 2026-03-10T10:25:38.507480+0000 mgr.y (mgr.24422) 462 : cluster [DBG] pgmap v759: 292 pgs: 292 active+clean; 8.3 MiB data, 954 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:40 vm07 bash[23367]: audit 2026-03-10T10:25:38.683902+0000 mgr.y (mgr.24422) 463 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:40 vm07 bash[23367]: audit 2026-03-10T10:25:38.683902+0000 mgr.y (mgr.24422) 463 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:40 vm07 bash[23367]: audit 2026-03-10T10:25:39.108732+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:40 vm07 bash[23367]: audit 2026-03-10T10:25:39.108732+0000 mon.a (mon.0) 2888 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-98"}]': finished 2026-03-10T10:25:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:40 vm07 bash[23367]: cluster 2026-03-10T10:25:39.112547+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T10:25:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:40 vm07 bash[23367]: cluster 2026-03-10T10:25:39.112547+0000 mon.a (mon.0) 2889 : cluster [DBG] osdmap e493: 8 total, 8 up, 8 in 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:41 vm04 bash[28289]: cluster 2026-03-10T10:25:40.127087+0000 mon.a (mon.0) 2890 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:41 vm04 bash[28289]: cluster 2026-03-10T10:25:40.127087+0000 mon.a (mon.0) 2890 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:41 vm04 bash[28289]: cluster 2026-03-10T10:25:41.130356+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:41 vm04 bash[28289]: cluster 2026-03-10T10:25:41.130356+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:41 vm04 bash[28289]: audit 2026-03-10T10:25:41.131934+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:41 vm04 bash[28289]: audit 2026-03-10T10:25:41.131934+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:41 vm04 bash[20742]: cluster 2026-03-10T10:25:40.127087+0000 mon.a (mon.0) 2890 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:41 vm04 bash[20742]: cluster 2026-03-10T10:25:40.127087+0000 mon.a (mon.0) 2890 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:41 vm04 bash[20742]: cluster 2026-03-10T10:25:41.130356+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:41 vm04 bash[20742]: cluster 2026-03-10T10:25:41.130356+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:41 vm04 bash[20742]: audit 2026-03-10T10:25:41.131934+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:41.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:41 vm04 bash[20742]: audit 2026-03-10T10:25:41.131934+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:41.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:41 vm07 bash[23367]: cluster 2026-03-10T10:25:40.127087+0000 mon.a (mon.0) 2890 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T10:25:41.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:41 vm07 bash[23367]: cluster 2026-03-10T10:25:40.127087+0000 mon.a (mon.0) 2890 : cluster [DBG] osdmap e494: 8 total, 8 up, 8 in 2026-03-10T10:25:41.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:41 vm07 bash[23367]: cluster 2026-03-10T10:25:41.130356+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T10:25:41.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:41 vm07 bash[23367]: cluster 2026-03-10T10:25:41.130356+0000 mon.a (mon.0) 2891 : cluster [DBG] osdmap e495: 8 total, 8 up, 8 in 2026-03-10T10:25:41.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:41 vm07 bash[23367]: audit 2026-03-10T10:25:41.131934+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:41.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:41 vm07 bash[23367]: audit 2026-03-10T10:25:41.131934+0000 mon.a (mon.0) 2892 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:42 vm04 bash[28289]: cluster 2026-03-10T10:25:40.507821+0000 mgr.y (mgr.24422) 464 : cluster [DBG] pgmap v762: 260 pgs: 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:42 vm04 bash[28289]: cluster 2026-03-10T10:25:40.507821+0000 mgr.y (mgr.24422) 464 : cluster [DBG] pgmap v762: 260 pgs: 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:42 vm04 bash[28289]: audit 2026-03-10T10:25:42.130824+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:42 vm04 bash[28289]: audit 2026-03-10T10:25:42.130824+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:42 vm04 bash[28289]: cluster 2026-03-10T10:25:42.134156+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:42 vm04 bash[28289]: cluster 2026-03-10T10:25:42.134156+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:42 vm04 bash[28289]: audit 2026-03-10T10:25:42.138016+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:42 vm04 bash[28289]: audit 2026-03-10T10:25:42.138016+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:42 vm04 bash[20742]: cluster 2026-03-10T10:25:40.507821+0000 mgr.y (mgr.24422) 464 : cluster [DBG] pgmap v762: 260 pgs: 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:42 vm04 bash[20742]: cluster 2026-03-10T10:25:40.507821+0000 mgr.y (mgr.24422) 464 : cluster [DBG] pgmap v762: 260 pgs: 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:42 vm04 bash[20742]: audit 2026-03-10T10:25:42.130824+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:42 vm04 bash[20742]: audit 2026-03-10T10:25:42.130824+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:42 vm04 bash[20742]: cluster 2026-03-10T10:25:42.134156+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:42 vm04 bash[20742]: cluster 2026-03-10T10:25:42.134156+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:42 vm04 bash[20742]: audit 2026-03-10T10:25:42.138016+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:42 vm04 bash[20742]: audit 2026-03-10T10:25:42.138016+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:42 vm07 bash[23367]: cluster 2026-03-10T10:25:40.507821+0000 mgr.y (mgr.24422) 464 : cluster [DBG] pgmap v762: 260 pgs: 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:42 vm07 bash[23367]: cluster 2026-03-10T10:25:40.507821+0000 mgr.y (mgr.24422) 464 : cluster [DBG] pgmap v762: 260 pgs: 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:42 vm07 bash[23367]: audit 2026-03-10T10:25:42.130824+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:42 vm07 bash[23367]: audit 2026-03-10T10:25:42.130824+0000 mon.a (mon.0) 2893 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-100","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:42 vm07 bash[23367]: cluster 2026-03-10T10:25:42.134156+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T10:25:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:42 vm07 bash[23367]: cluster 2026-03-10T10:25:42.134156+0000 mon.a (mon.0) 2894 : cluster [DBG] osdmap e496: 8 total, 8 up, 8 in 2026-03-10T10:25:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:42 vm07 bash[23367]: audit 2026-03-10T10:25:42.138016+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:42 vm07 bash[23367]: audit 2026-03-10T10:25:42.138016+0000 mon.a (mon.0) 2895 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:43 vm04 bash[28289]: audit 2026-03-10T10:25:43.045394+0000 mon.a (mon.0) 2896 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:43 vm04 bash[28289]: audit 2026-03-10T10:25:43.045394+0000 mon.a (mon.0) 2896 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:43 vm04 bash[28289]: audit 2026-03-10T10:25:43.153237+0000 mon.a (mon.0) 2897 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:43 vm04 bash[28289]: audit 2026-03-10T10:25:43.153237+0000 mon.a (mon.0) 2897 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:43 vm04 bash[28289]: cluster 2026-03-10T10:25:43.156672+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:43 vm04 bash[28289]: cluster 2026-03-10T10:25:43.156672+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:43 vm04 bash[28289]: audit 2026-03-10T10:25:43.157332+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:43 vm04 bash[28289]: audit 2026-03-10T10:25:43.157332+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:25:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:25:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:43 vm04 bash[20742]: audit 2026-03-10T10:25:43.045394+0000 mon.a (mon.0) 2896 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:43 vm04 bash[20742]: audit 2026-03-10T10:25:43.045394+0000 mon.a (mon.0) 2896 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:43 vm04 bash[20742]: audit 2026-03-10T10:25:43.153237+0000 mon.a (mon.0) 2897 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:43 vm04 bash[20742]: audit 2026-03-10T10:25:43.153237+0000 mon.a (mon.0) 2897 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:43 vm04 bash[20742]: cluster 2026-03-10T10:25:43.156672+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:43 vm04 bash[20742]: cluster 2026-03-10T10:25:43.156672+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:43 vm04 bash[20742]: audit 2026-03-10T10:25:43.157332+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:43 vm04 bash[20742]: audit 2026-03-10T10:25:43.157332+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:43 vm07 bash[23367]: audit 2026-03-10T10:25:43.045394+0000 mon.a (mon.0) 2896 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:43 vm07 bash[23367]: audit 2026-03-10T10:25:43.045394+0000 mon.a (mon.0) 2896 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:25:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:43 vm07 bash[23367]: audit 2026-03-10T10:25:43.153237+0000 mon.a (mon.0) 2897 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:43 vm07 bash[23367]: audit 2026-03-10T10:25:43.153237+0000 mon.a (mon.0) 2897 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:25:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:43 vm07 bash[23367]: cluster 2026-03-10T10:25:43.156672+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T10:25:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:43 vm07 bash[23367]: cluster 2026-03-10T10:25:43.156672+0000 mon.a (mon.0) 2898 : cluster [DBG] osdmap e497: 8 total, 8 up, 8 in 2026-03-10T10:25:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:43 vm07 bash[23367]: audit 2026-03-10T10:25:43.157332+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:43.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:43 vm07 bash[23367]: audit 2026-03-10T10:25:43.157332+0000 mon.a (mon.0) 2899 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:44 vm07 bash[23367]: cluster 2026-03-10T10:25:42.508249+0000 mgr.y (mgr.24422) 465 : cluster [DBG] pgmap v765: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:44 vm07 bash[23367]: cluster 2026-03-10T10:25:42.508249+0000 mgr.y (mgr.24422) 465 : cluster [DBG] pgmap v765: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:44 vm04 bash[28289]: cluster 2026-03-10T10:25:42.508249+0000 mgr.y (mgr.24422) 465 : cluster [DBG] pgmap v765: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:44 vm04 bash[28289]: cluster 2026-03-10T10:25:42.508249+0000 mgr.y (mgr.24422) 465 : cluster [DBG] pgmap v765: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:44 vm04 bash[20742]: cluster 2026-03-10T10:25:42.508249+0000 mgr.y (mgr.24422) 465 : cluster [DBG] pgmap v765: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:44 vm04 bash[20742]: cluster 2026-03-10T10:25:42.508249+0000 mgr.y (mgr.24422) 465 : cluster [DBG] pgmap v765: 292 pgs: 32 unknown, 260 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: audit 2026-03-10T10:25:44.218113+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: audit 2026-03-10T10:25:44.218113+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: cluster 2026-03-10T10:25:44.221108+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: cluster 2026-03-10T10:25:44.221108+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: audit 2026-03-10T10:25:44.221863+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]: dispatch 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: audit 2026-03-10T10:25:44.221863+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]: dispatch 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: cluster 2026-03-10T10:25:45.218201+0000 mon.a (mon.0) 2903 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: cluster 2026-03-10T10:25:45.218201+0000 mon.a (mon.0) 2903 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: audit 2026-03-10T10:25:45.222091+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]': finished 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: audit 2026-03-10T10:25:45.222091+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]': finished 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: cluster 2026-03-10T10:25:45.225588+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T10:25:45.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:45 vm07 bash[23367]: cluster 2026-03-10T10:25:45.225588+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: audit 2026-03-10T10:25:44.218113+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: audit 2026-03-10T10:25:44.218113+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: cluster 2026-03-10T10:25:44.221108+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: cluster 2026-03-10T10:25:44.221108+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: audit 2026-03-10T10:25:44.221863+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]: dispatch 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: audit 2026-03-10T10:25:44.221863+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]: dispatch 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: cluster 2026-03-10T10:25:45.218201+0000 mon.a (mon.0) 2903 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: cluster 2026-03-10T10:25:45.218201+0000 mon.a (mon.0) 2903 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: audit 2026-03-10T10:25:45.222091+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]': finished 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: audit 2026-03-10T10:25:45.222091+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]': finished 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: cluster 2026-03-10T10:25:45.225588+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:45 vm04 bash[28289]: cluster 2026-03-10T10:25:45.225588+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: audit 2026-03-10T10:25:44.218113+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: audit 2026-03-10T10:25:44.218113+0000 mon.a (mon.0) 2900 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-6", "overlaypool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: cluster 2026-03-10T10:25:44.221108+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: cluster 2026-03-10T10:25:44.221108+0000 mon.a (mon.0) 2901 : cluster [DBG] osdmap e498: 8 total, 8 up, 8 in 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: audit 2026-03-10T10:25:44.221863+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]: dispatch 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: audit 2026-03-10T10:25:44.221863+0000 mon.a (mon.0) 2902 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]: dispatch 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: cluster 2026-03-10T10:25:45.218201+0000 mon.a (mon.0) 2903 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: cluster 2026-03-10T10:25:45.218201+0000 mon.a (mon.0) 2903 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: audit 2026-03-10T10:25:45.222091+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]': finished 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: audit 2026-03-10T10:25:45.222091+0000 mon.a (mon.0) 2904 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-100", "mode": "writeback"}]': finished 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: cluster 2026-03-10T10:25:45.225588+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T10:25:45.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:45 vm04 bash[20742]: cluster 2026-03-10T10:25:45.225588+0000 mon.a (mon.0) 2905 : cluster [DBG] osdmap e499: 8 total, 8 up, 8 in 2026-03-10T10:25:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:46 vm07 bash[23367]: cluster 2026-03-10T10:25:44.508726+0000 mgr.y (mgr.24422) 466 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:46 vm07 bash[23367]: cluster 2026-03-10T10:25:44.508726+0000 mgr.y (mgr.24422) 466 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:46 vm07 bash[23367]: audit 2026-03-10T10:25:45.226988+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:25:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:46 vm07 bash[23367]: audit 2026-03-10T10:25:45.226988+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:25:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:46 vm07 bash[23367]: audit 2026-03-10T10:25:46.225563+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:25:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:46 vm07 bash[23367]: audit 2026-03-10T10:25:46.225563+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:25:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:46 vm07 bash[23367]: cluster 2026-03-10T10:25:46.233943+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T10:25:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:46 vm07 bash[23367]: cluster 2026-03-10T10:25:46.233943+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T10:25:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:46 vm07 bash[23367]: audit 2026-03-10T10:25:46.234685+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:46 vm07 bash[23367]: audit 2026-03-10T10:25:46.234685+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:46 vm04 bash[28289]: cluster 2026-03-10T10:25:44.508726+0000 mgr.y (mgr.24422) 466 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:46 vm04 bash[28289]: cluster 2026-03-10T10:25:44.508726+0000 mgr.y (mgr.24422) 466 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:46 vm04 bash[28289]: audit 2026-03-10T10:25:45.226988+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:46 vm04 bash[28289]: audit 2026-03-10T10:25:45.226988+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:46 vm04 bash[28289]: audit 2026-03-10T10:25:46.225563+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:46 vm04 bash[28289]: audit 2026-03-10T10:25:46.225563+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:46 vm04 bash[28289]: cluster 2026-03-10T10:25:46.233943+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:46 vm04 bash[28289]: cluster 2026-03-10T10:25:46.233943+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:46 vm04 bash[28289]: audit 2026-03-10T10:25:46.234685+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:46 vm04 bash[28289]: audit 2026-03-10T10:25:46.234685+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:46 vm04 bash[20742]: cluster 2026-03-10T10:25:44.508726+0000 mgr.y (mgr.24422) 466 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:46 vm04 bash[20742]: cluster 2026-03-10T10:25:44.508726+0000 mgr.y (mgr.24422) 466 : cluster [DBG] pgmap v768: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:46 vm04 bash[20742]: audit 2026-03-10T10:25:45.226988+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:46 vm04 bash[20742]: audit 2026-03-10T10:25:45.226988+0000 mon.a (mon.0) 2906 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:46 vm04 bash[20742]: audit 2026-03-10T10:25:46.225563+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:46 vm04 bash[20742]: audit 2026-03-10T10:25:46.225563+0000 mon.a (mon.0) 2907 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:46 vm04 bash[20742]: cluster 2026-03-10T10:25:46.233943+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:46 vm04 bash[20742]: cluster 2026-03-10T10:25:46.233943+0000 mon.a (mon.0) 2908 : cluster [DBG] osdmap e500: 8 total, 8 up, 8 in 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:46 vm04 bash[20742]: audit 2026-03-10T10:25:46.234685+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:46 vm04 bash[20742]: audit 2026-03-10T10:25:46.234685+0000 mon.a (mon.0) 2909 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:25:48.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:48 vm07 bash[23367]: cluster 2026-03-10T10:25:46.509084+0000 mgr.y (mgr.24422) 467 : cluster [DBG] pgmap v771: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:48.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:48 vm07 bash[23367]: cluster 2026-03-10T10:25:46.509084+0000 mgr.y (mgr.24422) 467 : cluster [DBG] pgmap v771: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:48.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:48 vm07 bash[23367]: audit 2026-03-10T10:25:47.228450+0000 mon.a (mon.0) 2910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:48.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:48 vm07 bash[23367]: audit 2026-03-10T10:25:47.228450+0000 mon.a (mon.0) 2910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:48.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:48 vm07 bash[23367]: cluster 2026-03-10T10:25:47.236128+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T10:25:48.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:48 vm07 bash[23367]: cluster 2026-03-10T10:25:47.236128+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T10:25:48.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:48 vm07 bash[23367]: audit 2026-03-10T10:25:47.237036+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:25:48.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:48 vm07 bash[23367]: audit 2026-03-10T10:25:47.237036+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? 
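The audit pairs numbered 2906-2910 show the rados_api_tests workunit configuring HitSet tracking on its cache pool; each command produces a "dispatch" entry and then a "finished" entry once mon.a commits the osdmap change. A minimal sketch of the equivalent CLI, using the pool name and client.admin identity taken from the log:

    ceph osd pool set test-rados-api-vm04-59491-100 hit_set_count 2
    ceph osd pool set test-rados-api-vm04-59491-100 hit_set_period 600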
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:25:48.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:48 vm04 bash[28289]: cluster 2026-03-10T10:25:46.509084+0000 mgr.y (mgr.24422) 467 : cluster [DBG] pgmap v771: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:48 vm04 bash[28289]: cluster 2026-03-10T10:25:46.509084+0000 mgr.y (mgr.24422) 467 : cluster [DBG] pgmap v771: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:48 vm04 bash[28289]: audit 2026-03-10T10:25:47.228450+0000 mon.a (mon.0) 2910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:48 vm04 bash[28289]: audit 2026-03-10T10:25:47.228450+0000 mon.a (mon.0) 2910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:48 vm04 bash[28289]: cluster 2026-03-10T10:25:47.236128+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:48 vm04 bash[28289]: cluster 2026-03-10T10:25:47.236128+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:48 vm04 bash[28289]: audit 2026-03-10T10:25:47.237036+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:48 vm04 bash[28289]: audit 2026-03-10T10:25:47.237036+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:48 vm04 bash[20742]: cluster 2026-03-10T10:25:46.509084+0000 mgr.y (mgr.24422) 467 : cluster [DBG] pgmap v771: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:48 vm04 bash[20742]: cluster 2026-03-10T10:25:46.509084+0000 mgr.y (mgr.24422) 467 : cluster [DBG] pgmap v771: 292 pgs: 292 active+clean; 8.3 MiB data, 972 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:48 vm04 bash[20742]: audit 2026-03-10T10:25:47.228450+0000 mon.a (mon.0) 2910 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:48 vm04 bash[20742]: audit 2026-03-10T10:25:47.228450+0000 mon.a (mon.0) 2910 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:48 vm04 bash[20742]: cluster 2026-03-10T10:25:47.236128+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:48 vm04 bash[20742]: cluster 2026-03-10T10:25:47.236128+0000 mon.a (mon.0) 2911 : cluster [DBG] osdmap e501: 8 total, 8 up, 8 in 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:48 vm04 bash[20742]: audit 2026-03-10T10:25:47.237036+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:25:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:48 vm04 bash[20742]: audit 2026-03-10T10:25:47.237036+0000 mon.a (mon.0) 2912 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:25:49.015 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:25:48 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:49 vm04 bash[28289]: cluster 2026-03-10T10:25:48.228401+0000 mon.a (mon.0) 2913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:49 vm04 bash[28289]: cluster 2026-03-10T10:25:48.228401+0000 mon.a (mon.0) 2913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:49 vm04 bash[28289]: audit 2026-03-10T10:25:48.231509+0000 mon.a (mon.0) 2914 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:49 vm04 bash[28289]: audit 2026-03-10T10:25:48.231509+0000 mon.a (mon.0) 2914 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:49 vm04 bash[28289]: cluster 2026-03-10T10:25:48.239882+0000 mon.a (mon.0) 2915 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:49 vm04 bash[28289]: cluster 2026-03-10T10:25:48.239882+0000 mon.a (mon.0) 2915 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:49 vm04 bash[28289]: audit 2026-03-10T10:25:48.242197+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:49 vm04 bash[28289]: audit 2026-03-10T10:25:48.242197+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:49 vm04 bash[20742]: cluster 2026-03-10T10:25:48.228401+0000 mon.a (mon.0) 2913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:49 vm04 bash[20742]: cluster 2026-03-10T10:25:48.228401+0000 mon.a (mon.0) 2913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:49 vm04 bash[20742]: audit 2026-03-10T10:25:48.231509+0000 mon.a (mon.0) 2914 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:49 vm04 bash[20742]: audit 2026-03-10T10:25:48.231509+0000 mon.a (mon.0) 2914 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:49 vm04 bash[20742]: cluster 2026-03-10T10:25:48.239882+0000 mon.a (mon.0) 2915 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:49 vm04 bash[20742]: cluster 2026-03-10T10:25:48.239882+0000 mon.a (mon.0) 2915 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:49 vm04 bash[20742]: audit 2026-03-10T10:25:48.242197+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-10T10:25:49.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:49 vm04 bash[20742]: audit 2026-03-10T10:25:48.242197+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-10T10:25:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:49 vm07 bash[23367]: cluster 2026-03-10T10:25:48.228401+0000 mon.a (mon.0) 2913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:49 vm07 bash[23367]: cluster 2026-03-10T10:25:48.228401+0000 mon.a (mon.0) 2913 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:25:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:49 vm07 bash[23367]: audit 2026-03-10T10:25:48.231509+0000 mon.a (mon.0) 2914 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:25:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:49 vm07 bash[23367]: audit 2026-03-10T10:25:48.231509+0000 mon.a (mon.0) 2914 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:25:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:49 vm07 bash[23367]: cluster 2026-03-10T10:25:48.239882+0000 mon.a (mon.0) 2915 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T10:25:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:49 vm07 bash[23367]: cluster 2026-03-10T10:25:48.239882+0000 mon.a (mon.0) 2915 : cluster [DBG] osdmap e502: 8 total, 8 up, 8 in 2026-03-10T10:25:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:49 vm07 bash[23367]: audit 2026-03-10T10:25:48.242197+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-10T10:25:49.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:49 vm07 bash[23367]: audit 2026-03-10T10:25:48.242197+0000 mon.a (mon.0) 2916 : audit [INF] from='client.? 
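Setting a hit_set_type is what clears the CACHE_POOL_NO_HIT_SET health warning recorded just above (entry 2913). A sketch of the two pool settings dispatched in audit entries 2912-2916 (their "finished" confirmations follow below), plus a verification step:

    ceph osd pool set test-rados-api-vm04-59491-100 hit_set_type explicit_object
    ceph osd pool set test-rados-api-vm04-59491-100 min_read_recency_for_promote 10000
    ceph health detail    # CACHE_POOL_NO_HIT_SET should no longer be listed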
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]: dispatch 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:50 vm04 bash[28289]: cluster 2026-03-10T10:25:48.509692+0000 mgr.y (mgr.24422) 468 : cluster [DBG] pgmap v774: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:50 vm04 bash[28289]: cluster 2026-03-10T10:25:48.509692+0000 mgr.y (mgr.24422) 468 : cluster [DBG] pgmap v774: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:50 vm04 bash[28289]: audit 2026-03-10T10:25:48.692116+0000 mgr.y (mgr.24422) 469 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:50 vm04 bash[28289]: audit 2026-03-10T10:25:48.692116+0000 mgr.y (mgr.24422) 469 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:50 vm04 bash[28289]: audit 2026-03-10T10:25:49.241675+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:50 vm04 bash[28289]: audit 2026-03-10T10:25:49.241675+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:50 vm04 bash[28289]: cluster 2026-03-10T10:25:49.247800+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:50 vm04 bash[28289]: cluster 2026-03-10T10:25:49.247800+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:50 vm04 bash[28289]: audit 2026-03-10T10:25:49.293860+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:50 vm04 bash[28289]: audit 2026-03-10T10:25:49.293860+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:50 vm04 bash[20742]: cluster 2026-03-10T10:25:48.509692+0000 mgr.y (mgr.24422) 468 : cluster [DBG] pgmap v774: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:50 vm04 bash[20742]: cluster 2026-03-10T10:25:48.509692+0000 mgr.y (mgr.24422) 468 : cluster [DBG] pgmap v774: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:50 vm04 bash[20742]: audit 2026-03-10T10:25:48.692116+0000 mgr.y (mgr.24422) 469 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:50 vm04 bash[20742]: audit 2026-03-10T10:25:48.692116+0000 mgr.y (mgr.24422) 469 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:50 vm04 bash[20742]: audit 2026-03-10T10:25:49.241675+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:50 vm04 bash[20742]: audit 2026-03-10T10:25:49.241675+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:50 vm04 bash[20742]: cluster 2026-03-10T10:25:49.247800+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:50 vm04 bash[20742]: cluster 2026-03-10T10:25:49.247800+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:50 vm04 bash[20742]: audit 2026-03-10T10:25:49.293860+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:50 vm04 bash[20742]: audit 2026-03-10T10:25:49.293860+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:50 vm07 bash[23367]: cluster 2026-03-10T10:25:48.509692+0000 mgr.y (mgr.24422) 468 : cluster [DBG] pgmap v774: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:50 vm07 bash[23367]: cluster 2026-03-10T10:25:48.509692+0000 mgr.y (mgr.24422) 468 : cluster [DBG] pgmap v774: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:25:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:50 vm07 bash[23367]: audit 2026-03-10T10:25:48.692116+0000 mgr.y (mgr.24422) 469 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:50 vm07 bash[23367]: audit 2026-03-10T10:25:48.692116+0000 mgr.y (mgr.24422) 469 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:25:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:50 vm07 bash[23367]: audit 2026-03-10T10:25:49.241675+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-10T10:25:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:50 vm07 bash[23367]: audit 2026-03-10T10:25:49.241675+0000 mon.a (mon.0) 2917 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-100","var": "min_read_recency_for_promote","val": "10000"}]': finished 2026-03-10T10:25:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:50 vm07 bash[23367]: cluster 2026-03-10T10:25:49.247800+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T10:25:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:50 vm07 bash[23367]: cluster 2026-03-10T10:25:49.247800+0000 mon.a (mon.0) 2918 : cluster [DBG] osdmap e503: 8 total, 8 up, 8 in 2026-03-10T10:25:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:50 vm07 bash[23367]: audit 2026-03-10T10:25:49.293860+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:50 vm07 bash[23367]: audit 2026-03-10T10:25:49.293860+0000 mon.a (mon.0) 2919 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:51 vm04 bash[28289]: audit 2026-03-10T10:25:50.263126+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:51 vm04 bash[28289]: audit 2026-03-10T10:25:50.263126+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:51 vm04 bash[28289]: cluster 2026-03-10T10:25:50.267084+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:51 vm04 bash[28289]: cluster 2026-03-10T10:25:50.267084+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:51 vm04 bash[28289]: audit 2026-03-10T10:25:50.274725+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:51 vm04 bash[28289]: audit 2026-03-10T10:25:50.274725+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:51 vm04 bash[28289]: audit 2026-03-10T10:25:51.266373+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:51 vm04 bash[28289]: audit 2026-03-10T10:25:51.266373+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:51 vm04 bash[28289]: cluster 2026-03-10T10:25:51.273435+0000 mon.a (mon.0) 2924 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:51 vm04 bash[28289]: cluster 2026-03-10T10:25:51.273435+0000 mon.a (mon.0) 2924 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:51 vm04 bash[20742]: audit 2026-03-10T10:25:50.263126+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:51 vm04 bash[20742]: audit 2026-03-10T10:25:50.263126+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:51 vm04 bash[20742]: cluster 2026-03-10T10:25:50.267084+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:51 vm04 bash[20742]: cluster 2026-03-10T10:25:50.267084+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:51 vm04 bash[20742]: audit 2026-03-10T10:25:50.274725+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:51 vm04 bash[20742]: audit 2026-03-10T10:25:50.274725+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:51 vm04 bash[20742]: audit 2026-03-10T10:25:51.266373+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:51 vm04 bash[20742]: audit 2026-03-10T10:25:51.266373+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:51 vm04 bash[20742]: cluster 2026-03-10T10:25:51.273435+0000 mon.a (mon.0) 2924 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T10:25:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:51 vm04 bash[20742]: cluster 2026-03-10T10:25:51.273435+0000 mon.a (mon.0) 2924 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T10:25:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:51 vm07 bash[23367]: audit 2026-03-10T10:25:50.263126+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:51 vm07 bash[23367]: audit 2026-03-10T10:25:50.263126+0000 mon.a (mon.0) 2920 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]': finished 2026-03-10T10:25:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:51 vm07 bash[23367]: cluster 2026-03-10T10:25:50.267084+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T10:25:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:51 vm07 bash[23367]: cluster 2026-03-10T10:25:50.267084+0000 mon.a (mon.0) 2921 : cluster [DBG] osdmap e504: 8 total, 8 up, 8 in 2026-03-10T10:25:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:51 vm07 bash[23367]: audit 2026-03-10T10:25:50.274725+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:51 vm07 bash[23367]: audit 2026-03-10T10:25:50.274725+0000 mon.a (mon.0) 2922 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]: dispatch 2026-03-10T10:25:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:51 vm07 bash[23367]: audit 2026-03-10T10:25:51.266373+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:51 vm07 bash[23367]: audit 2026-03-10T10:25:51.266373+0000 mon.a (mon.0) 2923 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-100"}]': finished 2026-03-10T10:25:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:51 vm07 bash[23367]: cluster 2026-03-10T10:25:51.273435+0000 mon.a (mon.0) 2924 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T10:25:51.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:51 vm07 bash[23367]: cluster 2026-03-10T10:25:51.273435+0000 mon.a (mon.0) 2924 : cluster [DBG] osdmap e505: 8 total, 8 up, 8 in 2026-03-10T10:25:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:52 vm04 bash[28289]: cluster 2026-03-10T10:25:50.510102+0000 mgr.y (mgr.24422) 470 : cluster [DBG] pgmap v777: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:52 vm04 bash[28289]: cluster 2026-03-10T10:25:50.510102+0000 mgr.y (mgr.24422) 470 : cluster [DBG] pgmap v777: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:52 vm04 bash[20742]: cluster 2026-03-10T10:25:50.510102+0000 mgr.y (mgr.24422) 470 : cluster [DBG] pgmap v777: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:52 vm04 bash[20742]: cluster 2026-03-10T10:25:50.510102+0000 mgr.y (mgr.24422) 470 : cluster [DBG] pgmap v777: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:52.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:52 vm07 bash[23367]: cluster 2026-03-10T10:25:50.510102+0000 mgr.y (mgr.24422) 470 : cluster [DBG] pgmap v777: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:52.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:52 vm07 bash[23367]: cluster 2026-03-10T10:25:50.510102+0000 mgr.y (mgr.24422) 470 : cluster [DBG] pgmap v777: 292 pgs: 292 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:53.322 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:25:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:25:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:53 vm04 bash[28289]: cluster 2026-03-10T10:25:52.293622+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:53 vm04 bash[28289]: cluster 2026-03-10T10:25:52.293622+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:53 vm04 bash[28289]: cluster 2026-03-10T10:25:53.297680+0000 mon.a (mon.0) 2926 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:53 vm04 bash[28289]: cluster 2026-03-10T10:25:53.297680+0000 mon.a (mon.0) 2926 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:53 vm04 bash[28289]: audit 2026-03-10T10:25:53.298985+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? 
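Audit entries 2919-2923 record the cache-tier teardown: the overlay is detached first, then the tier relationship is removed. Equivalent CLI, a sketch using the base/cache pool pair named in the log:

    ceph osd tier remove-overlay test-rados-api-vm04-59491-6
    ceph osd tier remove test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-100

Separately, the mgr answered a Prometheus scrape with HTTP 503 above; a 503 from the prometheus module generally means it could not serve metrics at that moment, and ceph mgr module ls is one way to check module state.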
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:53 vm04 bash[28289]: audit 2026-03-10T10:25:53.298985+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:53 vm04 bash[20742]: cluster 2026-03-10T10:25:52.293622+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:53 vm04 bash[20742]: cluster 2026-03-10T10:25:52.293622+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:53 vm04 bash[20742]: cluster 2026-03-10T10:25:53.297680+0000 mon.a (mon.0) 2926 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:53 vm04 bash[20742]: cluster 2026-03-10T10:25:53.297680+0000 mon.a (mon.0) 2926 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:53 vm04 bash[20742]: audit 2026-03-10T10:25:53.298985+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:53 vm04 bash[20742]: audit 2026-03-10T10:25:53.298985+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:53 vm07 bash[23367]: cluster 2026-03-10T10:25:52.293622+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T10:25:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:53 vm07 bash[23367]: cluster 2026-03-10T10:25:52.293622+0000 mon.a (mon.0) 2925 : cluster [DBG] osdmap e506: 8 total, 8 up, 8 in 2026-03-10T10:25:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:53 vm07 bash[23367]: cluster 2026-03-10T10:25:53.297680+0000 mon.a (mon.0) 2926 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-10T10:25:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:53 vm07 bash[23367]: cluster 2026-03-10T10:25:53.297680+0000 mon.a (mon.0) 2926 : cluster [DBG] osdmap e507: 8 total, 8 up, 8 in 2026-03-10T10:25:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:53 vm07 bash[23367]: audit 2026-03-10T10:25:53.298985+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:53 vm07 bash[23367]: audit 2026-03-10T10:25:53.298985+0000 mon.a (mon.0) 2927 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:54 vm04 bash[28289]: cluster 2026-03-10T10:25:52.510484+0000 mgr.y (mgr.24422) 471 : cluster [DBG] pgmap v780: 260 pgs: 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:54 vm04 bash[28289]: cluster 2026-03-10T10:25:52.510484+0000 mgr.y (mgr.24422) 471 : cluster [DBG] pgmap v780: 260 pgs: 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:54 vm04 bash[28289]: audit 2026-03-10T10:25:54.297980+0000 mon.a (mon.0) 2928 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:54 vm04 bash[28289]: audit 2026-03-10T10:25:54.297980+0000 mon.a (mon.0) 2928 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:54 vm04 bash[28289]: cluster 2026-03-10T10:25:54.302876+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:54 vm04 bash[28289]: cluster 2026-03-10T10:25:54.302876+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:54 vm04 bash[28289]: audit 2026-03-10T10:25:54.304765+0000 mon.a (mon.0) 2930 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:54 vm04 bash[28289]: audit 2026-03-10T10:25:54.304765+0000 mon.a (mon.0) 2930 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:54 vm04 bash[28289]: audit 2026-03-10T10:25:54.306555+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:54 vm04 bash[28289]: audit 2026-03-10T10:25:54.306555+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? 
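Entries 2927-2930 show the next test pool being tagged with the "rados" application and then inspected via an osd dump. A sketch of the equivalent commands; the --yes-i-really-mean-it flag corresponds to the "yes_i_really_mean_it": true field in the dispatched mon command:

    ceph osd pool application enable test-rados-api-vm04-59491-102 rados --yes-i-really-mean-it
    ceph osd dump --format json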
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:54 vm04 bash[20742]: cluster 2026-03-10T10:25:52.510484+0000 mgr.y (mgr.24422) 471 : cluster [DBG] pgmap v780: 260 pgs: 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:54 vm04 bash[20742]: cluster 2026-03-10T10:25:52.510484+0000 mgr.y (mgr.24422) 471 : cluster [DBG] pgmap v780: 260 pgs: 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:54 vm04 bash[20742]: audit 2026-03-10T10:25:54.297980+0000 mon.a (mon.0) 2928 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:54 vm04 bash[20742]: audit 2026-03-10T10:25:54.297980+0000 mon.a (mon.0) 2928 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:54 vm04 bash[20742]: cluster 2026-03-10T10:25:54.302876+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:54 vm04 bash[20742]: cluster 2026-03-10T10:25:54.302876+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:54 vm04 bash[20742]: audit 2026-03-10T10:25:54.304765+0000 mon.a (mon.0) 2930 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:54 vm04 bash[20742]: audit 2026-03-10T10:25:54.304765+0000 mon.a (mon.0) 2930 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:54 vm04 bash[20742]: audit 2026-03-10T10:25:54.306555+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:25:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:54 vm04 bash[20742]: audit 2026-03-10T10:25:54.306555+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:25:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:54 vm07 bash[23367]: cluster 2026-03-10T10:25:52.510484+0000 mgr.y (mgr.24422) 471 : cluster [DBG] pgmap v780: 260 pgs: 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:54 vm07 bash[23367]: cluster 2026-03-10T10:25:52.510484+0000 mgr.y (mgr.24422) 471 : cluster [DBG] pgmap v780: 260 pgs: 260 active+clean; 8.3 MiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:25:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:54 vm07 bash[23367]: audit 2026-03-10T10:25:54.297980+0000 mon.a (mon.0) 2928 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:54 vm07 bash[23367]: audit 2026-03-10T10:25:54.297980+0000 mon.a (mon.0) 2928 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-102","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:25:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:54 vm07 bash[23367]: cluster 2026-03-10T10:25:54.302876+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-10T10:25:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:54 vm07 bash[23367]: cluster 2026-03-10T10:25:54.302876+0000 mon.a (mon.0) 2929 : cluster [DBG] osdmap e508: 8 total, 8 up, 8 in 2026-03-10T10:25:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:54 vm07 bash[23367]: audit 2026-03-10T10:25:54.304765+0000 mon.a (mon.0) 2930 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:54 vm07 bash[23367]: audit 2026-03-10T10:25:54.304765+0000 mon.a (mon.0) 2930 : audit [DBG] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd dump","format":"json"}]: dispatch 2026-03-10T10:25:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:54 vm07 bash[23367]: audit 2026-03-10T10:25:54.306555+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:25:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:54 vm07 bash[23367]: audit 2026-03-10T10:25:54.306555+0000 mon.a (mon.0) 2931 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "fingerprint_algorithm","val": "sha1"}]: dispatch 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:55 vm04 bash[28289]: cluster 2026-03-10T10:25:54.510850+0000 mgr.y (mgr.24422) 472 : cluster [DBG] pgmap v783: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:55 vm04 bash[28289]: cluster 2026-03-10T10:25:54.510850+0000 mgr.y (mgr.24422) 472 : cluster [DBG] pgmap v783: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:55 vm04 bash[28289]: audit 2026-03-10T10:25:55.383595+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:55 vm04 bash[28289]: audit 2026-03-10T10:25:55.383595+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "fingerprint_algorithm","val": "sha1"}]': finished 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:55 vm04 bash[28289]: cluster 2026-03-10T10:25:55.391211+0000 mon.a (mon.0) 2933 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:55 vm04 bash[28289]: cluster 2026-03-10T10:25:55.391211+0000 mon.a (mon.0) 2933 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:55 vm04 bash[28289]: audit 2026-03-10T10:25:55.392664+0000 mon.a (mon.0) 2934 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:55 vm04 bash[28289]: audit 2026-03-10T10:25:55.392664+0000 mon.a (mon.0) 2934 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:55 vm04 bash[20742]: cluster 2026-03-10T10:25:54.510850+0000 mgr.y (mgr.24422) 472 : cluster [DBG] pgmap v783: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:55 vm04 bash[20742]: cluster 2026-03-10T10:25:54.510850+0000 mgr.y (mgr.24422) 472 : cluster [DBG] pgmap v783: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:55 vm04 bash[20742]: audit 2026-03-10T10:25:55.383595+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? 
192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:55 vm04 bash[20742]: cluster 2026-03-10T10:25:55.391211+0000 mon.a (mon.0) 2933 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in
2026-03-10T10:25:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:55 vm04 bash[20742]: audit 2026-03-10T10:25:55.392664+0000 mon.a (mon.0) 2934 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T10:25:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:55 vm07 bash[23367]: cluster 2026-03-10T10:25:54.510850+0000 mgr.y (mgr.24422) 472 : cluster [DBG] pgmap v783: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:55 vm07 bash[23367]: audit 2026-03-10T10:25:55.383595+0000 mon.a (mon.0) 2932 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "fingerprint_algorithm","val": "sha1"}]': finished
2026-03-10T10:25:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:55 vm07 bash[23367]: cluster 2026-03-10T10:25:55.391211+0000 mon.a (mon.0) 2933 : cluster [DBG] osdmap e509: 8 total, 8 up, 8 in
2026-03-10T10:25:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:55 vm07 bash[23367]: audit 2026-03-10T10:25:55.392664+0000 mon.a (mon.0) 2934 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]: dispatch
2026-03-10T10:25:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:57 vm04 bash[28289]: audit 2026-03-10T10:25:56.387395+0000 mon.a (mon.0) 2935 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T10:25:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:57 vm04 bash[28289]: cluster 2026-03-10T10:25:56.390669+0000 mon.a (mon.0) 2936 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in
2026-03-10T10:25:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:57 vm04 bash[28289]: audit 2026-03-10T10:25:56.391321+0000 mon.a (mon.0) 2937 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T10:25:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:57 vm04 bash[28289]: cluster 2026-03-10T10:25:56.511291+0000 mgr.y (mgr.24422) 473 : cluster [DBG] pgmap v786: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:57 vm04 bash[20742]: audit 2026-03-10T10:25:56.387395+0000 mon.a (mon.0) 2935 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T10:25:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:57 vm04 bash[20742]: cluster 2026-03-10T10:25:56.390669+0000 mon.a (mon.0) 2936 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in
2026-03-10T10:25:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:57 vm04 bash[20742]: audit 2026-03-10T10:25:56.391321+0000 mon.a (mon.0) 2937 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T10:25:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:57 vm04 bash[20742]: cluster 2026-03-10T10:25:56.511291+0000 mgr.y (mgr.24422) 473 : cluster [DBG] pgmap v786: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:57 vm07 bash[23367]: audit 2026-03-10T10:25:56.387395+0000 mon.a (mon.0) 2935 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_chunk_algorithm","val": "fastcdc"}]': finished
2026-03-10T10:25:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:57 vm07 bash[23367]: cluster 2026-03-10T10:25:56.390669+0000 mon.a (mon.0) 2936 : cluster [DBG] osdmap e510: 8 total, 8 up, 8 in
2026-03-10T10:25:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:57 vm07 bash[23367]: audit 2026-03-10T10:25:56.391321+0000 mon.a (mon.0) 2937 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_cdc_chunk_size","val": "1024"}]: dispatch
2026-03-10T10:25:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:57 vm07 bash[23367]: cluster 2026-03-10T10:25:56.511291+0000 mgr.y (mgr.24422) 473 : cluster [DBG] pgmap v786: 292 pgs: 32 creating+peering, 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:25:58.701 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:58 vm07 bash[23367]: audit 2026-03-10T10:25:57.390957+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
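The audit pairs above map onto plain CLI; a minimal sketch, assuming a client.admin keyring on the test node (pool name and values copied verbatim from the entries above):
    # configure dedup on the test pool: fingerprinting, chunking algorithm, chunk size
    ceph osd pool set test-rados-api-vm04-59491-102 fingerprint_algorithm sha1
    ceph osd pool set test-rados-api-vm04-59491-102 dedup_chunk_algorithm fastcdc
    ceph osd pool set test-rados-api-vm04-59491-102 dedup_cdc_chunk_size 1024
Each setting shows up twice in the audit channel, once as 'dispatch' when the mon receives the command and once as 'finished' when the pool change commits (the osdmap epoch bumps e509 -> e510 -> e511 alongside).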
2026-03-10T10:25:58.702 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:58 vm07 bash[23367]: cluster 2026-03-10T10:25:57.394157+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in
2026-03-10T10:25:58.702 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:58 vm07 bash[23367]: audit 2026-03-10T10:25:57.435898+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:25:58.702 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:58 vm07 bash[23367]: audit 2026-03-10T10:25:57.436138+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-102"}]: dispatch
2026-03-10T10:25:58.702 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:58 vm07 bash[23367]: audit 2026-03-10T10:25:58.051516+0000 mon.a (mon.0) 2942 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:25:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:58 vm04 bash[28289]: audit 2026-03-10T10:25:57.390957+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T10:25:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:58 vm04 bash[28289]: cluster 2026-03-10T10:25:57.394157+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in
2026-03-10T10:25:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:58 vm04 bash[28289]: audit 2026-03-10T10:25:57.435898+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:25:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:58 vm04 bash[28289]: audit 2026-03-10T10:25:57.436138+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-102"}]: dispatch
2026-03-10T10:25:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:58 vm04 bash[28289]: audit 2026-03-10T10:25:58.051516+0000 mon.a (mon.0) 2942 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:25:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:58 vm04 bash[20742]: audit 2026-03-10T10:25:57.390957+0000 mon.a (mon.0) 2938 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-102","var": "dedup_cdc_chunk_size","val": "1024"}]': finished
2026-03-10T10:25:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:58 vm04 bash[20742]: cluster 2026-03-10T10:25:57.394157+0000 mon.a (mon.0) 2939 : cluster [DBG] osdmap e511: 8 total, 8 up, 8 in
2026-03-10T10:25:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:58 vm04 bash[20742]: audit 2026-03-10T10:25:57.435898+0000 mon.a (mon.0) 2940 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-6"}]: dispatch
2026-03-10T10:25:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:58 vm04 bash[20742]: audit 2026-03-10T10:25:57.436138+0000 mon.a (mon.0) 2941 : audit [INF] from='client.? 192.168.123.104:0/3515270359' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-6", "tierpool": "test-rados-api-vm04-59491-102"}]: dispatch
2026-03-10T10:25:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:58 vm04 bash[20742]: audit 2026-03-10T10:25:58.051516+0000 mon.a (mon.0) 2942 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:25:59.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:25:58 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:25:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:59 vm04 bash[28289]: cluster 2026-03-10T10:25:58.410185+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in
2026-03-10T10:25:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:59 vm04 bash[28289]: cluster 2026-03-10T10:25:58.511676+0000 mgr.y (mgr.24422) 474 : cluster [DBG] pgmap v789: 260 pgs: 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:25:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:59 vm04 bash[28289]: audit 2026-03-10T10:25:58.702522+0000 mgr.y (mgr.24422) 475 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:25:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:25:59 vm04 bash[28289]: cluster 2026-03-10T10:25:59.414586+0000 mon.a (mon.0) 2944 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in
2026-03-10T10:25:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:59 vm04 bash[20742]: cluster 2026-03-10T10:25:58.410185+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in
2026-03-10T10:25:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:59 vm04 bash[20742]: cluster 2026-03-10T10:25:58.511676+0000 mgr.y (mgr.24422) 474 : cluster [DBG] pgmap v789: 260 pgs: 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail
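Audit entries 2940/2941 are the cache-tier teardown step of the test; the equivalent CLI, a sketch under the same admin-session assumption (pool names taken from the log):
    # detach the overlay, then remove the tier relationship between the two test pools
    ceph osd tier remove-overlay test-rados-api-vm04-59491-6
    ceph osd tier remove test-rados-api-vm04-59491-6 test-rados-api-vm04-59491-102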
2026-03-10T10:25:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:59 vm04 bash[20742]: audit 2026-03-10T10:25:58.702522+0000 mgr.y (mgr.24422) 475 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:25:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:25:59 vm04 bash[20742]: cluster 2026-03-10T10:25:59.414586+0000 mon.a (mon.0) 2944 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in
2026-03-10T10:25:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:59 vm07 bash[23367]: cluster 2026-03-10T10:25:58.410185+0000 mon.a (mon.0) 2943 : cluster [DBG] osdmap e512: 8 total, 8 up, 8 in
2026-03-10T10:25:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:59 vm07 bash[23367]: cluster 2026-03-10T10:25:58.511676+0000 mgr.y (mgr.24422) 474 : cluster [DBG] pgmap v789: 260 pgs: 260 active+clean; 8.3 MiB data, 978 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:25:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:59 vm07 bash[23367]: audit 2026-03-10T10:25:58.702522+0000 mgr.y (mgr.24422) 475 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:25:59.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:25:59 vm07 bash[23367]: cluster 2026-03-10T10:25:59.414586+0000 mon.a (mon.0) 2944 : cluster [DBG] osdmap e513: 8 total, 8 up, 8 in
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:00 vm04 bash[28289]: audit 2026-03-10T10:25:59.434576+0000 mon.c (mon.2) 457 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:00 vm04 bash[28289]: audit 2026-03-10T10:25:59.439051+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:00 vm04 bash[28289]: audit 2026-03-10T10:25:59.439649+0000 mon.c (mon.2) 458 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:00 vm04 bash[28289]: audit 2026-03-10T10:25:59.439861+0000 mon.a (mon.0) 2946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:00 vm04 bash[28289]: audit 2026-03-10T10:25:59.440430+0000 mon.c (mon.2) 459 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm04-59491-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:00 vm04 bash[28289]: audit 2026-03-10T10:25:59.440666+0000 mon.a (mon.0) 2947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm04-59491-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:00 vm04 bash[20742]: audit 2026-03-10T10:25:59.434576+0000 mon.c (mon.2) 457 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:00 vm04 bash[20742]: audit 2026-03-10T10:25:59.439051+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:00 vm04 bash[20742]: audit 2026-03-10T10:25:59.439649+0000 mon.c (mon.2) 458 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:00 vm04 bash[20742]: audit 2026-03-10T10:25:59.439861+0000 mon.a (mon.0) 2946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:00 vm04 bash[20742]: audit 2026-03-10T10:25:59.440430+0000 mon.c (mon.2) 459 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm04-59491-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:00 vm04 bash[20742]: audit 2026-03-10T10:25:59.440666+0000 mon.a (mon.0) 2947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm04-59491-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:00 vm07 bash[23367]: audit 2026-03-10T10:25:59.434576+0000 mon.c (mon.2) 457 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:00 vm07 bash[23367]: audit 2026-03-10T10:25:59.439051+0000 mon.a (mon.0) 2945 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:00 vm07 bash[23367]: audit 2026-03-10T10:25:59.439649+0000 mon.c (mon.2) 458 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:00 vm07 bash[23367]: audit 2026-03-10T10:25:59.439861+0000 mon.a (mon.0) 2946 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:00 vm07 bash[23367]: audit 2026-03-10T10:25:59.440430+0000 mon.c (mon.2) 459 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm04-59491-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:00 vm07 bash[23367]: audit 2026-03-10T10:25:59.440666+0000 mon.a (mon.0) 2947 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm04-59491-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:01 vm07 bash[23367]: audit 2026-03-10T10:26:00.451741+0000 mon.a (mon.0) 2948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm04-59491-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:26:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:01 vm07 bash[23367]: cluster 2026-03-10T10:26:00.455081+0000 mon.a (mon.0) 2949 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in
2026-03-10T10:26:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:01 vm07 bash[23367]: audit 2026-03-10T10:26:00.458272+0000 mon.c (mon.2) 460 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm04-59491-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
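Audit entries 457-460 replay the usual erasure-coded tier setup: remove any stale profile and CRUSH rule, recreate the profile, then create the EC pool against it. A minimal CLI sketch, with names and parameters verbatim from the log:
    # recreate the EC profile (k=2, m=1, failure domain = osd), then an 8-PG erasure pool on it
    ceph osd erasure-code-profile rm testprofile-LibRadosTierECPP_vm04-59491-104
    ceph osd crush rule rm LibRadosTierECPP_vm04-59491-104
    ceph osd erasure-code-profile set testprofile-LibRadosTierECPP_vm04-59491-104 k=2 m=1 crush-failure-domain=osd
    ceph osd pool create LibRadosTierECPP_vm04-59491-104 8 8 erasure testprofile-LibRadosTierECPP_vm04-59491-104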
2026-03-10T10:26:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:01 vm07 bash[23367]: audit 2026-03-10T10:26:00.458538+0000 mon.a (mon.0) 2950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm04-59491-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:01 vm07 bash[23367]: cluster 2026-03-10T10:26:00.512023+0000 mgr.y (mgr.24422) 476 : cluster [DBG] pgmap v792: 228 pgs: 228 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:01 vm04 bash[28289]: audit 2026-03-10T10:26:00.451741+0000 mon.a (mon.0) 2948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm04-59491-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:26:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:01 vm04 bash[28289]: cluster 2026-03-10T10:26:00.455081+0000 mon.a (mon.0) 2949 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in
2026-03-10T10:26:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:01 vm04 bash[28289]: audit 2026-03-10T10:26:00.458272+0000 mon.c (mon.2) 460 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm04-59491-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:01 vm04 bash[28289]: audit 2026-03-10T10:26:00.458538+0000 mon.a (mon.0) 2950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm04-59491-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:01 vm04 bash[28289]: cluster 2026-03-10T10:26:00.512023+0000 mgr.y (mgr.24422) 476 : cluster [DBG] pgmap v792: 228 pgs: 228 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:01 vm04 bash[20742]: audit 2026-03-10T10:26:00.451741+0000 mon.a (mon.0) 2948 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-LibRadosTierECPP_vm04-59491-104", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:26:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:01 vm04 bash[20742]: cluster 2026-03-10T10:26:00.455081+0000 mon.a (mon.0) 2949 : cluster [DBG] osdmap e514: 8 total, 8 up, 8 in
2026-03-10T10:26:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:01 vm04 bash[20742]: audit 2026-03-10T10:26:00.458272+0000 mon.c (mon.2) 460 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm04-59491-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:01 vm04 bash[20742]: audit 2026-03-10T10:26:00.458538+0000 mon.a (mon.0) 2950 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm04-59491-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch
2026-03-10T10:26:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:01 vm04 bash[20742]: cluster 2026-03-10T10:26:00.512023+0000 mgr.y (mgr.24422) 476 : cluster [DBG] pgmap v792: 228 pgs: 228 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:02 vm07 bash[23367]: cluster 2026-03-10T10:26:01.473655+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in
2026-03-10T10:26:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:02 vm04 bash[28289]: cluster 2026-03-10T10:26:01.473655+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in
2026-03-10T10:26:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:02 vm04 bash[20742]: cluster 2026-03-10T10:26:01.473655+0000 mon.a (mon.0) 2951 : cluster [DBG] osdmap e515: 8 total, 8 up, 8 in
2026-03-10T10:26:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:26:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:26:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:26:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:03 vm07 bash[23367]: audit 2026-03-10T10:26:02.481386+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm04-59491-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm04-59491-104"}]': finished
2026-03-10T10:26:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:03 vm07 bash[23367]: cluster 2026-03-10T10:26:02.502303+0000 mon.a (mon.0) 2953 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in
2026-03-10T10:26:03.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:03 vm07 bash[23367]: cluster 2026-03-10T10:26:02.512460+0000 mgr.y (mgr.24422) 477 : cluster [DBG] pgmap v795: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:03 vm04 bash[28289]: audit 2026-03-10T10:26:02.481386+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm04-59491-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm04-59491-104"}]': finished
2026-03-10T10:26:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:03 vm04 bash[28289]: cluster 2026-03-10T10:26:02.502303+0000 mon.a (mon.0) 2953 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in
2026-03-10T10:26:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:03 vm04 bash[28289]: cluster 2026-03-10T10:26:02.512460+0000 mgr.y (mgr.24422) 477 : cluster [DBG] pgmap v795: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:03 vm04 bash[20742]: audit 2026-03-10T10:26:02.481386+0000 mon.a (mon.0) 2952 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "LibRadosTierECPP_vm04-59491-104", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-LibRadosTierECPP_vm04-59491-104"}]': finished
2026-03-10T10:26:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:03 vm04 bash[20742]: cluster 2026-03-10T10:26:02.502303+0000 mon.a (mon.0) 2953 : cluster [DBG] osdmap e516: 8 total, 8 up, 8 in
2026-03-10T10:26:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:03 vm04 bash[20742]: cluster 2026-03-10T10:26:02.512460+0000 mgr.y (mgr.24422) 477 : cluster [DBG] pgmap v795: 236 pgs: 8 unknown, 228 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:04 vm04 bash[28289]: cluster 2026-03-10T10:26:03.481210+0000 mon.a (mon.0) 2954 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:04.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:04 vm04 bash[28289]: cluster 2026-03-10T10:26:03.494888+0000 mon.a (mon.0) 2955 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in
2026-03-10T10:26:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:04 vm04 bash[20742]: cluster 2026-03-10T10:26:03.481210+0000 mon.a (mon.0) 2954 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:04.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:04 vm04 bash[20742]: cluster 2026-03-10T10:26:03.494888+0000 mon.a (mon.0) 2955 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in
2026-03-10T10:26:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:04 vm07 bash[23367]: cluster 2026-03-10T10:26:03.481210+0000 mon.a (mon.0) 2954 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
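The POOL_APP_NOT_ENABLED warning and the 8 'unknown' PGs in pgmap v795 are expected right after the pool create; the PGs peer over the next osdmap epochs and the test then tags the pools, as the entries below show. To inspect the same state interactively one could run (standard ceph CLI, a sketch):
    # list outstanding health checks and the per-state PG breakdown
    ceph health detail
    ceph pg stat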
2026-03-10T10:26:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:04 vm07 bash[23367]: cluster 2026-03-10T10:26:03.494888+0000 mon.a (mon.0) 2955 : cluster [DBG] osdmap e517: 8 total, 8 up, 8 in
2026-03-10T10:26:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:05 vm04 bash[28289]: cluster 2026-03-10T10:26:04.513033+0000 mgr.y (mgr.24422) 478 : cluster [DBG] pgmap v797: 236 pgs: 1 creating+activating, 4 creating+peering, 231 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:26:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:05 vm04 bash[28289]: cluster 2026-03-10T10:26:04.604853+0000 mon.a (mon.0) 2956 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in
2026-03-10T10:26:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:05 vm04 bash[28289]: audit 2026-03-10T10:26:04.611554+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:05.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:05 vm04 bash[28289]: audit 2026-03-10T10:26:04.626519+0000 mon.a (mon.0) 2957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:05 vm04 bash[20742]: cluster 2026-03-10T10:26:04.513033+0000 mgr.y (mgr.24422) 478 : cluster [DBG] pgmap v797: 236 pgs: 1 creating+activating, 4 creating+peering, 231 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:26:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:05 vm04 bash[20742]: cluster 2026-03-10T10:26:04.604853+0000 mon.a (mon.0) 2956 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in
2026-03-10T10:26:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:05 vm04 bash[20742]: audit 2026-03-10T10:26:04.611554+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:05.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:05 vm04 bash[20742]: audit 2026-03-10T10:26:04.626519+0000 mon.a (mon.0) 2957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:05 vm07 bash[23367]: cluster 2026-03-10T10:26:04.513033+0000 mgr.y (mgr.24422) 478 : cluster [DBG] pgmap v797: 236 pgs: 1 creating+activating, 4 creating+peering, 231 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:26:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:05 vm07 bash[23367]: cluster 2026-03-10T10:26:04.604853+0000 mon.a (mon.0) 2956 : cluster [DBG] osdmap e518: 8 total, 8 up, 8 in
2026-03-10T10:26:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:05 vm07 bash[23367]: audit 2026-03-10T10:26:04.611554+0000 mon.b (mon.1) 215 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:05 vm07 bash[23367]: audit 2026-03-10T10:26:04.626519+0000 mon.a (mon.0) 2957 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:06 vm04 bash[28289]: audit 2026-03-10T10:26:05.617669+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]': finished
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:26:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:06 vm04 bash[28289]: cluster 2026-03-10T10:26:05.628044+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-10T10:26:06.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:06 vm04 bash[28289]: cluster 2026-03-10T10:26:05.628044+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-10T10:26:06.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:06 vm04 bash[20742]: audit 2026-03-10T10:26:05.617669+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:26:06.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:06 vm04 bash[20742]: audit 2026-03-10T10:26:05.617669+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:26:06.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:06 vm04 bash[20742]: cluster 2026-03-10T10:26:05.628044+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-10T10:26:06.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:06 vm04 bash[20742]: cluster 2026-03-10T10:26:05.628044+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in 2026-03-10T10:26:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:06 vm07 bash[23367]: audit 2026-03-10T10:26:05.617669+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:26:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:06 vm07 bash[23367]: audit 2026-03-10T10:26:05.617669+0000 mon.a (mon.0) 2958 : audit [INF] from='client.? 
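The audit pair above (2957 dispatch, 2958 finished) is the monitor-side round trip for one `osd pool application enable` call from the rados API test client, equivalent to `ceph osd pool application enable <pool> rados --yes-i-really-mean-it` on the CLI. As a minimal sketch of issuing the same mon command through the python-rados binding (not the test suite's actual code; the conffile path is an assumed cephadm default):

```python
import json
import rados

# Connect as client.admin, matching entity='client.admin' in the audit
# entries above; /etc/ceph/ceph.conf is an assumption for this cluster.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
cluster.connect()

# The same JSON payload the monitor logged at dispatch time (audit 2957).
cmd = json.dumps({
    "prefix": "osd pool application enable",
    "pool": "test-rados-api-vm04-59491-107",
    "app": "rados",
    "yes_i_really_mean_it": True,
})
ret, outbuf, outs = cluster.mon_command(cmd, b'')
assert ret == 0, outs  # a 'finished' audit entry corresponds to ret == 0
```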
2026-03-10T10:26:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:06 vm07 bash[23367]: cluster 2026-03-10T10:26:05.628044+0000 mon.a (mon.0) 2959 : cluster [DBG] osdmap e519: 8 total, 8 up, 8 in
2026-03-10T10:26:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:07 vm04 bash[28289]: cluster 2026-03-10T10:26:06.513424+0000 mgr.y (mgr.24422) 479 : cluster [DBG] pgmap v800: 268 pgs: 32 unknown, 1 creating+activating, 4 creating+peering, 231 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:26:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:07 vm04 bash[28289]: cluster 2026-03-10T10:26:06.645900+0000 mon.a (mon.0) 2960 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in
2026-03-10T10:26:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:07 vm04 bash[28289]: audit 2026-03-10T10:26:06.652594+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:07.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:07 vm04 bash[28289]: audit 2026-03-10T10:26:06.655209+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:07 vm04 bash[20742]: cluster 2026-03-10T10:26:06.513424+0000 mgr.y (mgr.24422) 479 : cluster [DBG] pgmap v800: 268 pgs: 32 unknown, 1 creating+activating, 4 creating+peering, 231 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:26:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:07 vm04 bash[20742]: cluster 2026-03-10T10:26:06.645900+0000 mon.a (mon.0) 2960 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in
2026-03-10T10:26:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:07 vm04 bash[20742]: audit 2026-03-10T10:26:06.652594+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:07.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:07 vm04 bash[20742]: audit 2026-03-10T10:26:06.655209+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:07 vm07 bash[23367]: cluster 2026-03-10T10:26:06.513424+0000 mgr.y (mgr.24422) 479 : cluster [DBG] pgmap v800: 268 pgs: 32 unknown, 1 creating+activating, 4 creating+peering, 231 active+clean; 455 KiB data, 957 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
2026-03-10T10:26:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:07 vm07 bash[23367]: cluster 2026-03-10T10:26:06.645900+0000 mon.a (mon.0) 2960 : cluster [DBG] osdmap e520: 8 total, 8 up, 8 in
2026-03-10T10:26:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:07 vm07 bash[23367]: audit 2026-03-10T10:26:06.652594+0000 mon.b (mon.1) 216 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:07 vm07 bash[23367]: audit 2026-03-10T10:26:06.655209+0000 mon.a (mon.0) 2961 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:08 vm04 bash[28289]: audit 2026-03-10T10:26:07.640114+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:08 vm04 bash[28289]: audit 2026-03-10T10:26:07.650690+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:08 vm04 bash[28289]: cluster 2026-03-10T10:26:07.659640+0000 mon.a (mon.0) 2963 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in
2026-03-10T10:26:08.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:08 vm04 bash[28289]: audit 2026-03-10T10:26:07.666516+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:08 vm04 bash[20742]: audit 2026-03-10T10:26:07.640114+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:08 vm04 bash[20742]: audit 2026-03-10T10:26:07.650690+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:08 vm04 bash[20742]: cluster 2026-03-10T10:26:07.659640+0000 mon.a (mon.0) 2963 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in
2026-03-10T10:26:08.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:08 vm04 bash[20742]: audit 2026-03-10T10:26:07.666516+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:09.015 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:26:08 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:26:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:08 vm07 bash[23367]: audit 2026-03-10T10:26:07.640114+0000 mon.a (mon.0) 2962 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-107-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:08 vm07 bash[23367]: audit 2026-03-10T10:26:07.650690+0000 mon.b (mon.1) 217 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:08 vm07 bash[23367]: cluster 2026-03-10T10:26:07.659640+0000 mon.a (mon.0) 2963 : cluster [DBG] osdmap e521: 8 total, 8 up, 8 in
2026-03-10T10:26:09.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:08 vm07 bash[23367]: audit 2026-03-10T10:26:07.666516+0000 mon.a (mon.0) 2964 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:09 vm07 bash[23367]: cluster 2026-03-10T10:26:08.514070+0000 mgr.y (mgr.24422) 480 : cluster [DBG] pgmap v803: 300 pgs: 34 unknown, 1 creating+activating, 265 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:09 vm07 bash[23367]: audit 2026-03-10T10:26:08.672944+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]': finished
2026-03-10T10:26:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:09 vm07 bash[23367]: audit 2026-03-10T10:26:08.674507+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-107", "overlaypool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:09 vm07 bash[23367]: cluster 2026-03-10T10:26:08.677116+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in
2026-03-10T10:26:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:09 vm07 bash[23367]: audit 2026-03-10T10:26:08.678372+0000 mon.a (mon.0) 2967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-107", "overlaypool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:10.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:09 vm07 bash[23367]: audit 2026-03-10T10:26:08.712062+0000 mgr.y (mgr.24422) 481 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:09 vm04 bash[28289]: cluster 2026-03-10T10:26:08.514070+0000 mgr.y (mgr.24422) 480 : cluster [DBG] pgmap v803: 300 pgs: 34 unknown, 1 creating+activating, 265 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:09 vm04 bash[28289]: audit 2026-03-10T10:26:08.672944+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]': finished
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:09 vm04 bash[28289]: audit 2026-03-10T10:26:08.674507+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-107", "overlaypool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:09 vm04 bash[28289]: cluster 2026-03-10T10:26:08.677116+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:09 vm04 bash[28289]: audit 2026-03-10T10:26:08.678372+0000 mon.a (mon.0) 2967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-107", "overlaypool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:09 vm04 bash[28289]: audit 2026-03-10T10:26:08.712062+0000 mgr.y (mgr.24422) 481 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:09 vm04 bash[20742]: cluster 2026-03-10T10:26:08.514070+0000 mgr.y (mgr.24422) 480 : cluster [DBG] pgmap v803: 300 pgs: 34 unknown, 1 creating+activating, 265 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:09 vm04 bash[20742]: audit 2026-03-10T10:26:08.672944+0000 mon.a (mon.0) 2965 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]': finished
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:09 vm04 bash[20742]: audit 2026-03-10T10:26:08.674507+0000 mon.b (mon.1) 218 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-107", "overlaypool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:09 vm04 bash[20742]: cluster 2026-03-10T10:26:08.677116+0000 mon.a (mon.0) 2966 : cluster [DBG] osdmap e522: 8 total, 8 up, 8 in
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:09 vm04 bash[20742]: audit 2026-03-10T10:26:08.678372+0000 mon.a (mon.0) 2967 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-107", "overlaypool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:09 vm04 bash[20742]: audit 2026-03-10T10:26:08.712062+0000 mgr.y (mgr.24422) 481 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:26:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:10 vm07 bash[23367]: audit 2026-03-10T10:26:09.732148+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-107", "overlaypool": "test-rados-api-vm04-59491-107-cache"}]': finished
2026-03-10T10:26:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:10 vm07 bash[23367]: audit 2026-03-10T10:26:09.734247+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-107-cache", "mode": "writeback"}]: dispatch
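Audit entries 2964 through 2968 above, and 2970/2972 just below, trace the standard cache-tier attach sequence: `osd tier add`, then `osd tier set-overlay`, then `osd tier cache-mode ... writeback`. Continuing the earlier sketch (same hypothetical `cluster` handle), each call below produces one dispatch/finished pair like those in the log:

```python
def mon_cmd(cluster, **kwargs):
    """Issue one mon command; each call appears in the mon audit log as
    a dispatch entry followed, on success, by a 'finished' entry."""
    ret, _, outs = cluster.mon_command(json.dumps(kwargs), b'')
    if ret != 0:
        raise RuntimeError(f"{kwargs['prefix']} failed: {outs}")

base = "test-rados-api-vm04-59491-107"
cache = "test-rados-api-vm04-59491-107-cache"

# Same order as the audit trail: attach the tier, point the overlay at
# it, then put the cache pool into writeback mode.
mon_cmd(cluster, prefix="osd tier add", pool=base, tierpool=cache)
mon_cmd(cluster, prefix="osd tier set-overlay", pool=base, overlaypool=cache)
mon_cmd(cluster, prefix="osd tier cache-mode", pool=cache, mode="writeback")
```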
2026-03-10T10:26:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:10 vm07 bash[23367]: cluster 2026-03-10T10:26:09.735206+0000 mon.a (mon.0) 2969 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in
2026-03-10T10:26:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:10 vm07 bash[23367]: audit 2026-03-10T10:26:09.736831+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-107-cache", "mode": "writeback"}]: dispatch
2026-03-10T10:26:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:10 vm07 bash[23367]: cluster 2026-03-10T10:26:10.732331+0000 mon.a (mon.0) 2971 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:26:11.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:10 vm07 bash[23367]: audit 2026-03-10T10:26:10.735570+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-107-cache", "mode": "writeback"}]': finished
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:10 vm04 bash[28289]: audit 2026-03-10T10:26:09.732148+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-107", "overlaypool": "test-rados-api-vm04-59491-107-cache"}]': finished
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:10 vm04 bash[28289]: audit 2026-03-10T10:26:09.734247+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-107-cache", "mode": "writeback"}]: dispatch
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:10 vm04 bash[28289]: cluster 2026-03-10T10:26:09.735206+0000 mon.a (mon.0) 2969 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:10 vm04 bash[28289]: audit 2026-03-10T10:26:09.736831+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-107-cache", "mode": "writeback"}]: dispatch
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:10 vm04 bash[28289]: cluster 2026-03-10T10:26:10.732331+0000 mon.a (mon.0) 2971 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:10 vm04 bash[28289]: audit 2026-03-10T10:26:10.735570+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-107-cache", "mode": "writeback"}]': finished
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:10 vm04 bash[20742]: audit 2026-03-10T10:26:09.732148+0000 mon.a (mon.0) 2968 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-107", "overlaypool": "test-rados-api-vm04-59491-107-cache"}]': finished
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:10 vm04 bash[20742]: audit 2026-03-10T10:26:09.734247+0000 mon.b (mon.1) 219 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-107-cache", "mode": "writeback"}]: dispatch
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:10 vm04 bash[20742]: cluster 2026-03-10T10:26:09.735206+0000 mon.a (mon.0) 2969 : cluster [DBG] osdmap e523: 8 total, 8 up, 8 in
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:10 vm04 bash[20742]: audit 2026-03-10T10:26:09.736831+0000 mon.a (mon.0) 2970 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-107-cache", "mode": "writeback"}]: dispatch
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:10 vm04 bash[20742]: cluster 2026-03-10T10:26:10.732331+0000 mon.a (mon.0) 2971 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:26:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:10 vm04 bash[20742]: audit 2026-03-10T10:26:10.735570+0000 mon.a (mon.0) 2972 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-107-cache", "mode": "writeback"}]': finished
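The CACHE_POOL_NO_HIT_SET warning (cluster log entry 2971) is the expected side effect of flipping the cache pool to writeback before any hit_set has been configured, and the test tears the tier down moments later rather than fixing it. On a tier that is meant to stay, the check is cleared by giving the cache pool a hit_set; a sketch reusing the `mon_cmd` helper above, with illustrative values that are not taken from this run:

```python
# A bloom hit_set clears CACHE_POOL_NO_HIT_SET; the sizing values here
# are illustrative assumptions, not settings used by this test run.
for var, val in (("hit_set_type", "bloom"),
                 ("hit_set_count", "8"),
                 ("hit_set_period", "3600")):
    mon_cmd(cluster, prefix="osd pool set", pool=cache, var=var, val=val)
```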
2026-03-10T10:26:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:11 vm07 bash[23367]: cluster 2026-03-10T10:26:10.514377+0000 mgr.y (mgr.24422) 482 : cluster [DBG] pgmap v806: 300 pgs: 300 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:11 vm07 bash[23367]: cluster 2026-03-10T10:26:10.741058+0000 mon.a (mon.0) 2973 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in
2026-03-10T10:26:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:11 vm07 bash[23367]: audit 2026-03-10T10:26:10.778299+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-107"}]: dispatch
2026-03-10T10:26:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:11 vm07 bash[23367]: audit 2026-03-10T10:26:10.781556+0000 mon.a (mon.0) 2974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-107"}]: dispatch
2026-03-10T10:26:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:11 vm04 bash[28289]: cluster 2026-03-10T10:26:10.514377+0000 mgr.y (mgr.24422) 482 : cluster [DBG] pgmap v806: 300 pgs: 300 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:11 vm04 bash[28289]: cluster 2026-03-10T10:26:10.741058+0000 mon.a (mon.0) 2973 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in
2026-03-10T10:26:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:11 vm04 bash[28289]: audit 2026-03-10T10:26:10.778299+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-107"}]: dispatch
2026-03-10T10:26:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:11 vm04 bash[28289]: audit 2026-03-10T10:26:10.781556+0000 mon.a (mon.0) 2974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-107"}]: dispatch
2026-03-10T10:26:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:11 vm04 bash[20742]: cluster 2026-03-10T10:26:10.514377+0000 mgr.y (mgr.24422) 482 : cluster [DBG] pgmap v806: 300 pgs: 300 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:11 vm04 bash[20742]: cluster 2026-03-10T10:26:10.741058+0000 mon.a (mon.0) 2973 : cluster [DBG] osdmap e524: 8 total, 8 up, 8 in
2026-03-10T10:26:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:11 vm04 bash[20742]: audit 2026-03-10T10:26:10.778299+0000 mon.b (mon.1) 220 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-107"}]: dispatch
2026-03-10T10:26:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:11 vm04 bash[20742]: audit 2026-03-10T10:26:10.781556+0000 mon.a (mon.0) 2974 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-107"}]: dispatch
2026-03-10T10:26:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:12 vm04 bash[28289]: audit 2026-03-10T10:26:11.780150+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-107"}]': finished
2026-03-10T10:26:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:12 vm04 bash[28289]: cluster 2026-03-10T10:26:11.783572+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in
2026-03-10T10:26:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:12 vm04 bash[28289]: audit 2026-03-10T10:26:11.789298+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:12 vm04 bash[28289]: audit 2026-03-10T10:26:11.791700+0000 mon.a (mon.0) 2977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:13.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:12 vm04 bash[20742]: audit 2026-03-10T10:26:11.780150+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-107"}]': finished
2026-03-10T10:26:13.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:12 vm04 bash[20742]: cluster 2026-03-10T10:26:11.783572+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in
2026-03-10T10:26:13.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:12 vm04 bash[20742]: audit 2026-03-10T10:26:11.789298+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:13.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:12 vm04 bash[20742]: audit 2026-03-10T10:26:11.791700+0000 mon.a (mon.0) 2977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:13.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:26:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:26:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:26:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:12 vm07 bash[23367]: audit 2026-03-10T10:26:11.780150+0000 mon.a (mon.0) 2975 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-107"}]': finished
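Audit entries 2974 through 2977 start the teardown, which runs in the reverse order of the attach: the overlay has to be dropped before `osd tier remove` will detach the cache pool (the empty test pools have nothing to flush first). Continuing the same sketch:

```python
# Reverse of the attach sequence: drop the overlay, then detach the tier.
mon_cmd(cluster, prefix="osd tier remove-overlay", pool=base)
mon_cmd(cluster, prefix="osd tier remove", pool=base, tierpool=cache)
```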
2026-03-10T10:26:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:12 vm07 bash[23367]: cluster 2026-03-10T10:26:11.783572+0000 mon.a (mon.0) 2976 : cluster [DBG] osdmap e525: 8 total, 8 up, 8 in
2026-03-10T10:26:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:12 vm07 bash[23367]: audit 2026-03-10T10:26:11.789298+0000 mon.b (mon.1) 221 : audit [INF] from='client.? 192.168.123.104:0/233388950' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:12 vm07 bash[23367]: audit 2026-03-10T10:26:11.791700+0000 mon.a (mon.0) 2977 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]: dispatch
2026-03-10T10:26:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:13 vm04 bash[28289]: cluster 2026-03-10T10:26:12.514781+0000 mgr.y (mgr.24422) 483 : cluster [DBG] pgmap v809: 300 pgs: 300 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:13 vm04 bash[28289]: cluster 2026-03-10T10:26:12.820787+0000 mon.a (mon.0) 2978 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:26:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:13 vm04 bash[28289]: audit 2026-03-10T10:26:12.840504+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]': finished
2026-03-10T10:26:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:13 vm04 bash[28289]: cluster 2026-03-10T10:26:12.844474+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in
2026-03-10T10:26:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:13 vm04 bash[28289]: audit 2026-03-10T10:26:13.057582+0000 mon.a (mon.0) 2981 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:26:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:13 vm04 bash[20742]: cluster 2026-03-10T10:26:12.514781+0000 mgr.y (mgr.24422) 483 : cluster [DBG] pgmap v809: 300 pgs: 300 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:13 vm04 bash[20742]: cluster 2026-03-10T10:26:12.820787+0000 mon.a (mon.0) 2978 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:26:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:13 vm04 bash[20742]: audit 2026-03-10T10:26:12.840504+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]': finished
2026-03-10T10:26:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:13 vm04 bash[20742]: cluster 2026-03-10T10:26:12.844474+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in
2026-03-10T10:26:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:13 vm04 bash[20742]: audit 2026-03-10T10:26:13.057582+0000 mon.a (mon.0) 2981 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:26:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:13 vm07 bash[23367]: cluster 2026-03-10T10:26:12.514781+0000 mgr.y (mgr.24422) 483 : cluster [DBG] pgmap v809: 300 pgs: 300 active+clean; 455 KiB data, 959 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:13 vm07 bash[23367]: cluster 2026-03-10T10:26:12.820787+0000 mon.a (mon.0) 2978 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:26:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:13 vm07 bash[23367]: audit 2026-03-10T10:26:12.840504+0000 mon.a (mon.0) 2979 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-107", "tierpool": "test-rados-api-vm04-59491-107-cache"}]': finished 2026-03-10T10:26:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:13 vm07 bash[23367]: cluster 2026-03-10T10:26:12.844474+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-10T10:26:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:13 vm07 bash[23367]: cluster 2026-03-10T10:26:12.844474+0000 mon.a (mon.0) 2980 : cluster [DBG] osdmap e526: 8 total, 8 up, 8 in 2026-03-10T10:26:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:13 vm07 bash[23367]: audit 2026-03-10T10:26:13.057582+0000 mon.a (mon.0) 2981 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:13 vm07 bash[23367]: audit 2026-03-10T10:26:13.057582+0000 mon.a (mon.0) 2981 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:14 vm04 bash[28289]: cluster 2026-03-10T10:26:13.852529+0000 mon.a (mon.0) 2982 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T10:26:15.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:14 vm04 bash[28289]: cluster 2026-03-10T10:26:13.852529+0000 mon.a (mon.0) 2982 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T10:26:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:14 vm04 bash[20742]: cluster 2026-03-10T10:26:13.852529+0000 mon.a (mon.0) 2982 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T10:26:15.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:14 vm04 bash[20742]: cluster 2026-03-10T10:26:13.852529+0000 mon.a (mon.0) 2982 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T10:26:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:14 vm07 bash[23367]: cluster 2026-03-10T10:26:13.852529+0000 mon.a (mon.0) 2982 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T10:26:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:14 vm07 bash[23367]: cluster 2026-03-10T10:26:13.852529+0000 mon.a (mon.0) 2982 : cluster [DBG] osdmap e527: 8 total, 8 up, 8 in 2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:15 vm04 bash[28289]: cluster 2026-03-10T10:26:14.515175+0000 mgr.y (mgr.24422) 484 : cluster [DBG] pgmap v812: 268 pgs: 268 active+clean; 455 KiB data, 960 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:15 vm04 bash[28289]: cluster 2026-03-10T10:26:14.515175+0000 mgr.y (mgr.24422) 484 : cluster [DBG] pgmap v812: 268 pgs: 268 active+clean; 455 KiB data, 960 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:15 vm04 bash[28289]: cluster 2026-03-10T10:26:14.848742+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:15 vm04 bash[28289]: cluster 2026-03-10T10:26:14.848742+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in 2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:15 vm04 bash[28289]: audit 2026-03-10T10:26:14.865280+0000 mon.c (mon.2) 461 : 
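The audit records above capture the test tearing down the cache tier on pool test-rados-api-vm04-59491-107: first "osd tier remove-overlay", then "osd tier remove", each logged once as dispatch on the mon that received it and once as finished on the leader mon.a. As a minimal sketch, the equivalent ceph CLI sequence would be the following; the test presumably issues these as mon commands over librados from rados/test.sh, so the CLI form is illustrative rather than the literal call:

    # detach the overlay so client I/O stops routing through the cache pool
    $ ceph osd tier remove-overlay test-rados-api-vm04-59491-107
    # unstack the cache pool from its base pool
    $ ceph osd tier remove test-rados-api-vm04-59491-107 test-rados-api-vm04-59491-107-cache

Once the overlay is detached, the CACHE_POOL_NO_HIT_SET health check clears, which is the "Health check cleared" cluster record visible above.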
2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:15 vm04 bash[28289]: audit 2026-03-10T10:26:14.868861+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:15 vm04 bash[28289]: audit 2026-03-10T10:26:14.869405+0000 mon.c (mon.2) 462 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:15 vm04 bash[28289]: audit 2026-03-10T10:26:14.870372+0000 mon.a (mon.0) 2985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:15 vm04 bash[28289]: audit 2026-03-10T10:26:14.870856+0000 mon.c (mon.2) 463 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:15 vm04 bash[28289]: audit 2026-03-10T10:26:14.871836+0000 mon.a (mon.0) 2986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:15 vm04 bash[20742]: cluster 2026-03-10T10:26:14.515175+0000 mgr.y (mgr.24422) 484 : cluster [DBG] pgmap v812: 268 pgs: 268 active+clean; 455 KiB data, 960 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:15 vm04 bash[20742]: cluster 2026-03-10T10:26:14.848742+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in
2026-03-10T10:26:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:15 vm04 bash[20742]: audit 2026-03-10T10:26:14.865280+0000 mon.c (mon.2) 461 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:15 vm04 bash[20742]: audit 2026-03-10T10:26:14.868861+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:15 vm04 bash[20742]: audit 2026-03-10T10:26:14.869405+0000 mon.c (mon.2) 462 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:15 vm04 bash[20742]: audit 2026-03-10T10:26:14.870372+0000 mon.a (mon.0) 2985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:15 vm04 bash[20742]: audit 2026-03-10T10:26:14.870856+0000 mon.c (mon.2) 463 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:16.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:15 vm04 bash[20742]: audit 2026-03-10T10:26:14.871836+0000 mon.a (mon.0) 2986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:15 vm07 bash[23367]: cluster 2026-03-10T10:26:14.515175+0000 mgr.y (mgr.24422) 484 : cluster [DBG] pgmap v812: 268 pgs: 268 active+clean; 455 KiB data, 960 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:26:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:15 vm07 bash[23367]: cluster 2026-03-10T10:26:14.848742+0000 mon.a (mon.0) 2983 : cluster [DBG] osdmap e528: 8 total, 8 up, 8 in
2026-03-10T10:26:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:15 vm07 bash[23367]: audit 2026-03-10T10:26:14.865280+0000 mon.c (mon.2) 461 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:15 vm07 bash[23367]: audit 2026-03-10T10:26:14.868861+0000 mon.a (mon.0) 2984 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:15 vm07 bash[23367]: audit 2026-03-10T10:26:14.869405+0000 mon.c (mon.2) 462 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:15 vm07 bash[23367]: audit 2026-03-10T10:26:14.870372+0000 mon.a (mon.0) 2985 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:15 vm07 bash[23367]: audit 2026-03-10T10:26:14.870856+0000 mon.c (mon.2) 463 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:15 vm07 bash[23367]: audit 2026-03-10T10:26:14.871836+0000 mon.a (mon.0) 2986 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch
2026-03-10T10:26:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:16 vm04 bash[28289]: audit 2026-03-10T10:26:15.869814+0000 mon.a (mon.0) 2987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:26:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:16 vm04 bash[28289]: cluster 2026-03-10T10:26:15.872794+0000 mon.a (mon.0) 2988 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in
2026-03-10T10:26:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:16 vm04 bash[28289]: audit 2026-03-10T10:26:15.875849+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:17.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:16 vm04 bash[28289]: audit 2026-03-10T10:26:15.876229+0000 mon.a (mon.0) 2989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:16 vm04 bash[20742]: audit 2026-03-10T10:26:15.869814+0000 mon.a (mon.0) 2987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:26:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:16 vm04 bash[20742]: cluster 2026-03-10T10:26:15.872794+0000 mon.a (mon.0) 2988 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in
2026-03-10T10:26:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:16 vm04 bash[20742]: audit 2026-03-10T10:26:15.875849+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:17.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:16 vm04 bash[20742]: audit 2026-03-10T10:26:15.876229+0000 mon.a (mon.0) 2989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:16 vm07 bash[23367]: audit 2026-03-10T10:26:15.869814+0000 mon.a (mon.0) 2987 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-109", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished
2026-03-10T10:26:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:16 vm07 bash[23367]: cluster 2026-03-10T10:26:15.872794+0000 mon.a (mon.0) 2988 : cluster [DBG] osdmap e529: 8 total, 8 up, 8 in
2026-03-10T10:26:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:16 vm07 bash[23367]: audit 2026-03-10T10:26:15.875849+0000 mon.c (mon.2) 464 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:17.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:16 vm07 bash[23367]: audit 2026-03-10T10:26:15.876229+0000 mon.a (mon.0) 2989 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-109"}]: dispatch
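The erasure-code-profile and pool-create audit entries above correspond to the test provisioning a fresh erasure-coded base pool. A minimal CLI sketch of the same sequence, with the parameters taken from the logged mon commands (the leading rm calls only clear leftovers from an earlier pool of the same name, so failures there would be expected and harmless):

    # remove any stale profile and CRUSH rule left by a previous test pool
    $ ceph osd erasure-code-profile rm testprofile-test-rados-api-vm04-59491-109
    $ ceph osd crush rule rm test-rados-api-vm04-59491-109
    # k=2 data chunks + m=1 coding chunk; failure domain osd, since k+m=3
    # cannot be spread across the two hosts in this job
    $ ceph osd erasure-code-profile set testprofile-test-rados-api-vm04-59491-109 k=2 m=1 crush-failure-domain=osd
    # 8 PGs / 8 PGP backed by that profile; completion shows up below as the 'finished' audit record
    $ ceph osd pool create test-rados-api-vm04-59491-109 8 8 erasure testprofile-test-rados-api-vm04-59491-109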
2026-03-10T10:26:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:17 vm04 bash[28289]: cluster 2026-03-10T10:26:16.515615+0000 mgr.y (mgr.24422) 485 : cluster [DBG] pgmap v815: 236 pgs: 236 active+clean; 455 KiB data, 960 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:17 vm04 bash[28289]: cluster 2026-03-10T10:26:16.898464+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in
2026-03-10T10:26:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:17 vm04 bash[20742]: cluster 2026-03-10T10:26:16.515615+0000 mgr.y (mgr.24422) 485 : cluster [DBG] pgmap v815: 236 pgs: 236 active+clean; 455 KiB data, 960 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:17 vm04 bash[20742]: cluster 2026-03-10T10:26:16.898464+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in
2026-03-10T10:26:18.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:17 vm07 bash[23367]: cluster 2026-03-10T10:26:16.515615+0000 mgr.y (mgr.24422) 485 : cluster [DBG] pgmap v815: 236 pgs: 236 active+clean; 455 KiB data, 960 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:18.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:17 vm07 bash[23367]: cluster 2026-03-10T10:26:16.898464+0000 mon.a (mon.0) 2990 : cluster [DBG] osdmap e530: 8 total, 8 up, 8 in
2026-03-10T10:26:19.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:26:18 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:26:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:18 vm07 bash[23367]: audit 2026-03-10T10:26:17.892287+0000 mon.a (mon.0) 2991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-109"}]': finished
2026-03-10T10:26:19.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:18 vm07 bash[23367]: cluster 2026-03-10T10:26:17.895501+0000 mon.a (mon.0) 2992 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in
2026-03-10T10:26:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:18 vm04 bash[28289]: audit 2026-03-10T10:26:17.892287+0000 mon.a (mon.0) 2991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-109"}]': finished
2026-03-10T10:26:19.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:18 vm04 bash[28289]: cluster 2026-03-10T10:26:17.895501+0000 mon.a (mon.0) 2992 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in
2026-03-10T10:26:19.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:18 vm04 bash[20742]: audit 2026-03-10T10:26:17.892287+0000 mon.a (mon.0) 2991 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-109", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-109"}]': finished
2026-03-10T10:26:19.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:18 vm04 bash[20742]: cluster 2026-03-10T10:26:17.895501+0000 mon.a (mon.0) 2992 : cluster [DBG] osdmap e531: 8 total, 8 up, 8 in
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:19 vm04 bash[28289]: cluster 2026-03-10T10:26:18.516344+0000 mgr.y (mgr.24422) 486 : cluster [DBG] pgmap v818: 244 pgs: 4 creating+peering, 4 unknown, 236 active+clean; 455 KiB data, 960 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:19 vm04 bash[28289]: audit 2026-03-10T10:26:18.722600+0000 mgr.y (mgr.24422) 487 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:19 vm04 bash[28289]: cluster 2026-03-10T10:26:18.893320+0000 mon.a (mon.0) 2993 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:19 vm04 bash[28289]: cluster 2026-03-10T10:26:18.927947+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:19 vm04 bash[28289]: audit 2026-03-10T10:26:18.936262+0000 mon.c (mon.2) 465 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:19 vm04 bash[28289]: audit 2026-03-10T10:26:18.939228+0000 mon.a (mon.0) 2995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:19 vm04 bash[20742]: cluster 2026-03-10T10:26:18.516344+0000 mgr.y (mgr.24422) 486 : cluster [DBG] pgmap v818: 244 pgs: 4 creating+peering, 4 unknown, 236 active+clean; 455 KiB data, 960 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:19 vm04 bash[20742]: audit 2026-03-10T10:26:18.722600+0000 mgr.y (mgr.24422) 487 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:19 vm04 bash[20742]: cluster 2026-03-10T10:26:18.893320+0000 mon.a (mon.0) 2993 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:19 vm04 bash[20742]: cluster 2026-03-10T10:26:18.927947+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:19 vm04 bash[20742]: audit 2026-03-10T10:26:18.936262+0000 mon.c (mon.2) 465 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:19 vm04 bash[20742]: audit 2026-03-10T10:26:18.939228+0000 mon.a (mon.0) 2995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:19 vm07 bash[23367]: cluster 2026-03-10T10:26:18.516344+0000 mgr.y (mgr.24422) 486 : cluster [DBG] pgmap v818: 244 pgs: 4 creating+peering, 4 unknown, 236 active+clean; 455 KiB data, 960 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:19 vm07 bash[23367]: audit 2026-03-10T10:26:18.722600+0000 mgr.y (mgr.24422) 487 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:26:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:19 vm07 bash[23367]: cluster 2026-03-10T10:26:18.893320+0000 mon.a (mon.0) 2993 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:19 vm07 bash[23367]: cluster 2026-03-10T10:26:18.927947+0000 mon.a (mon.0) 2994 : cluster [DBG] osdmap e532: 8 total, 8 up, 8 in
2026-03-10T10:26:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:19 vm07 bash[23367]: audit 2026-03-10T10:26:18.936262+0000 mon.c (mon.2) 465 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:19 vm07 bash[23367]: audit 2026-03-10T10:26:18.939228+0000 mon.a (mon.0) 2995 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-109-cache","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:21 vm04 bash[28289]: audit 2026-03-10T10:26:19.919308+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:21 vm04 bash[28289]: cluster 2026-03-10T10:26:19.928218+0000 mon.a (mon.0) 2997 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in
2026-03-10T10:26:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:21 vm04 bash[28289]: audit 2026-03-10T10:26:19.945997+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch
2026-03-10T10:26:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:21 vm04 bash[28289]: audit 2026-03-10T10:26:19.951510+0000 mon.a (mon.0) 2998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch
2026-03-10T10:26:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:21 vm04 bash[20742]: audit 2026-03-10T10:26:19.919308+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:21 vm04 bash[20742]: cluster 2026-03-10T10:26:19.928218+0000 mon.a (mon.0) 2997 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in
2026-03-10T10:26:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:21 vm04 bash[20742]: audit 2026-03-10T10:26:19.945997+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch
2026-03-10T10:26:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:21 vm04 bash[20742]: audit 2026-03-10T10:26:19.951510+0000 mon.a (mon.0) 2998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch
2026-03-10T10:26:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:21 vm07 bash[23367]: audit 2026-03-10T10:26:19.919308+0000 mon.a (mon.0) 2996 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-109-cache","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:21 vm07 bash[23367]: cluster 2026-03-10T10:26:19.928218+0000 mon.a (mon.0) 2997 : cluster [DBG] osdmap e533: 8 total, 8 up, 8 in
2026-03-10T10:26:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:21 vm07 bash[23367]: audit 2026-03-10T10:26:19.945997+0000 mon.c (mon.2) 466 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch
2026-03-10T10:26:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:21 vm07 bash[23367]: audit 2026-03-10T10:26:19.951510+0000 mon.a (mon.0) 2998 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch
2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:22 vm04 bash[28289]: cluster 2026-03-10T10:26:20.516656+0000 mgr.y (mgr.24422) 488 : cluster [DBG] pgmap v821: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:22 vm04 bash[28289]: audit 2026-03-10T10:26:21.030999+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished
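The application-enable and tier-add records above, together with the set-overlay dispatches in the entries just below, are the test stacking its cache pool onto the new erasure-coded base pool. Sketched as CLI calls with the parameters from the logged mon commands:

    # tag the cache pool with an application so POOL_APP_NOT_ENABLED can clear for it
    $ ceph osd pool application enable test-rados-api-vm04-59491-109-cache rados --yes-i-really-mean-it
    # attach the cache pool as a tier of the EC base pool
    $ ceph osd tier add test-rados-api-vm04-59491-109 test-rados-api-vm04-59491-109-cache
    # route client I/O through the cache tier
    $ ceph osd tier set-overlay test-rados-api-vm04-59491-109 test-rados-api-vm04-59491-109-cache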
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:22 vm04 bash[28289]: cluster 2026-03-10T10:26:21.035577+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:22 vm04 bash[28289]: cluster 2026-03-10T10:26:21.035577+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:22 vm04 bash[28289]: audit 2026-03-10T10:26:21.043110+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:22 vm04 bash[28289]: audit 2026-03-10T10:26:21.043110+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:22 vm04 bash[28289]: audit 2026-03-10T10:26:21.043284+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:22 vm04 bash[28289]: audit 2026-03-10T10:26:21.043284+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:22 vm04 bash[20742]: cluster 2026-03-10T10:26:20.516656+0000 mgr.y (mgr.24422) 488 : cluster [DBG] pgmap v821: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:22 vm04 bash[20742]: cluster 2026-03-10T10:26:20.516656+0000 mgr.y (mgr.24422) 488 : cluster [DBG] pgmap v821: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:22 vm04 bash[20742]: audit 2026-03-10T10:26:21.030999+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:22 vm04 bash[20742]: audit 2026-03-10T10:26:21.030999+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:22 vm04 bash[20742]: cluster 2026-03-10T10:26:21.035577+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:22 vm04 bash[20742]: cluster 2026-03-10T10:26:21.035577+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:22 vm04 bash[20742]: audit 2026-03-10T10:26:21.043110+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:22 vm04 bash[20742]: audit 2026-03-10T10:26:21.043110+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:22 vm04 bash[20742]: audit 2026-03-10T10:26:21.043284+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:22 vm04 bash[20742]: audit 2026-03-10T10:26:21.043284+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:22 vm07 bash[23367]: cluster 2026-03-10T10:26:20.516656+0000 mgr.y (mgr.24422) 488 : cluster [DBG] pgmap v821: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:22 vm07 bash[23367]: cluster 2026-03-10T10:26:20.516656+0000 mgr.y (mgr.24422) 488 : cluster [DBG] pgmap v821: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:22 vm07 bash[23367]: audit 2026-03-10T10:26:21.030999+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:22 vm07 bash[23367]: audit 2026-03-10T10:26:21.030999+0000 mon.a (mon.0) 2999 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:22 vm07 bash[23367]: cluster 2026-03-10T10:26:21.035577+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-10T10:26:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:22 vm07 bash[23367]: cluster 2026-03-10T10:26:21.035577+0000 mon.a (mon.0) 3000 : cluster [DBG] osdmap e534: 8 total, 8 up, 8 in 2026-03-10T10:26:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:22 vm07 bash[23367]: audit 2026-03-10T10:26:21.043110+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:22 vm07 bash[23367]: audit 2026-03-10T10:26:21.043110+0000 mon.c (mon.2) 467 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:22 vm07 bash[23367]: audit 2026-03-10T10:26:21.043284+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:22 vm07 bash[23367]: audit 2026-03-10T10:26:21.043284+0000 mon.a (mon.0) 3001 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:23 vm04 bash[28289]: audit 2026-03-10T10:26:22.034564+0000 mon.a (mon.0) 3002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:23 vm04 bash[28289]: audit 2026-03-10T10:26:22.034564+0000 mon.a (mon.0) 3002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:23 vm04 bash[28289]: cluster 2026-03-10T10:26:22.037322+0000 mon.a (mon.0) 3003 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-10T10:26:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:23 vm04 bash[28289]: cluster 2026-03-10T10:26:22.037322+0000 mon.a (mon.0) 3003 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-10T10:26:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:23 vm04 bash[28289]: audit 2026-03-10T10:26:22.041760+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 
2026-03-10T10:26:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:23 vm04 bash[28289]: audit 2026-03-10T10:26:22.050367+0000 mon.a (mon.0) 3004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]: dispatch
2026-03-10T10:26:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:23 vm04 bash[20742]: audit 2026-03-10T10:26:22.034564+0000 mon.a (mon.0) 3002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]': finished
2026-03-10T10:26:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:23 vm04 bash[20742]: cluster 2026-03-10T10:26:22.037322+0000 mon.a (mon.0) 3003 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in
2026-03-10T10:26:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:23 vm04 bash[20742]: audit 2026-03-10T10:26:22.041760+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]: dispatch
2026-03-10T10:26:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:23 vm04 bash[20742]: audit 2026-03-10T10:26:22.050367+0000 mon.a (mon.0) 3004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T10:26:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:26:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:26:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:26:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:23 vm07 bash[23367]: audit 2026-03-10T10:26:22.034564+0000 mon.a (mon.0) 3002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:23 vm07 bash[23367]: audit 2026-03-10T10:26:22.034564+0000 mon.a (mon.0) 3002 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-109", "overlaypool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:23 vm07 bash[23367]: cluster 2026-03-10T10:26:22.037322+0000 mon.a (mon.0) 3003 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-10T10:26:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:23 vm07 bash[23367]: cluster 2026-03-10T10:26:22.037322+0000 mon.a (mon.0) 3003 : cluster [DBG] osdmap e535: 8 total, 8 up, 8 in 2026-03-10T10:26:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:23 vm07 bash[23367]: audit 2026-03-10T10:26:22.041760+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T10:26:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:23 vm07 bash[23367]: audit 2026-03-10T10:26:22.041760+0000 mon.c (mon.2) 468 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T10:26:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:23 vm07 bash[23367]: audit 2026-03-10T10:26:22.050367+0000 mon.a (mon.0) 3004 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T10:26:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:23 vm07 bash[23367]: audit 2026-03-10T10:26:22.050367+0000 mon.a (mon.0) 3004 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]: dispatch 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: cluster 2026-03-10T10:26:22.517026+0000 mgr.y (mgr.24422) 489 : cluster [DBG] pgmap v824: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: cluster 2026-03-10T10:26:22.517026+0000 mgr.y (mgr.24422) 489 : cluster [DBG] pgmap v824: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: cluster 2026-03-10T10:26:23.045234+0000 mon.a (mon.0) 3005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: cluster 2026-03-10T10:26:23.045234+0000 mon.a (mon.0) 3005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: audit 2026-03-10T10:26:23.053221+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]': finished 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: audit 2026-03-10T10:26:23.053221+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]': finished 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: cluster 2026-03-10T10:26:23.057820+0000 mon.a (mon.0) 3007 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: cluster 2026-03-10T10:26:23.057820+0000 mon.a (mon.0) 3007 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: audit 2026-03-10T10:26:23.061897+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: audit 2026-03-10T10:26:23.061897+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: audit 2026-03-10T10:26:23.062136+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:24 vm04 bash[28289]: audit 2026-03-10T10:26:23.062136+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: cluster 2026-03-10T10:26:22.517026+0000 mgr.y (mgr.24422) 489 : cluster [DBG] pgmap v824: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: cluster 2026-03-10T10:26:22.517026+0000 mgr.y (mgr.24422) 489 : cluster [DBG] pgmap v824: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: cluster 2026-03-10T10:26:23.045234+0000 mon.a (mon.0) 3005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: cluster 2026-03-10T10:26:23.045234+0000 mon.a (mon.0) 3005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: audit 2026-03-10T10:26:23.053221+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]': finished 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: audit 2026-03-10T10:26:23.053221+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]': finished 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: cluster 2026-03-10T10:26:23.057820+0000 mon.a (mon.0) 3007 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: cluster 2026-03-10T10:26:23.057820+0000 mon.a (mon.0) 3007 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: audit 2026-03-10T10:26:23.061897+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: audit 2026-03-10T10:26:23.061897+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 
2026-03-10T10:26:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:24 vm04 bash[20742]: audit 2026-03-10T10:26:23.062136+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T10:26:24.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:24 vm07 bash[23367]: cluster 2026-03-10T10:26:22.517026+0000 mgr.y (mgr.24422) 489 : cluster [DBG] pgmap v824: 276 pgs: 32 unknown, 8 creating+peering, 236 active+clean; 455 KiB data, 961 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:24.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:24 vm07 bash[23367]: cluster 2026-03-10T10:26:23.045234+0000 mon.a (mon.0) 3005 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:26:24.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:24 vm07 bash[23367]: audit 2026-03-10T10:26:23.053221+0000 mon.a (mon.0) 3006 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-109-cache", "mode": "writeback"}]': finished
2026-03-10T10:26:24.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:24 vm07 bash[23367]: cluster 2026-03-10T10:26:23.057820+0000 mon.a (mon.0) 3007 : cluster [DBG] osdmap e536: 8 total, 8 up, 8 in
2026-03-10T10:26:24.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:24 vm07 bash[23367]: audit 2026-03-10T10:26:23.061897+0000 mon.c (mon.2) 469 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T10:26:24.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:24 vm07 bash[23367]: audit 2026-03-10T10:26:23.062136+0000 mon.a (mon.0) 3008 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:25 vm04 bash[28289]: audit 2026-03-10T10:26:24.064141+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]': finished
2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:25 vm04 bash[28289]: audit 2026-03-10T10:26:24.067059+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:25 vm04 bash[28289]: cluster 2026-03-10T10:26:24.067582+0000 mon.a (mon.0) 3010 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in
2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:25 vm04 bash[28289]: audit 2026-03-10T10:26:24.068249+0000 mon.a (mon.0) 3011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]: dispatch
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:25 vm04 bash[20742]: audit 2026-03-10T10:26:24.064141+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:25 vm04 bash[20742]: audit 2026-03-10T10:26:24.064141+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:25 vm04 bash[20742]: audit 2026-03-10T10:26:24.067059+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:25 vm04 bash[20742]: audit 2026-03-10T10:26:24.067059+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:25 vm04 bash[20742]: cluster 2026-03-10T10:26:24.067582+0000 mon.a (mon.0) 3010 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:25 vm04 bash[20742]: cluster 2026-03-10T10:26:24.067582+0000 mon.a (mon.0) 3010 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in 2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:25 vm04 bash[20742]: audit 2026-03-10T10:26:24.068249+0000 mon.a (mon.0) 3011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:26:25.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:25 vm04 bash[20742]: audit 2026-03-10T10:26:24.068249+0000 mon.a (mon.0) 3011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:26:25.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:25 vm07 bash[23367]: audit 2026-03-10T10:26:24.064141+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:26:25.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:25 vm07 bash[23367]: audit 2026-03-10T10:26:24.064141+0000 mon.a (mon.0) 3009 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:26:25.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:25 vm07 bash[23367]: audit 2026-03-10T10:26:24.067059+0000 mon.c (mon.2) 470 : audit [INF] from='client.? 
2026-03-10T10:26:25.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:25 vm07 bash[23367]: cluster 2026-03-10T10:26:24.067582+0000 mon.a (mon.0) 3010 : cluster [DBG] osdmap e537: 8 total, 8 up, 8 in
2026-03-10T10:26:25.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:25 vm07 bash[23367]: audit 2026-03-10T10:26:24.068249+0000 mon.a (mon.0) 3011 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:26 vm04 bash[28289]: cluster 2026-03-10T10:26:24.517378+0000 mgr.y (mgr.24422) 490 : cluster [DBG] pgmap v827: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:26 vm04 bash[28289]: audit 2026-03-10T10:26:25.067217+0000 mon.a (mon.0) 3012 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]': finished
2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:26 vm04 bash[28289]: cluster 2026-03-10T10:26:25.070039+0000 mon.a (mon.0) 3013 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in
2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:26 vm04 bash[28289]: audit 2026-03-10T10:26:25.070770+0000 mon.c (mon.2) 471 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:26 vm04 bash[28289]: audit 2026-03-10T10:26:25.072229+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:26 vm04 bash[28289]: cluster 2026-03-10T10:26:26.067473+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:26 vm04 bash[28289]: audit 2026-03-10T10:26:26.070507+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:26 vm04 bash[28289]: cluster 2026-03-10T10:26:26.073660+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:26 vm04 bash[28289]: cluster 2026-03-10T10:26:26.073660+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: cluster 2026-03-10T10:26:24.517378+0000 mgr.y (mgr.24422) 490 : cluster [DBG] pgmap v827: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: cluster 2026-03-10T10:26:24.517378+0000 mgr.y (mgr.24422) 490 : cluster [DBG] pgmap v827: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: audit 2026-03-10T10:26:25.067217+0000 mon.a (mon.0) 3012 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: audit 2026-03-10T10:26:25.067217+0000 mon.a (mon.0) 3012 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: cluster 2026-03-10T10:26:25.070039+0000 mon.a (mon.0) 3013 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: cluster 2026-03-10T10:26:25.070039+0000 mon.a (mon.0) 3013 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: audit 2026-03-10T10:26:25.070770+0000 mon.c (mon.2) 471 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: audit 2026-03-10T10:26:25.070770+0000 mon.c (mon.2) 471 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: audit 2026-03-10T10:26:25.072229+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: audit 2026-03-10T10:26:25.072229+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: cluster 2026-03-10T10:26:26.067473+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:26:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: cluster 2026-03-10T10:26:26.067473+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:26:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: audit 2026-03-10T10:26:26.070507+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:26:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: audit 2026-03-10T10:26:26.070507+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:26:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: cluster 2026-03-10T10:26:26.073660+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-10T10:26:26.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:26 vm04 bash[20742]: cluster 2026-03-10T10:26:26.073660+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: cluster 2026-03-10T10:26:24.517378+0000 mgr.y (mgr.24422) 490 : cluster [DBG] pgmap v827: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: cluster 2026-03-10T10:26:24.517378+0000 mgr.y (mgr.24422) 490 : cluster [DBG] pgmap v827: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: audit 2026-03-10T10:26:25.067217+0000 mon.a (mon.0) 3012 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: audit 2026-03-10T10:26:25.067217+0000 mon.a (mon.0) 3012 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: cluster 2026-03-10T10:26:25.070039+0000 mon.a (mon.0) 3013 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: cluster 2026-03-10T10:26:25.070039+0000 mon.a (mon.0) 3013 : cluster [DBG] osdmap e538: 8 total, 8 up, 8 in 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: audit 2026-03-10T10:26:25.070770+0000 mon.c (mon.2) 471 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: audit 2026-03-10T10:26:25.070770+0000 mon.c (mon.2) 471 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: audit 2026-03-10T10:26:25.072229+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: audit 2026-03-10T10:26:25.072229+0000 mon.a (mon.0) 3014 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: cluster 2026-03-10T10:26:26.067473+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: cluster 2026-03-10T10:26:26.067473+0000 mon.a (mon.0) 3015 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: audit 2026-03-10T10:26:26.070507+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: audit 2026-03-10T10:26:26.070507+0000 mon.a (mon.0) 3016 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: cluster 2026-03-10T10:26:26.073660+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-10T10:26:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:26 vm07 bash[23367]: cluster 2026-03-10T10:26:26.073660+0000 mon.a (mon.0) 3017 : cluster [DBG] osdmap e539: 8 total, 8 up, 8 in 2026-03-10T10:26:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:27 vm04 bash[28289]: audit 2026-03-10T10:26:26.077761+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:27 vm04 bash[28289]: audit 2026-03-10T10:26:26.077761+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:27 vm04 bash[28289]: audit 2026-03-10T10:26:26.086475+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:27 vm04 bash[28289]: audit 2026-03-10T10:26:26.086475+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:27 vm04 bash[20742]: audit 2026-03-10T10:26:26.077761+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:27 vm04 bash[20742]: audit 2026-03-10T10:26:26.077761+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:27 vm04 bash[20742]: audit 2026-03-10T10:26:26.086475+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:27 vm04 bash[20742]: audit 2026-03-10T10:26:26.086475+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:27 vm07 bash[23367]: audit 2026-03-10T10:26:26.077761+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:27 vm07 bash[23367]: audit 2026-03-10T10:26:26.077761+0000 mon.c (mon.2) 472 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:27 vm07 bash[23367]: audit 2026-03-10T10:26:26.086475+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:27.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:27 vm07 bash[23367]: audit 2026-03-10T10:26:26.086475+0000 mon.a (mon.0) 3018 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]: dispatch 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:28 vm04 bash[28289]: cluster 2026-03-10T10:26:26.517802+0000 mgr.y (mgr.24422) 491 : cluster [DBG] pgmap v830: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:28 vm04 bash[28289]: cluster 2026-03-10T10:26:26.517802+0000 mgr.y (mgr.24422) 491 : cluster [DBG] pgmap v830: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:28 vm04 bash[28289]: audit 2026-03-10T10:26:27.082178+0000 mon.a (mon.0) 3019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:28 vm04 bash[28289]: audit 2026-03-10T10:26:27.082178+0000 mon.a (mon.0) 3019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:28 vm04 bash[28289]: cluster 2026-03-10T10:26:27.085663+0000 mon.a (mon.0) 3020 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:28 vm04 bash[28289]: cluster 2026-03-10T10:26:27.085663+0000 mon.a (mon.0) 3020 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:28 vm04 bash[28289]: audit 2026-03-10T10:26:27.130536+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 
2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:28 vm04 bash[28289]: audit 2026-03-10T10:26:27.130915+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:28 vm04 bash[28289]: audit 2026-03-10T10:26:27.161584+0000 mon.a (mon.0) 3022 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:28 vm04 bash[28289]: audit 2026-03-10T10:26:28.063741+0000 mon.a (mon.0) 3023 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: cluster 2026-03-10T10:26:26.517802+0000 mgr.y (mgr.24422) 491 : cluster [DBG] pgmap v830: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: audit 2026-03-10T10:26:27.082178+0000 mon.a (mon.0) 3019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished
' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: cluster 2026-03-10T10:26:27.085663+0000 mon.a (mon.0) 3020 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: cluster 2026-03-10T10:26:27.085663+0000 mon.a (mon.0) 3020 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: audit 2026-03-10T10:26:27.130536+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]: dispatch 2026-03-10T10:26:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: audit 2026-03-10T10:26:27.130536+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]: dispatch 2026-03-10T10:26:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: audit 2026-03-10T10:26:27.130915+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]: dispatch 2026-03-10T10:26:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: audit 2026-03-10T10:26:27.130915+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]: dispatch 2026-03-10T10:26:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: audit 2026-03-10T10:26:27.161584+0000 mon.a (mon.0) 3022 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:26:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: audit 2026-03-10T10:26:27.161584+0000 mon.a (mon.0) 3022 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:26:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: audit 2026-03-10T10:26:28.063741+0000 mon.a (mon.0) 3023 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:28.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:28 vm04 bash[20742]: audit 2026-03-10T10:26:28.063741+0000 mon.a (mon.0) 3023 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: cluster 2026-03-10T10:26:26.517802+0000 mgr.y (mgr.24422) 491 : cluster [DBG] pgmap v830: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: cluster 2026-03-10T10:26:26.517802+0000 mgr.y (mgr.24422) 491 : cluster [DBG] pgmap v830: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: audit 2026-03-10T10:26:27.082178+0000 mon.a (mon.0) 3019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: audit 2026-03-10T10:26:27.082178+0000 mon.a (mon.0) 3019 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-109-cache","var": "min_read_recency_for_promote","val": "4"}]': finished 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: cluster 2026-03-10T10:26:27.085663+0000 mon.a (mon.0) 3020 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: cluster 2026-03-10T10:26:27.085663+0000 mon.a (mon.0) 3020 : cluster [DBG] osdmap e540: 8 total, 8 up, 8 in 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: audit 2026-03-10T10:26:27.130536+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]: dispatch 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: audit 2026-03-10T10:26:27.130536+0000 mon.c (mon.2) 473 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]: dispatch 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: audit 2026-03-10T10:26:27.130915+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]: dispatch 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: audit 2026-03-10T10:26:27.130915+0000 mon.a (mon.0) 3021 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]: dispatch 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: audit 2026-03-10T10:26:27.161584+0000 mon.a (mon.0) 3022 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: audit 2026-03-10T10:26:27.161584+0000 mon.a (mon.0) 3022 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: audit 2026-03-10T10:26:28.063741+0000 mon.a (mon.0) 3023 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:28 vm07 bash[23367]: audit 2026-03-10T10:26:28.063741+0000 mon.a (mon.0) 3023 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:26:29.015 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:26:28 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: audit 2026-03-10T10:26:28.095634+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]': finished 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: audit 2026-03-10T10:26:28.095634+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]': finished 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: audit 2026-03-10T10:26:28.099047+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: audit 2026-03-10T10:26:28.099047+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: cluster 2026-03-10T10:26:28.099391+0000 mon.a (mon.0) 3025 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: cluster 2026-03-10T10:26:28.099391+0000 mon.a (mon.0) 3025 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: audit 2026-03-10T10:26:28.100867+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: audit 2026-03-10T10:26:28.100867+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: audit 2026-03-10T10:26:29.099697+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: audit 2026-03-10T10:26:29.099697+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: cluster 2026-03-10T10:26:29.102738+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:29 vm04 bash[28289]: cluster 2026-03-10T10:26:29.102738+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: audit 2026-03-10T10:26:28.095634+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]': finished 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: audit 2026-03-10T10:26:28.095634+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]': finished 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: audit 2026-03-10T10:26:28.099047+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: audit 2026-03-10T10:26:28.099047+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 
192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: cluster 2026-03-10T10:26:28.099391+0000 mon.a (mon.0) 3025 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: cluster 2026-03-10T10:26:28.099391+0000 mon.a (mon.0) 3025 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: audit 2026-03-10T10:26:28.100867+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: audit 2026-03-10T10:26:28.100867+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: audit 2026-03-10T10:26:29.099697+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: audit 2026-03-10T10:26:29.099697+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: cluster 2026-03-10T10:26:29.102738+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T10:26:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:29 vm04 bash[20742]: cluster 2026-03-10T10:26:29.102738+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: audit 2026-03-10T10:26:28.095634+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]': finished 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: audit 2026-03-10T10:26:28.095634+0000 mon.a (mon.0) 3024 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-109"}]': finished 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: audit 2026-03-10T10:26:28.099047+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 
192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: audit 2026-03-10T10:26:28.099047+0000 mon.c (mon.2) 474 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: cluster 2026-03-10T10:26:28.099391+0000 mon.a (mon.0) 3025 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: cluster 2026-03-10T10:26:28.099391+0000 mon.a (mon.0) 3025 : cluster [DBG] osdmap e541: 8 total, 8 up, 8 in 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: audit 2026-03-10T10:26:28.100867+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: audit 2026-03-10T10:26:28.100867+0000 mon.a (mon.0) 3026 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]: dispatch 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: audit 2026-03-10T10:26:29.099697+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: audit 2026-03-10T10:26:29.099697+0000 mon.a (mon.0) 3027 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-109", "tierpool": "test-rados-api-vm04-59491-109-cache"}]': finished 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: cluster 2026-03-10T10:26:29.102738+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T10:26:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:29 vm07 bash[23367]: cluster 2026-03-10T10:26:29.102738+0000 mon.a (mon.0) 3028 : cluster [DBG] osdmap e542: 8 total, 8 up, 8 in 2026-03-10T10:26:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:30 vm04 bash[28289]: cluster 2026-03-10T10:26:28.518393+0000 mgr.y (mgr.24422) 492 : cluster [DBG] pgmap v833: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:26:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:30 vm04 bash[28289]: cluster 2026-03-10T10:26:28.518393+0000 mgr.y (mgr.24422) 492 : cluster [DBG] pgmap v833: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:26:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:30 vm04 bash[28289]: audit 2026-03-10T10:26:28.732550+0000 mgr.y (mgr.24422) 493 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:26:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:30 vm04 bash[28289]: audit 2026-03-10T10:26:28.732550+0000 mgr.y (mgr.24422) 493 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:26:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:30 vm04 bash[20742]: cluster 2026-03-10T10:26:28.518393+0000 mgr.y (mgr.24422) 492 : cluster [DBG] pgmap v833: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:26:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:30 vm04 bash[20742]: cluster 2026-03-10T10:26:28.518393+0000 mgr.y (mgr.24422) 492 : cluster [DBG] pgmap v833: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:26:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:30 vm04 bash[20742]: audit 2026-03-10T10:26:28.732550+0000 mgr.y (mgr.24422) 493 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:26:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:30 vm04 bash[20742]: audit 2026-03-10T10:26:28.732550+0000 mgr.y (mgr.24422) 493 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:26:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:30 vm07 bash[23367]: cluster 2026-03-10T10:26:28.518393+0000 mgr.y (mgr.24422) 492 : cluster [DBG] pgmap v833: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:26:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:30 vm07 bash[23367]: cluster 2026-03-10T10:26:28.518393+0000 mgr.y (mgr.24422) 492 : cluster [DBG] pgmap v833: 276 pgs: 276 active+clean; 455 KiB data, 979 MiB used, 159 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T10:26:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:30 vm07 bash[23367]: audit 
2026-03-10T10:26:31.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:31 vm04 bash[20742]: cluster 2026-03-10T10:26:30.145252+0000 mon.a (mon.0) 3029 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in
2026-03-10T10:26:31.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:31 vm04 bash[28289]: cluster 2026-03-10T10:26:30.145252+0000 mon.a (mon.0) 3029 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in
2026-03-10T10:26:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:31 vm07 bash[23367]: cluster 2026-03-10T10:26:30.145252+0000 mon.a (mon.0) 3029 : cluster [DBG] osdmap e543: 8 total, 8 up, 8 in
2026-03-10T10:26:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:32 vm04 bash[28289]: cluster 2026-03-10T10:26:30.518734+0000 mgr.y (mgr.24422) 494 : cluster [DBG] pgmap v836: 244 pgs: 244 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:26:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:32 vm04 bash[28289]: cluster 2026-03-10T10:26:31.157520+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in
2026-03-10T10:26:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:32 vm04 bash[28289]: audit 2026-03-10T10:26:31.160199+0000 mon.c (mon.2) 475 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:32 vm04 bash[28289]: audit 2026-03-10T10:26:31.160680+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:32 vm04 bash[28289]: audit 2026-03-10T10:26:32.155773+0000 mon.a (mon.0) 3032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]': finished
2026-03-10T10:26:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:32 vm04 bash[20742]: cluster 2026-03-10T10:26:30.518734+0000 mgr.y (mgr.24422) 494 : cluster [DBG] pgmap v836: 244 pgs: 244 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:26:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:32 vm04 bash[20742]: cluster 2026-03-10T10:26:31.157520+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in
2026-03-10T10:26:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:32 vm04 bash[20742]: audit 2026-03-10T10:26:31.160199+0000 mon.c (mon.2) 475 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:32 vm04 bash[20742]: audit 2026-03-10T10:26:31.160680+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:32 vm04 bash[20742]: audit 2026-03-10T10:26:32.155773+0000 mon.a (mon.0) 3032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]': finished
2026-03-10T10:26:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:32 vm07 bash[23367]: cluster 2026-03-10T10:26:30.518734+0000 mgr.y (mgr.24422) 494 : cluster [DBG] pgmap v836: 244 pgs: 244 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:26:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:32 vm07 bash[23367]: cluster 2026-03-10T10:26:31.157520+0000 mon.a (mon.0) 3030 : cluster [DBG] osdmap e544: 8 total, 8 up, 8 in
2026-03-10T10:26:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:32 vm07 bash[23367]: audit 2026-03-10T10:26:31.160199+0000 mon.c (mon.2) 475 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:32 vm07 bash[23367]: audit 2026-03-10T10:26:31.160680+0000 mon.a (mon.0) 3031 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:32 vm07 bash[23367]: audit 2026-03-10T10:26:32.155773+0000 mon.a (mon.0) 3032 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-109"}]': finished
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:33 vm04 bash[28289]: cluster 2026-03-10T10:26:32.162920+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:33 vm04 bash[28289]: audit 2026-03-10T10:26:32.178500+0000 mon.c (mon.2) 476 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:33 vm04 bash[28289]: audit 2026-03-10T10:26:32.182458+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:33 vm04 bash[28289]: audit 2026-03-10T10:26:32.313012+0000 mon.a (mon.0) 3035 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:33 vm04 bash[28289]: audit 2026-03-10T10:26:32.322615+0000 mon.a (mon.0) 3036 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:33 vm04 bash[28289]: audit 2026-03-10T10:26:32.502618+0000 mon.a (mon.0) 3037 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:33 vm04 bash[28289]: audit 2026-03-10T10:26:32.509378+0000 mon.a (mon.0) 3038 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:33 vm04 bash[28289]: audit 2026-03-10T10:26:32.827841+0000 mon.a (mon.0) 3039 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:33 vm04 bash[28289]: audit 2026-03-10T10:26:32.828474+0000 mon.a (mon.0) 3040 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:33 vm04 bash[28289]: audit 2026-03-10T10:26:32.833870+0000 mon.a (mon.0) 3041 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:33 vm04 bash[20742]: cluster 2026-03-10T10:26:32.162920+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:33 vm04 bash[20742]: audit 2026-03-10T10:26:32.178500+0000 mon.c (mon.2) 476 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:33.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:33 vm04 bash[20742]: audit 2026-03-10T10:26:32.182458+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:33 vm04 bash[20742]: audit 2026-03-10T10:26:32.313012+0000 mon.a (mon.0) 3035 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:33 vm04 bash[20742]: audit 2026-03-10T10:26:32.322615+0000 mon.a (mon.0) 3036 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:33 vm04 bash[20742]: audit 2026-03-10T10:26:32.502618+0000 mon.a (mon.0) 3037 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:33 vm04 bash[20742]: audit 2026-03-10T10:26:32.509378+0000 mon.a (mon.0) 3038 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:33 vm04 bash[20742]: audit 2026-03-10T10:26:32.827841+0000 mon.a (mon.0) 3039 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:26:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:33 vm04 bash[20742]: audit 2026-03-10T10:26:32.828474+0000 mon.a (mon.0) 3040 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:26:33.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:33 vm04 bash[20742]: audit 2026-03-10T10:26:32.833870+0000 mon.a (mon.0) 3041 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.454 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:26:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:26:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:26:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:33 vm07 bash[23367]: cluster 2026-03-10T10:26:32.162920+0000 mon.a (mon.0) 3033 : cluster [DBG] osdmap e545: 8 total, 8 up, 8 in
2026-03-10T10:26:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:33 vm07 bash[23367]: audit 2026-03-10T10:26:32.178500+0000 mon.c (mon.2) 476 : audit [INF] from='client.? 192.168.123.104:0/1596649674' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:33 vm07 bash[23367]: audit 2026-03-10T10:26:32.182458+0000 mon.a (mon.0) 3034 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]: dispatch
2026-03-10T10:26:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:33 vm07 bash[23367]: audit 2026-03-10T10:26:32.313012+0000 mon.a (mon.0) 3035 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:33 vm07 bash[23367]: audit 2026-03-10T10:26:32.322615+0000 mon.a (mon.0) 3036 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:33 vm07 bash[23367]: audit 2026-03-10T10:26:32.502618+0000 mon.a (mon.0) 3037 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:33 vm07 bash[23367]: audit 2026-03-10T10:26:32.509378+0000 mon.a (mon.0) 3038 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:33 vm07 bash[23367]: audit 2026-03-10T10:26:32.827841+0000 mon.a (mon.0) 3039 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:26:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:33 vm07 bash[23367]: audit 2026-03-10T10:26:32.828474+0000 mon.a (mon.0) 3040 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:26:33.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:33 vm07 bash[23367]: audit 2026-03-10T10:26:32.833870+0000 mon.a (mon.0) 3041 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:26:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:34 vm04 bash[28289]: cluster 2026-03-10T10:26:32.519039+0000 mgr.y (mgr.24422) 495 : cluster [DBG] pgmap v839: 236 pgs: 236 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:34 vm04 bash[28289]: audit 2026-03-10T10:26:33.170027+0000 mon.a (mon.0) 3042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]': finished
2026-03-10T10:26:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:34 vm04 bash[28289]: cluster 2026-03-10T10:26:33.172561+0000 mon.a (mon.0) 3043 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in
2026-03-10T10:26:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:34 vm04 bash[28289]: cluster 2026-03-10T10:26:33.507956+0000 mon.a (mon.0) 3044 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:34 vm04 bash[20742]: cluster 2026-03-10T10:26:32.519039+0000 mgr.y (mgr.24422) 495 : cluster [DBG] pgmap v839: 236 pgs: 236 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:34 vm04 bash[20742]: audit 2026-03-10T10:26:33.170027+0000 mon.a (mon.0) 3042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]': finished
2026-03-10T10:26:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:34 vm04 bash[20742]: cluster 2026-03-10T10:26:33.172561+0000 mon.a (mon.0) 3043 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in
2026-03-10T10:26:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:34 vm04 bash[20742]: cluster 2026-03-10T10:26:33.507956+0000 mon.a (mon.0) 3044 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:34 vm07 bash[23367]: cluster 2026-03-10T10:26:32.519039+0000 mgr.y (mgr.24422) 495 : cluster [DBG] pgmap v839: 236 pgs: 236 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:34 vm07 bash[23367]: audit 2026-03-10T10:26:33.170027+0000 mon.a (mon.0) 3042 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-109"}]': finished
2026-03-10T10:26:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:34 vm07 bash[23367]: cluster 2026-03-10T10:26:33.172561+0000 mon.a (mon.0) 3043 : cluster [DBG] osdmap e546: 8 total, 8 up, 8 in
2026-03-10T10:26:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:34 vm07 bash[23367]: cluster 2026-03-10T10:26:33.507956+0000 mon.a (mon.0) 3044 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:35 vm04 bash[28289]: cluster 2026-03-10T10:26:34.519369+0000 mgr.y (mgr.24422) 496 : cluster [DBG] pgmap v842: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:35 vm04 bash[28289]: cluster 2026-03-10T10:26:34.519369+0000 mgr.y (mgr.24422) 496 : cluster [DBG] pgmap v842: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:35 vm04 bash[20742]: cluster 2026-03-10T10:26:34.379465+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-10T10:26:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:35 vm04 bash[20742]: cluster 2026-03-10T10:26:34.379465+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-10T10:26:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:35 vm04 bash[20742]: audit 2026-03-10T10:26:34.383365+0000 mon.c (mon.2) 477 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:35 vm04 bash[20742]: audit 2026-03-10T10:26:34.383365+0000 mon.c (mon.2) 477 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:35 vm04 bash[20742]: audit 2026-03-10T10:26:34.383557+0000 mon.a (mon.0) 3046 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:35 vm04 bash[20742]: audit 2026-03-10T10:26:34.383557+0000 mon.a (mon.0) 3046 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:35 vm04 bash[20742]: cluster 2026-03-10T10:26:34.519369+0000 mgr.y (mgr.24422) 496 : cluster [DBG] pgmap v842: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:35 vm04 bash[20742]: cluster 2026-03-10T10:26:34.519369+0000 mgr.y (mgr.24422) 496 : cluster [DBG] pgmap v842: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:35 vm07 bash[23367]: cluster 2026-03-10T10:26:34.379465+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-10T10:26:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:35 vm07 bash[23367]: cluster 2026-03-10T10:26:34.379465+0000 mon.a (mon.0) 3045 : cluster [DBG] osdmap e547: 8 total, 8 up, 8 in 2026-03-10T10:26:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:35 vm07 bash[23367]: audit 2026-03-10T10:26:34.383365+0000 mon.c (mon.2) 477 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:35 vm07 bash[23367]: audit 2026-03-10T10:26:34.383365+0000 mon.c (mon.2) 477 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:35 vm07 bash[23367]: audit 2026-03-10T10:26:34.383557+0000 mon.a (mon.0) 3046 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:35 vm07 bash[23367]: audit 2026-03-10T10:26:34.383557+0000 mon.a (mon.0) 3046 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:35 vm07 bash[23367]: cluster 2026-03-10T10:26:34.519369+0000 mgr.y (mgr.24422) 496 : cluster [DBG] pgmap v842: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:35.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:35 vm07 bash[23367]: cluster 2026-03-10T10:26:34.519369+0000 mgr.y (mgr.24422) 496 : cluster [DBG] pgmap v842: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:36 vm04 bash[28289]: audit 2026-03-10T10:26:35.437293+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:36 vm04 bash[28289]: audit 2026-03-10T10:26:35.437293+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:36 vm04 bash[28289]: audit 2026-03-10T10:26:35.444844+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:36 vm04 bash[28289]: audit 2026-03-10T10:26:35.444844+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:36 vm04 bash[28289]: cluster 2026-03-10T10:26:35.448699+0000 mon.a (mon.0) 3048 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:36 vm04 bash[28289]: cluster 2026-03-10T10:26:35.448699+0000 mon.a (mon.0) 3048 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:36 vm04 bash[28289]: audit 2026-03-10T10:26:35.449433+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:36 vm04 bash[28289]: audit 2026-03-10T10:26:35.449433+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:36 vm04 bash[20742]: audit 2026-03-10T10:26:35.437293+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:36 vm04 bash[20742]: audit 2026-03-10T10:26:35.437293+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:36 vm04 bash[20742]: audit 2026-03-10T10:26:35.444844+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:36 vm04 bash[20742]: audit 2026-03-10T10:26:35.444844+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 
192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:36 vm04 bash[20742]: cluster 2026-03-10T10:26:35.448699+0000 mon.a (mon.0) 3048 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:36 vm04 bash[20742]: cluster 2026-03-10T10:26:35.448699+0000 mon.a (mon.0) 3048 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:36 vm04 bash[20742]: audit 2026-03-10T10:26:35.449433+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:36 vm04 bash[20742]: audit 2026-03-10T10:26:35.449433+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:36 vm07 bash[23367]: audit 2026-03-10T10:26:35.437293+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:36.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:36 vm07 bash[23367]: audit 2026-03-10T10:26:35.437293+0000 mon.a (mon.0) 3047 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:36.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:36 vm07 bash[23367]: audit 2026-03-10T10:26:35.444844+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:36 vm07 bash[23367]: audit 2026-03-10T10:26:35.444844+0000 mon.c (mon.2) 478 : audit [INF] from='client.? 192.168.123.104:0/3322610715' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:36 vm07 bash[23367]: cluster 2026-03-10T10:26:35.448699+0000 mon.a (mon.0) 3048 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T10:26:36.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:36 vm07 bash[23367]: cluster 2026-03-10T10:26:35.448699+0000 mon.a (mon.0) 3048 : cluster [DBG] osdmap e548: 8 total, 8 up, 8 in 2026-03-10T10:26:36.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:36 vm07 bash[23367]: audit 2026-03-10T10:26:35.449433+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:36.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:36 vm07 bash[23367]: audit 2026-03-10T10:26:35.449433+0000 mon.a (mon.0) 3049 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]: dispatch 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: audit 2026-03-10T10:26:36.441611+0000 mon.a (mon.0) 3050 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: audit 2026-03-10T10:26:36.441611+0000 mon.a (mon.0) 3050 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: cluster 2026-03-10T10:26:36.444717+0000 mon.a (mon.0) 3051 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: cluster 2026-03-10T10:26:36.444717+0000 mon.a (mon.0) 3051 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: audit 2026-03-10T10:26:36.473857+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: audit 2026-03-10T10:26:36.473857+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: audit 2026-03-10T10:26:36.475870+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: audit 2026-03-10T10:26:36.475870+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: audit 2026-03-10T10:26:36.476153+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: audit 2026-03-10T10:26:36.476153+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: cluster 2026-03-10T10:26:36.519728+0000 mgr.y (mgr.24422) 497 : cluster [DBG] pgmap v845: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:37.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:37 vm07 bash[23367]: cluster 2026-03-10T10:26:36.519728+0000 mgr.y (mgr.24422) 497 : cluster [DBG] pgmap v845: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: audit 2026-03-10T10:26:36.441611+0000 mon.a (mon.0) 3050 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: audit 2026-03-10T10:26:36.441611+0000 mon.a (mon.0) 3050 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: cluster 2026-03-10T10:26:36.444717+0000 mon.a (mon.0) 3051 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: cluster 2026-03-10T10:26:36.444717+0000 mon.a (mon.0) 3051 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: audit 2026-03-10T10:26:36.473857+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: audit 2026-03-10T10:26:36.473857+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: audit 2026-03-10T10:26:36.475870+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: audit 2026-03-10T10:26:36.475870+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: audit 2026-03-10T10:26:36.476153+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: audit 2026-03-10T10:26:36.476153+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: cluster 2026-03-10T10:26:36.519728+0000 mgr.y (mgr.24422) 497 : cluster [DBG] pgmap v845: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:37 vm04 bash[28289]: cluster 2026-03-10T10:26:36.519728+0000 mgr.y (mgr.24422) 497 : cluster [DBG] pgmap v845: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: audit 2026-03-10T10:26:36.441611+0000 mon.a (mon.0) 3050 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: audit 2026-03-10T10:26:36.441611+0000 mon.a (mon.0) 3050 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"LibRadosTierECPP_vm04-59491-104"}]': finished 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: cluster 2026-03-10T10:26:36.444717+0000 mon.a (mon.0) 3051 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: cluster 2026-03-10T10:26:36.444717+0000 mon.a (mon.0) 3051 : cluster [DBG] osdmap e549: 8 total, 8 up, 8 in 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: audit 2026-03-10T10:26:36.473857+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: audit 2026-03-10T10:26:36.473857+0000 mon.a (mon.0) 3052 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: audit 2026-03-10T10:26:36.475870+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: audit 2026-03-10T10:26:36.475870+0000 mon.a (mon.0) 3053 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: audit 2026-03-10T10:26:36.476153+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:26:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: audit 2026-03-10T10:26:36.476153+0000 mon.a (mon.0) 3054 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-10T10:26:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: cluster 2026-03-10T10:26:36.519728+0000 mgr.y (mgr.24422) 497 : cluster [DBG] pgmap v845: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:37.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:37 vm04 bash[20742]: cluster 2026-03-10T10:26:36.519728+0000 mgr.y (mgr.24422) 497 : cluster [DBG] pgmap v845: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 0 op/s 2026-03-10T10:26:38.740 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:38 vm07 bash[23367]: audit 2026-03-10T10:26:37.462687+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:26:38.740 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:38 vm07 bash[23367]: audit 2026-03-10T10:26:37.462687+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:26:38.740 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:38 vm07 bash[23367]: cluster 2026-03-10T10:26:37.470957+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T10:26:38.740 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:38 vm07 bash[23367]: cluster 2026-03-10T10:26:37.470957+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T10:26:38.740 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:38 vm07 bash[23367]: audit 2026-03-10T10:26:37.472360+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:38.740 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:38 vm07 bash[23367]: audit 2026-03-10T10:26:37.472360+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:38 vm04 bash[28289]: audit 2026-03-10T10:26:37.462687+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:38 vm04 bash[28289]: audit 2026-03-10T10:26:37.462687+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:38 vm04 bash[28289]: cluster 2026-03-10T10:26:37.470957+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:38 vm04 bash[28289]: cluster 2026-03-10T10:26:37.470957+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:38 vm04 bash[28289]: audit 2026-03-10T10:26:37.472360+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:38 vm04 bash[28289]: audit 2026-03-10T10:26:37.472360+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:38 vm04 bash[20742]: audit 2026-03-10T10:26:37.462687+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:38 vm04 bash[20742]: audit 2026-03-10T10:26:37.462687+0000 mon.a (mon.0) 3055 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-rados-api-vm04-59491-111", "profile": [ "k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:38 vm04 bash[20742]: cluster 2026-03-10T10:26:37.470957+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:38 vm04 bash[20742]: cluster 2026-03-10T10:26:37.470957+0000 mon.a (mon.0) 3056 : cluster [DBG] osdmap e550: 8 total, 8 up, 8 in 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:38 vm04 bash[20742]: audit 2026-03-10T10:26:37.472360+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:38 vm04 bash[20742]: audit 2026-03-10T10:26:37.472360+0000 mon.a (mon.0) 3057 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:26:39.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:26:38 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:26:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:39 vm07 bash[23367]: cluster 2026-03-10T10:26:38.476414+0000 mon.a (mon.0) 3058 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T10:26:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:39 vm07 bash[23367]: cluster 2026-03-10T10:26:38.476414+0000 mon.a (mon.0) 3058 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T10:26:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:39 vm07 bash[23367]: cluster 2026-03-10T10:26:38.520058+0000 mgr.y (mgr.24422) 498 : cluster [DBG] pgmap v848: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:26:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:39 vm07 bash[23367]: cluster 2026-03-10T10:26:38.520058+0000 mgr.y (mgr.24422) 498 : cluster [DBG] pgmap v848: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:26:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:39 vm07 bash[23367]: audit 2026-03-10T10:26:38.741006+0000 mgr.y (mgr.24422) 499 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:26:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:39 vm07 bash[23367]: audit 2026-03-10T10:26:38.741006+0000 mgr.y (mgr.24422) 499 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:26:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:39 vm07 bash[23367]: audit 2026-03-10T10:26:39.470985+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:26:39.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:39 vm07 bash[23367]: audit 2026-03-10T10:26:39.470985+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:39 vm04 bash[28289]: cluster 2026-03-10T10:26:38.476414+0000 mon.a (mon.0) 3058 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:39 vm04 bash[28289]: cluster 2026-03-10T10:26:38.476414+0000 mon.a (mon.0) 3058 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:39 vm04 bash[28289]: cluster 2026-03-10T10:26:38.520058+0000 mgr.y (mgr.24422) 498 : cluster [DBG] pgmap v848: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:39 vm04 bash[28289]: cluster 2026-03-10T10:26:38.520058+0000 mgr.y (mgr.24422) 498 : cluster [DBG] pgmap v848: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:39 vm04 bash[28289]: audit 2026-03-10T10:26:38.741006+0000 mgr.y (mgr.24422) 499 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:39 vm04 bash[28289]: audit 2026-03-10T10:26:38.741006+0000 mgr.y (mgr.24422) 499 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:39 vm04 bash[28289]: audit 2026-03-10T10:26:39.470985+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:39 vm04 bash[28289]: audit 2026-03-10T10:26:39.470985+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:39 vm04 bash[20742]: cluster 2026-03-10T10:26:38.476414+0000 mon.a (mon.0) 3058 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:39 vm04 bash[20742]: cluster 2026-03-10T10:26:38.476414+0000 mon.a (mon.0) 3058 : cluster [DBG] osdmap e551: 8 total, 8 up, 8 in 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:39 vm04 bash[20742]: cluster 2026-03-10T10:26:38.520058+0000 mgr.y (mgr.24422) 498 : cluster [DBG] pgmap v848: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:39 vm04 bash[20742]: cluster 2026-03-10T10:26:38.520058+0000 mgr.y (mgr.24422) 498 : cluster [DBG] pgmap v848: 228 pgs: 228 active+clean; 455 KiB data, 980 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:39 vm04 bash[20742]: audit 2026-03-10T10:26:38.741006+0000 mgr.y (mgr.24422) 499 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:39 vm04 bash[20742]: audit 2026-03-10T10:26:38.741006+0000 mgr.y (mgr.24422) 499 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:39 vm04 bash[20742]: audit 2026-03-10T10:26:39.470985+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:26:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:39 vm04 bash[20742]: audit 2026-03-10T10:26:39.470985+0000 mon.a (mon.0) 3059 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "test-rados-api-vm04-59491-111", "pool_type":"erasure", "pg_num":8, "pgp_num":8, "erasure_code_profile":"testprofile-test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:26:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:40 vm07 bash[23367]: cluster 2026-03-10T10:26:39.483211+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T10:26:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:40 vm07 bash[23367]: cluster 2026-03-10T10:26:39.483211+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T10:26:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:40 vm07 bash[23367]: cluster 2026-03-10T10:26:40.477819+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T10:26:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:40 vm07 bash[23367]: cluster 2026-03-10T10:26:40.477819+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T10:26:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:40 vm07 bash[23367]: audit 2026-03-10T10:26:40.494277+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:26:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:40 vm07 bash[23367]: audit 2026-03-10T10:26:40.494277+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:40 vm04 bash[28289]: cluster 2026-03-10T10:26:39.483211+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:40 vm04 bash[28289]: cluster 2026-03-10T10:26:39.483211+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:40 vm04 bash[28289]: cluster 2026-03-10T10:26:40.477819+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:40 vm04 bash[28289]: cluster 2026-03-10T10:26:40.477819+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:40 vm04 bash[28289]: audit 2026-03-10T10:26:40.494277+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:40 vm04 bash[28289]: audit 2026-03-10T10:26:40.494277+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:40 vm04 bash[20742]: cluster 2026-03-10T10:26:39.483211+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:40 vm04 bash[20742]: cluster 2026-03-10T10:26:39.483211+0000 mon.a (mon.0) 3060 : cluster [DBG] osdmap e552: 8 total, 8 up, 8 in 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:40 vm04 bash[20742]: cluster 2026-03-10T10:26:40.477819+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:40 vm04 bash[20742]: cluster 2026-03-10T10:26:40.477819+0000 mon.a (mon.0) 3061 : cluster [DBG] osdmap e553: 8 total, 8 up, 8 in 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:40 vm04 bash[20742]: audit 2026-03-10T10:26:40.494277+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:26:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:40 vm04 bash[20742]: audit 2026-03-10T10:26:40.494277+0000 mon.a (mon.0) 3062 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:26:41.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:41 vm07 bash[23367]: cluster 2026-03-10T10:26:40.520348+0000 mgr.y (mgr.24422) 500 : cluster [DBG] pgmap v851: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:41.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:41 vm07 bash[23367]: cluster 2026-03-10T10:26:40.520348+0000 mgr.y (mgr.24422) 500 : cluster [DBG] pgmap v851: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:41.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:41 vm07 bash[23367]: audit 2026-03-10T10:26:41.477460+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:26:41.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:41 vm07 bash[23367]: audit 2026-03-10T10:26:41.477460+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:26:41.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:41 vm07 bash[23367]: cluster 2026-03-10T10:26:41.481477+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T10:26:41.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:41 vm07 bash[23367]: cluster 2026-03-10T10:26:41.481477+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:41 vm04 bash[28289]: cluster 2026-03-10T10:26:40.520348+0000 mgr.y (mgr.24422) 500 : cluster [DBG] pgmap v851: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:41 vm04 bash[28289]: cluster 2026-03-10T10:26:40.520348+0000 mgr.y (mgr.24422) 500 : cluster [DBG] pgmap v851: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:41 vm04 bash[28289]: audit 2026-03-10T10:26:41.477460+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:41 vm04 bash[28289]: audit 2026-03-10T10:26:41.477460+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:41 vm04 bash[28289]: cluster 2026-03-10T10:26:41.481477+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:41 vm04 bash[28289]: cluster 2026-03-10T10:26:41.481477+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:41 vm04 bash[20742]: cluster 2026-03-10T10:26:40.520348+0000 mgr.y (mgr.24422) 500 : cluster [DBG] pgmap v851: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:41 vm04 bash[20742]: cluster 2026-03-10T10:26:40.520348+0000 mgr.y (mgr.24422) 500 : cluster [DBG] pgmap v851: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:41 vm04 bash[20742]: audit 2026-03-10T10:26:41.477460+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:41 vm04 bash[20742]: audit 2026-03-10T10:26:41.477460+0000 mon.a (mon.0) 3063 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-112","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:41 vm04 bash[20742]: cluster 2026-03-10T10:26:41.481477+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T10:26:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:41 vm04 bash[20742]: cluster 2026-03-10T10:26:41.481477+0000 mon.a (mon.0) 3064 : cluster [DBG] osdmap e554: 8 total, 8 up, 8 in 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:42 vm04 bash[28289]: cluster 2026-03-10T10:26:41.501909+0000 mon.a (mon.0) 3065 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:42 vm04 bash[28289]: cluster 2026-03-10T10:26:41.501909+0000 mon.a (mon.0) 3065 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:42 vm04 bash[28289]: audit 2026-03-10T10:26:41.538486+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:42 vm04 bash[28289]: audit 2026-03-10T10:26:41.538486+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:42 vm04 bash[28289]: audit 2026-03-10T10:26:42.481106+0000 mon.a (mon.0) 3067 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:42 vm04 bash[28289]: audit 2026-03-10T10:26:42.481106+0000 mon.a (mon.0) 3067 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:42 vm04 bash[28289]: cluster 2026-03-10T10:26:42.484711+0000 mon.a (mon.0) 3068 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:42 vm04 bash[28289]: cluster 2026-03-10T10:26:42.484711+0000 mon.a (mon.0) 3068 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:42 vm04 bash[28289]: audit 2026-03-10T10:26:42.489396+0000 mon.a (mon.0) 3069 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-112"}]: dispatch 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:42 vm04 bash[28289]: audit 2026-03-10T10:26:42.489396+0000 mon.a (mon.0) 3069 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-112"}]: dispatch 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:42 vm04 bash[20742]: cluster 2026-03-10T10:26:41.501909+0000 mon.a (mon.0) 3065 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:42 vm04 bash[20742]: cluster 2026-03-10T10:26:41.501909+0000 mon.a (mon.0) 3065 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:42 vm04 bash[20742]: audit 2026-03-10T10:26:41.538486+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:42 vm04 bash[20742]: audit 2026-03-10T10:26:41.538486+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:42 vm04 bash[20742]: audit 2026-03-10T10:26:42.481106+0000 mon.a (mon.0) 3067 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:42 vm04 bash[20742]: audit 2026-03-10T10:26:42.481106+0000 mon.a (mon.0) 3067 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:42 vm04 bash[20742]: cluster 2026-03-10T10:26:42.484711+0000 mon.a (mon.0) 3068 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:42 vm04 bash[20742]: cluster 2026-03-10T10:26:42.484711+0000 mon.a (mon.0) 3068 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:42 vm04 bash[20742]: audit 2026-03-10T10:26:42.489396+0000 mon.a (mon.0) 3069 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-112"}]: dispatch 2026-03-10T10:26:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:42 vm04 bash[20742]: audit 2026-03-10T10:26:42.489396+0000 mon.a (mon.0) 3069 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-112"}]: dispatch 2026-03-10T10:26:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:42 vm07 bash[23367]: cluster 2026-03-10T10:26:41.501909+0000 mon.a (mon.0) 3065 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:26:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:42 vm07 bash[23367]: cluster 2026-03-10T10:26:41.501909+0000 mon.a (mon.0) 3065 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:26:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:42 vm07 bash[23367]: audit 2026-03-10T10:26:41.538486+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:26:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:42 vm07 bash[23367]: audit 2026-03-10T10:26:41.538486+0000 mon.a (mon.0) 3066 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:26:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:42 vm07 bash[23367]: audit 2026-03-10T10:26:42.481106+0000 mon.a (mon.0) 3067 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:26:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:42 vm07 bash[23367]: audit 2026-03-10T10:26:42.481106+0000 mon.a (mon.0) 3067 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:26:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:42 vm07 bash[23367]: cluster 2026-03-10T10:26:42.484711+0000 mon.a (mon.0) 3068 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-10T10:26:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:42 vm07 bash[23367]: cluster 2026-03-10T10:26:42.484711+0000 mon.a (mon.0) 3068 : cluster [DBG] osdmap e555: 8 total, 8 up, 8 in 2026-03-10T10:26:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:42 vm07 bash[23367]: audit 2026-03-10T10:26:42.489396+0000 mon.a (mon.0) 3069 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-112"}]: dispatch
2026-03-10T10:26:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:26:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:26:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:26:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:43 vm04 bash[28289]: cluster 2026-03-10T10:26:42.521648+0000 mgr.y (mgr.24422) 501 : cluster [DBG] pgmap v854: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:43 vm04 bash[28289]: audit 2026-03-10T10:26:43.069771+0000 mon.a (mon.0) 3070 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:26:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:43 vm04 bash[28289]: audit 2026-03-10T10:26:43.484338+0000 mon.a (mon.0) 3071 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-112"}]': finished
2026-03-10T10:26:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:43 vm04 bash[28289]: cluster 2026-03-10T10:26:43.487980+0000 mon.a (mon.0) 3072 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in
2026-03-10T10:26:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:43 vm04 bash[20742]: cluster 2026-03-10T10:26:42.521648+0000 mgr.y (mgr.24422) 501 : cluster [DBG] pgmap v854: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:43 vm04 bash[20742]: audit 2026-03-10T10:26:43.069771+0000 mon.a (mon.0) 3070 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:26:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:43 vm04 bash[20742]: audit 2026-03-10T10:26:43.484338+0000 mon.a (mon.0) 3071 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-112"}]': finished
2026-03-10T10:26:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:43 vm04 bash[20742]: cluster 2026-03-10T10:26:43.487980+0000 mon.a (mon.0) 3072 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in
2026-03-10T10:26:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:43 vm07 bash[23367]: cluster 2026-03-10T10:26:42.521648+0000 mgr.y (mgr.24422) 501 : cluster [DBG] pgmap v854: 268 pgs: 40 unknown, 228 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:43 vm07 bash[23367]: audit 2026-03-10T10:26:43.069771+0000 mon.a (mon.0) 3070 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:26:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:43 vm07 bash[23367]: audit 2026-03-10T10:26:43.484338+0000 mon.a (mon.0) 3071 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-112"}]': finished
2026-03-10T10:26:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:43 vm07 bash[23367]: cluster 2026-03-10T10:26:43.487980+0000 mon.a (mon.0) 3072 : cluster [DBG] osdmap e556: 8 total, 8 up, 8 in
2026-03-10T10:26:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:44 vm04 bash[28289]: audit 2026-03-10T10:26:43.536394+0000 mon.a (mon.0) 3073 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:26:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:44 vm04 bash[20742]: audit 2026-03-10T10:26:43.536394+0000 mon.a (mon.0) 3073 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:26:45.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:44 vm07 bash[23367]: audit 2026-03-10T10:26:43.536394+0000 mon.a (mon.0) 3073 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:26:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:45 vm04 bash[28289]: cluster 2026-03-10T10:26:44.522234+0000 mgr.y (mgr.24422) 502 : cluster [DBG] pgmap v856: 268 pgs: 268 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 506 B/s wr, 1 op/s
2026-03-10T10:26:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:45 vm04 bash[28289]: audit 2026-03-10T10:26:44.602656+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:26:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:45 vm04 bash[28289]: cluster 2026-03-10T10:26:44.607184+0000 mon.a (mon.0) 3075 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in
2026-03-10T10:26:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:45 vm04 bash[28289]: audit 2026-03-10T10:26:44.607863+0000 mon.a (mon.0) 3076 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112"}]: dispatch
2026-03-10T10:26:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:45 vm04 bash[20742]: cluster 2026-03-10T10:26:44.522234+0000 mgr.y (mgr.24422) 502 : cluster [DBG] pgmap v856: 268 pgs: 268 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 506 B/s wr, 1 op/s
2026-03-10T10:26:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:45 vm04 bash[20742]: audit 2026-03-10T10:26:44.602656+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:26:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:45 vm04 bash[20742]: cluster 2026-03-10T10:26:44.607184+0000 mon.a (mon.0) 3075 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in
2026-03-10T10:26:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:45 vm04 bash[20742]: audit 2026-03-10T10:26:44.607863+0000 mon.a (mon.0) 3076 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112"}]: dispatch
2026-03-10T10:26:46.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:45 vm07 bash[23367]: cluster 2026-03-10T10:26:44.522234+0000 mgr.y (mgr.24422) 502 : cluster [DBG] pgmap v856: 268 pgs: 268 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 506 B/s wr, 1 op/s
2026-03-10T10:26:46.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:45 vm07 bash[23367]: audit 2026-03-10T10:26:44.602656+0000 mon.a (mon.0) 3074 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:26:46.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:45 vm07 bash[23367]: cluster 2026-03-10T10:26:44.607184+0000 mon.a (mon.0) 3075 : cluster [DBG] osdmap e557: 8 total, 8 up, 8 in
2026-03-10T10:26:46.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:45 vm07 bash[23367]: audit 2026-03-10T10:26:44.607863+0000 mon.a (mon.0) 3076 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112"}]: dispatch
2026-03-10T10:26:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:46 vm04 bash[28289]: audit 2026-03-10T10:26:45.606039+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112"}]': finished
2026-03-10T10:26:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:46 vm04 bash[28289]: cluster 2026-03-10T10:26:45.609553+0000 mon.a (mon.0) 3078 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in
2026-03-10T10:26:46.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:46 vm04 bash[28289]: cluster 2026-03-10T10:26:46.534413+0000 mon.a (mon.0) 3079 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:46 vm04 bash[20742]: audit 2026-03-10T10:26:45.606039+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112"}]': finished
2026-03-10T10:26:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:46 vm04 bash[20742]: cluster 2026-03-10T10:26:45.609553+0000 mon.a (mon.0) 3078 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in
2026-03-10T10:26:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:46 vm04 bash[20742]: cluster 2026-03-10T10:26:46.534413+0000 mon.a (mon.0) 3079 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:47.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:46 vm07 bash[23367]: audit 2026-03-10T10:26:45.606039+0000 mon.a (mon.0) 3077 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-112"}]': finished
2026-03-10T10:26:47.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:46 vm07 bash[23367]: cluster 2026-03-10T10:26:45.609553+0000 mon.a (mon.0) 3078 : cluster [DBG] osdmap e558: 8 total, 8 up, 8 in
2026-03-10T10:26:47.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:46 vm07 bash[23367]: cluster 2026-03-10T10:26:46.534413+0000 mon.a (mon.0) 3079 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:47 vm04 bash[28289]: cluster 2026-03-10T10:26:46.522703+0000 mgr.y (mgr.24422) 503 : cluster [DBG] pgmap v859: 268 pgs: 268 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 507 B/s wr, 1 op/s
2026-03-10T10:26:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:47 vm04 bash[28289]: cluster 2026-03-10T10:26:46.676288+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in
2026-03-10T10:26:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:47 vm04 bash[20742]: cluster 2026-03-10T10:26:46.522703+0000 mgr.y (mgr.24422) 503 : cluster [DBG] pgmap v859: 268 pgs: 268 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 507 B/s wr, 1 op/s
2026-03-10T10:26:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:47 vm04 bash[20742]: cluster 2026-03-10T10:26:46.676288+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in
2026-03-10T10:26:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:47 vm07 bash[23367]: cluster 2026-03-10T10:26:46.522703+0000 mgr.y (mgr.24422) 503 : cluster [DBG] pgmap v859: 268 pgs: 268 active+clean; 455 KiB data, 985 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 507 B/s wr, 1 op/s
2026-03-10T10:26:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:47 vm07 bash[23367]: cluster 2026-03-10T10:26:46.676288+0000 mon.a (mon.0) 3080 : cluster [DBG] osdmap e559: 8 total, 8 up, 8 in
2026-03-10T10:26:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:48 vm04 bash[28289]: cluster 2026-03-10T10:26:47.682505+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in
2026-03-10T10:26:48.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:48 vm04 bash[28289]: audit 2026-03-10T10:26:47.684568+0000 mon.a (mon.0) 3082 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-114","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:48 vm04 bash[20742]: cluster 2026-03-10T10:26:47.682505+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in
2026-03-10T10:26:48.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:48 vm04 bash[20742]: audit 2026-03-10T10:26:47.684568+0000 mon.a (mon.0) 3082 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-114","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:49.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:26:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:26:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:48 vm07 bash[23367]: cluster 2026-03-10T10:26:47.682505+0000 mon.a (mon.0) 3081 : cluster [DBG] osdmap e560: 8 total, 8 up, 8 in
2026-03-10T10:26:49.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:48 vm07 bash[23367]: audit 2026-03-10T10:26:47.684568+0000 mon.a (mon.0) 3082 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-114","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:49 vm04 bash[28289]: cluster 2026-03-10T10:26:48.523311+0000 mgr.y (mgr.24422) 504 : cluster [DBG] pgmap v862: 268 pgs: 17 creating+peering, 15 unknown, 236 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:49 vm04 bash[28289]: audit 2026-03-10T10:26:48.690957+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-114","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:49 vm04 bash[28289]: cluster 2026-03-10T10:26:48.706208+0000 mon.a (mon.0) 3084 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in
2026-03-10T10:26:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:49 vm04 bash[28289]: audit 2026-03-10T10:26:48.730454+0000 mon.a (mon.0) 3085 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:26:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:49 vm04 bash[28289]: audit 2026-03-10T10:26:48.747986+0000 mgr.y (mgr.24422) 505 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:26:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:49 vm04 bash[20742]: cluster 2026-03-10T10:26:48.523311+0000 mgr.y (mgr.24422) 504 : cluster [DBG] pgmap v862: 268 pgs: 17 creating+peering, 15 unknown, 236 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:49 vm04 bash[20742]: audit 2026-03-10T10:26:48.690957+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-114","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:49 vm04 bash[20742]: cluster 2026-03-10T10:26:48.706208+0000 mon.a (mon.0) 3084 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in
2026-03-10T10:26:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:49 vm04 bash[20742]: audit 2026-03-10T10:26:48.730454+0000 mon.a (mon.0) 3085 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:26:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:49 vm04 bash[20742]: audit 2026-03-10T10:26:48.747986+0000 mgr.y (mgr.24422) 505 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:26:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:49 vm07 bash[23367]: cluster 2026-03-10T10:26:48.523311+0000 mgr.y (mgr.24422) 504 : cluster [DBG] pgmap v862: 268 pgs: 17 creating+peering, 15 unknown, 236 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:26:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:49 vm07 bash[23367]: audit 2026-03-10T10:26:48.690957+0000 mon.a (mon.0) 3083 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-114","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:49 vm07 bash[23367]: cluster 2026-03-10T10:26:48.706208+0000 mon.a (mon.0) 3084 : cluster [DBG] osdmap e561: 8 total, 8 up, 8 in
2026-03-10T10:26:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:49 vm07 bash[23367]: audit 2026-03-10T10:26:48.730454+0000 mon.a (mon.0) 3085 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:26:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:49 vm07 bash[23367]: audit 2026-03-10T10:26:48.747986+0000 mgr.y (mgr.24422) 505 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:26:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:50 vm07 bash[23367]: audit 2026-03-10T10:26:49.702887+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:26:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:50 vm07 bash[23367]: cluster 2026-03-10T10:26:49.705780+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in
2026-03-10T10:26:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:50 vm07 bash[23367]: audit 2026-03-10T10:26:49.706290+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-114"}]: dispatch
2026-03-10T10:26:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:50 vm04 bash[28289]: audit 2026-03-10T10:26:49.702887+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:26:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:50 vm04 bash[28289]: cluster 2026-03-10T10:26:49.705780+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in
2026-03-10T10:26:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:50 vm04 bash[28289]: audit 2026-03-10T10:26:49.706290+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-114"}]: dispatch
2026-03-10T10:26:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:50 vm04 bash[20742]: audit 2026-03-10T10:26:49.702887+0000 mon.a (mon.0) 3086 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:26:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:50 vm04 bash[20742]: cluster 2026-03-10T10:26:49.705780+0000 mon.a (mon.0) 3087 : cluster [DBG] osdmap e562: 8 total, 8 up, 8 in
2026-03-10T10:26:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:50 vm04 bash[20742]: audit 2026-03-10T10:26:49.706290+0000 mon.a (mon.0) 3088 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-114"}]: dispatch
2026-03-10T10:26:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:51 vm04 bash[28289]: cluster 2026-03-10T10:26:50.523774+0000 mgr.y (mgr.24422) 506 : cluster [DBG] pgmap v865: 268 pgs: 4 creating+activating, 23 creating+peering, 241 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:51 vm04 bash[28289]: audit 2026-03-10T10:26:50.714733+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-114"}]': finished
2026-03-10T10:26:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:51 vm04 bash[28289]: cluster 2026-03-10T10:26:50.722744+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in
2026-03-10T10:26:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:51 vm04 bash[28289]: audit 2026-03-10T10:26:50.724417+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-114", "mode": "writeback"}]: dispatch
2026-03-10T10:26:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:51 vm04 bash[28289]: cluster 2026-03-10T10:26:51.714817+0000 mon.a (mon.0) 3092 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:26:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:51 vm04 bash[20742]: cluster 2026-03-10T10:26:50.523774+0000 mgr.y (mgr.24422) 506 : cluster [DBG] pgmap v865: 268 pgs: 4 creating+activating, 23 creating+peering, 241 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:51 vm04 bash[20742]: audit 2026-03-10T10:26:50.714733+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-114"}]': finished
2026-03-10T10:26:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:51 vm04 bash[20742]: cluster 2026-03-10T10:26:50.722744+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in
2026-03-10T10:26:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:51 vm04 bash[20742]: audit 2026-03-10T10:26:50.724417+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-114", "mode": "writeback"}]: dispatch
2026-03-10T10:26:52.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:51 vm04 bash[20742]: cluster 2026-03-10T10:26:51.714817+0000 mon.a (mon.0) 3092 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:26:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:51 vm07 bash[23367]: cluster 2026-03-10T10:26:50.523774+0000 mgr.y (mgr.24422) 506 : cluster [DBG] pgmap v865: 268 pgs: 4 creating+activating, 23 creating+peering, 241 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:51 vm07 bash[23367]: audit 2026-03-10T10:26:50.714733+0000 mon.a (mon.0) 3089 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-114"}]': finished
2026-03-10T10:26:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:51 vm07 bash[23367]: cluster 2026-03-10T10:26:50.722744+0000 mon.a (mon.0) 3090 : cluster [DBG] osdmap e563: 8 total, 8 up, 8 in
2026-03-10T10:26:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:51 vm07 bash[23367]: audit 2026-03-10T10:26:50.724417+0000 mon.a (mon.0) 3091 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-114", "mode": "writeback"}]: dispatch
2026-03-10T10:26:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:51 vm07 bash[23367]: cluster 2026-03-10T10:26:51.714817+0000 mon.a (mon.0) 3092 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:26:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:52 vm04 bash[28289]: audit 2026-03-10T10:26:51.775025+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-114", "mode": "writeback"}]': finished
2026-03-10T10:26:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:52 vm04 bash[28289]: cluster 2026-03-10T10:26:51.782791+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in
2026-03-10T10:26:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:52 vm04 bash[28289]: audit 2026-03-10T10:26:51.825886+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:26:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:52 vm04 bash[20742]: audit 2026-03-10T10:26:51.775025+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-114", "mode": "writeback"}]': finished
2026-03-10T10:26:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:52 vm04 bash[20742]: cluster 2026-03-10T10:26:51.782791+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in
2026-03-10T10:26:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:52 vm04 bash[20742]: audit 2026-03-10T10:26:51.825886+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:26:53.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:26:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:26:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:26:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:52 vm07 bash[23367]: audit 2026-03-10T10:26:51.775025+0000 mon.a (mon.0) 3093 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-114", "mode": "writeback"}]': finished
2026-03-10T10:26:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:52 vm07 bash[23367]: cluster 2026-03-10T10:26:51.782791+0000 mon.a (mon.0) 3094 : cluster [DBG] osdmap e564: 8 total, 8 up, 8 in
2026-03-10T10:26:53.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:52 vm07 bash[23367]: audit 2026-03-10T10:26:51.825886+0000 mon.a (mon.0) 3095 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:26:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:53 vm04 bash[20742]: cluster 2026-03-10T10:26:52.524235+0000 mgr.y (mgr.24422) 507 : cluster [DBG] pgmap v868: 268 pgs: 4 creating+activating, 23 creating+peering, 241 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:26:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:53 vm04 bash[20742]: audit 2026-03-10T10:26:52.796466+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:26:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:53 vm04 bash[20742]: cluster 2026-03-10T10:26:52.802955+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in
2026-03-10T10:26:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:53 vm04 bash[20742]: audit 2026-03-10T10:26:52.803351+0000 mon.a (mon.0) 3098 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114"}]: dispatch
2026-03-10T10:26:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:53 vm04 bash[28289]: cluster 2026-03-10T10:26:52.524235+0000 mgr.y (mgr.24422) 507 : cluster [DBG] pgmap v868: 268 pgs: 4 creating+activating, 23 creating+peering, 241 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:26:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:53 vm04 bash[28289]: audit 2026-03-10T10:26:52.796466+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:26:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:53 vm04 bash[28289]: cluster 2026-03-10T10:26:52.802955+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in
2026-03-10T10:26:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:53 vm04 bash[28289]: audit 2026-03-10T10:26:52.803351+0000 mon.a (mon.0) 3098 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114"}]: dispatch
2026-03-10T10:26:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:53 vm07 bash[23367]: cluster 2026-03-10T10:26:52.524235+0000 mgr.y (mgr.24422) 507 : cluster [DBG] pgmap v868: 268 pgs: 4 creating+activating, 23 creating+peering, 241 active+clean; 455 KiB data, 986 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:26:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:53 vm07 bash[23367]: audit 2026-03-10T10:26:52.796466+0000 mon.a (mon.0) 3096 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:26:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:53 vm07 bash[23367]: cluster 2026-03-10T10:26:52.802955+0000 mon.a (mon.0) 3097 : cluster [DBG] osdmap e565: 8 total, 8 up, 8 in
2026-03-10T10:26:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:53 vm07 bash[23367]: audit 2026-03-10T10:26:52.803351+0000 mon.a (mon.0) 3098 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114"}]: dispatch
2026-03-10T10:26:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:54 vm04 bash[20742]: cluster 2026-03-10T10:26:53.796520+0000 mon.a (mon.0) 3099 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:26:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:54 vm04 bash[20742]: audit 2026-03-10T10:26:53.799875+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114"}]': finished
2026-03-10T10:26:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:54 vm04 bash[20742]: cluster 2026-03-10T10:26:53.804617+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in
2026-03-10T10:26:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:54 vm04 bash[28289]: cluster 2026-03-10T10:26:53.796520+0000 mon.a (mon.0) 3099 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:26:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:54 vm04 bash[28289]: audit 2026-03-10T10:26:53.799875+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114"}]': finished
2026-03-10T10:26:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:54 vm04 bash[28289]: cluster 2026-03-10T10:26:53.804617+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in
2026-03-10T10:26:55.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:54 vm07 bash[23367]: cluster 2026-03-10T10:26:53.796520+0000 mon.a (mon.0) 3099 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:26:55.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:54 vm07 bash[23367]: audit 2026-03-10T10:26:53.799875+0000 mon.a (mon.0) 3100 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-114"}]': finished
2026-03-10T10:26:55.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:54 vm07 bash[23367]: cluster 2026-03-10T10:26:53.804617+0000 mon.a (mon.0) 3101 : cluster [DBG] osdmap e566: 8 total, 8 up, 8 in
2026-03-10T10:26:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:55 vm04 bash[20742]: cluster 2026-03-10T10:26:54.524588+0000 mgr.y (mgr.24422) 508 : cluster [DBG] pgmap v871: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T10:26:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:55 vm04 bash[20742]: cluster 2026-03-10T10:26:54.810148+0000 mon.a (mon.0) 3102 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:55 vm04 bash[20742]: cluster 2026-03-10T10:26:54.816320+0000 mon.a (mon.0) 3103 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in
2026-03-10T10:26:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:55 vm04 bash[28289]: cluster 2026-03-10T10:26:54.524588+0000 mgr.y (mgr.24422) 508 : cluster [DBG] pgmap v871: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T10:26:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:55 vm04 bash[28289]: cluster 2026-03-10T10:26:54.810148+0000 mon.a (mon.0) 3102 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:55 vm04 bash[28289]: cluster 2026-03-10T10:26:54.816320+0000 mon.a (mon.0) 3103 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in
2026-03-10T10:26:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:55 vm07 bash[23367]: cluster 2026-03-10T10:26:54.524588+0000 mgr.y (mgr.24422) 508 : cluster [DBG] pgmap v871: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T10:26:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:55 vm07 bash[23367]: cluster 2026-03-10T10:26:54.810148+0000 mon.a (mon.0) 3102 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:26:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:55 vm07 bash[23367]: cluster 2026-03-10T10:26:54.816320+0000 mon.a (mon.0) 3103 : cluster [DBG] osdmap e567: 8 total, 8 up, 8 in
2026-03-10T10:26:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:56 vm04 bash[20742]: cluster 2026-03-10T10:26:55.847339+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in
2026-03-10T10:26:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:56 vm04 bash[20742]: audit 2026-03-10T10:26:55.849930+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-116","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:56 vm04 bash[20742]: audit 2026-03-10T10:26:56.847357+0000 mon.a (mon.0) 3106 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-116","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:56 vm04 bash[20742]: cluster 2026-03-10T10:26:56.852128+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in
2026-03-10T10:26:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:56 vm04 bash[28289]: cluster 2026-03-10T10:26:55.847339+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in
2026-03-10T10:26:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:56 vm04 bash[28289]: audit 2026-03-10T10:26:55.849930+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-116","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:56 vm04 bash[28289]: audit 2026-03-10T10:26:56.847357+0000 mon.a (mon.0) 3106 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-116","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:56 vm04 bash[28289]: cluster 2026-03-10T10:26:56.852128+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in
2026-03-10T10:26:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:56 vm07 bash[23367]: cluster 2026-03-10T10:26:55.847339+0000 mon.a (mon.0) 3104 : cluster [DBG] osdmap e568: 8 total, 8 up, 8 in
2026-03-10T10:26:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:56 vm07 bash[23367]: audit 2026-03-10T10:26:55.849930+0000 mon.a (mon.0) 3105 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-116","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:26:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:56 vm07 bash[23367]: audit 2026-03-10T10:26:56.847357+0000 mon.a (mon.0) 3106 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-116","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:26:57.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:56 vm07 bash[23367]: cluster 2026-03-10T10:26:56.852128+0000 mon.a (mon.0) 3107 : cluster [DBG] osdmap e569: 8 total, 8 up, 8 in
2026-03-10T10:26:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:57 vm04 bash[20742]: cluster 2026-03-10T10:26:56.524944+0000 mgr.y (mgr.24422) 509 : cluster [DBG] pgmap v874: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:57 vm04 bash[20742]: cluster 2026-03-10T10:26:57.888837+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in
2026-03-10T10:26:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:57 vm04 bash[28289]: cluster 2026-03-10T10:26:56.524944+0000 mgr.y (mgr.24422) 509 : cluster [DBG] pgmap v874: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:57 vm04 bash[28289]: cluster 2026-03-10T10:26:57.888837+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in
2026-03-10T10:26:58.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:57 vm07 bash[23367]: cluster 2026-03-10T10:26:56.524944+0000 mgr.y (mgr.24422) 509 : cluster [DBG] pgmap v874: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:26:58.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:57 vm07 bash[23367]: cluster 2026-03-10T10:26:57.888837+0000 mon.a (mon.0) 3108 : cluster [DBG] osdmap e570: 8 total, 8 up, 8 in
2026-03-10T10:26:59.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:26:58 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:26:59.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:58 vm07 bash[23367]: audit 2026-03-10T10:26:57.925000+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:26:59.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:58 vm07 bash[23367]: audit 2026-03-10T10:26:58.075708+0000 mon.a (mon.0) 3110 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:26:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:58 vm04 bash[28289]: audit 2026-03-10T10:26:57.925000+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:26:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:58 vm04 bash[28289]: audit 2026-03-10T10:26:58.075708+0000 mon.a (mon.0) 3110 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:26:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:58 vm04 bash[20742]: audit 2026-03-10T10:26:57.925000+0000 mon.a (mon.0) 3109 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:26:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:58 vm04 bash[20742]: audit 2026-03-10T10:26:58.075708+0000 mon.a (mon.0) 3110 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:59 vm07 bash[23367]: cluster 2026-03-10T10:26:58.525492+0000 mgr.y (mgr.24422) 510 : cluster [DBG] pgmap v877: 268 pgs: 16 unknown, 252 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s
2026-03-10T10:27:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:59 vm07 bash[23367]: audit 2026-03-10T10:26:58.755973+0000 mgr.y (mgr.24422) 511 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:59 vm07 bash[23367]: audit 2026-03-10T10:26:58.914809+0000 mon.a (mon.0) 3111 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:59 vm07 bash[23367]: cluster 2026-03-10T10:26:58.921380+0000 mon.a (mon.0) 3112 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in
2026-03-10T10:27:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:26:59 vm07 bash[23367]: audit 2026-03-10T10:26:58.922339+0000 mon.a (mon.0) 3113 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-116"}]: dispatch
2026-03-10T10:27:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:59 vm04 bash[28289]: cluster 2026-03-10T10:26:58.525492+0000 mgr.y (mgr.24422) 510 : cluster [DBG] pgmap v877: 268 pgs: 16 unknown, 252 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s
2026-03-10T10:27:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:59 vm04 bash[28289]: audit 2026-03-10T10:26:58.755973+0000 mgr.y (mgr.24422) 511 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:59 vm04 bash[28289]: audit 2026-03-10T10:26:58.914809+0000 mon.a (mon.0) 3111 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:59 vm04 bash[28289]: cluster 2026-03-10T10:26:58.921380+0000 mon.a (mon.0) 3112 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in
2026-03-10T10:27:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:26:59 vm04 bash[28289]: audit 2026-03-10T10:26:58.922339+0000 mon.a (mon.0) 3113 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-116"}]: dispatch
2026-03-10T10:27:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:59 vm04 bash[20742]: cluster 2026-03-10T10:26:58.525492+0000 mgr.y (mgr.24422) 510 : cluster [DBG] pgmap v877: 268 pgs: 16 unknown, 252 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1023 B/s wr, 0 op/s
2026-03-10T10:27:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:59 vm04 bash[20742]: audit 2026-03-10T10:26:58.755973+0000 mgr.y (mgr.24422) 511 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:59 vm04 bash[20742]: audit 2026-03-10T10:26:58.914809+0000 mon.a (mon.0) 3111 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:59 vm04 bash[20742]: cluster 2026-03-10T10:26:58.921380+0000 mon.a (mon.0) 3112 : cluster [DBG] osdmap e571: 8 total, 8 up, 8 in
2026-03-10T10:27:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:26:59 vm04 bash[20742]: audit 2026-03-10T10:26:58.922339+0000 mon.a (mon.0) 3113 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-116"}]: dispatch
2026-03-10T10:27:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:01 vm04 bash[28289]: audit 2026-03-10T10:26:59.943662+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-116"}]': finished
2026-03-10T10:27:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:01 vm04 bash[28289]: cluster 2026-03-10T10:26:59.947166+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in
2026-03-10T10:27:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:01 vm04 bash[28289]: audit 2026-03-10T10:26:59.949577+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-116", "mode": "writeback"}]: dispatch
2026-03-10T10:27:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:01 vm04 bash[28289]: cluster 2026-03-10T10:27:00.943787+0000 mon.a (mon.0) 3117 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:27:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:01 vm04 bash[20742]: audit 2026-03-10T10:26:59.943662+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-116"}]': finished
2026-03-10T10:27:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:01 vm04 bash[20742]: cluster 2026-03-10T10:26:59.947166+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in
2026-03-10T10:27:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:01 vm04 bash[20742]: audit 2026-03-10T10:26:59.949577+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-116", "mode": "writeback"}]: dispatch
2026-03-10T10:27:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:01 vm04 bash[20742]: cluster 2026-03-10T10:27:00.943787+0000 mon.a (mon.0) 3117 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:27:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:01 vm07 bash[23367]: audit 2026-03-10T10:26:59.943662+0000 mon.a (mon.0) 3114 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-116"}]': finished
2026-03-10T10:27:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:01 vm07 bash[23367]: cluster 2026-03-10T10:26:59.947166+0000 mon.a (mon.0) 3115 : cluster [DBG] osdmap e572: 8 total, 8 up, 8 in
2026-03-10T10:27:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:01 vm07 bash[23367]: audit 2026-03-10T10:26:59.949577+0000 mon.a (mon.0) 3116 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-116", "mode": "writeback"}]: dispatch
2026-03-10T10:27:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:01 vm07 bash[23367]: cluster 2026-03-10T10:27:00.943787+0000 mon.a (mon.0) 3117 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:02 vm04 bash[28289]: cluster 2026-03-10T10:27:00.525886+0000 mgr.y (mgr.24422) 512 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:02 vm04 bash[28289]: audit 2026-03-10T10:27:01.000592+0000 mon.a (mon.0) 3118 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-116", "mode": "writeback"}]': finished
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:02 vm04 bash[28289]: cluster 2026-03-10T10:27:01.005373+0000 mon.a (mon.0) 3119 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:02 vm04 bash[28289]: audit 2026-03-10T10:27:01.042309+0000 mon.a (mon.0) 3120 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:02 vm04 bash[28289]: audit 2026-03-10T10:27:01.042493+0000 mgr.y (mgr.24422) 513 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:02 vm04 bash[28289]: cluster 2026-03-10T10:27:01.536430+0000 mon.a (mon.0) 3121 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:02 vm04 bash[28289]: cluster 2026-03-10T10:27:01.625865+0000 osd.2 (osd.2) 21 : cluster [DBG] 318.7 deep-scrub starts
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:02 vm04 bash[28289]: cluster 2026-03-10T10:27:01.627096+0000 osd.2 (osd.2) 22 : cluster [DBG] 318.7 deep-scrub ok
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:02 vm04 bash[20742]: cluster 2026-03-10T10:27:00.525886+0000 mgr.y (mgr.24422) 512 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:02 vm04 bash[20742]: audit 2026-03-10T10:27:01.000592+0000 mon.a (mon.0) 3118 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-116", "mode": "writeback"}]': finished
2026-03-10T10:27:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:02 vm04 bash[20742]: cluster 2026-03-10T10:27:01.005373+0000 mon.a (mon.0) 3119 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in
2026-03-10T10:27:02.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:02 vm04 bash[20742]: audit 2026-03-10T10:27:01.042309+0000 mon.a (mon.0) 3120 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch
2026-03-10T10:27:02.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:02 vm04 bash[20742]: audit 2026-03-10T10:27:01.042493+0000 mgr.y (mgr.24422) 513 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch
2026-03-10T10:27:02.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:02 vm04 bash[20742]: cluster 2026-03-10T10:27:01.536430+0000 mon.a (mon.0) 3121 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:02.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:02 vm04 bash[20742]: cluster 2026-03-10T10:27:01.625865+0000 osd.2 (osd.2) 21 : cluster [DBG] 318.7 deep-scrub starts
2026-03-10T10:27:02.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:02 vm04 bash[20742]: cluster 2026-03-10T10:27:01.627096+0000 osd.2 (osd.2) 22 : cluster [DBG] 318.7 deep-scrub ok
2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: cluster 2026-03-10T10:27:00.525886+0000 mgr.y (mgr.24422) 512 : cluster [DBG] pgmap v880: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: audit 2026-03-10T10:27:01.000592+0000 mon.a (mon.0) 3118 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-116", "mode": "writeback"}]': finished
2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: cluster 2026-03-10T10:27:01.005373+0000 mon.a (mon.0) 3119 : cluster [DBG] osdmap e573: 8 total, 8 up, 8 in
2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: audit 2026-03-10T10:27:01.042309+0000 mon.a (mon.0) 3120 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: audit 2026-03-10T10:27:01.042493+0000 mgr.y (mgr.24422) 513 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: audit 2026-03-10T10:27:01.042493+0000 mgr.y (mgr.24422) 513 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "pg scrub", "pgid": "318.7"}]: dispatch 2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: cluster 2026-03-10T10:27:01.536430+0000 mon.a (mon.0) 3121 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: cluster 2026-03-10T10:27:01.536430+0000 mon.a (mon.0) 3121 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: cluster 2026-03-10T10:27:01.625865+0000 osd.2 (osd.2) 21 : cluster [DBG] 318.7 deep-scrub starts 2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: cluster 2026-03-10T10:27:01.625865+0000 osd.2 (osd.2) 21 : cluster [DBG] 318.7 deep-scrub starts 2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: cluster 2026-03-10T10:27:01.627096+0000 osd.2 (osd.2) 22 : cluster [DBG] 318.7 deep-scrub ok 2026-03-10T10:27:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:02 vm07 bash[23367]: cluster 2026-03-10T10:27:01.627096+0000 osd.2 (osd.2) 22 : cluster [DBG] 318.7 deep-scrub ok 2026-03-10T10:27:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:27:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:27:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:27:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:04 vm04 bash[28289]: cluster 2026-03-10T10:27:02.526304+0000 mgr.y (mgr.24422) 514 : cluster [DBG] pgmap v882: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T10:27:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:04 vm04 bash[28289]: cluster 2026-03-10T10:27:02.526304+0000 mgr.y (mgr.24422) 514 : cluster [DBG] pgmap v882: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T10:27:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:04 vm04 bash[20742]: cluster 2026-03-10T10:27:02.526304+0000 mgr.y (mgr.24422) 514 : cluster [DBG] pgmap v882: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T10:27:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:04 vm04 bash[20742]: cluster 2026-03-10T10:27:02.526304+0000 mgr.y (mgr.24422) 514 : cluster [DBG] pgmap v882: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T10:27:04.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:04 vm07 bash[23367]: cluster 2026-03-10T10:27:02.526304+0000 mgr.y (mgr.24422) 514 : cluster [DBG] pgmap v882: 268 pgs: 268 active+clean; 455 KiB data, 987 
MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T10:27:04.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:04 vm07 bash[23367]: cluster 2026-03-10T10:27:02.526304+0000 mgr.y (mgr.24422) 514 : cluster [DBG] pgmap v882: 268 pgs: 268 active+clean; 455 KiB data, 987 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1.5 KiB/s wr, 2 op/s 2026-03-10T10:27:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:06 vm04 bash[28289]: cluster 2026-03-10T10:27:04.527229+0000 mgr.y (mgr.24422) 515 : cluster [DBG] pgmap v883: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:27:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:06 vm04 bash[28289]: cluster 2026-03-10T10:27:04.527229+0000 mgr.y (mgr.24422) 515 : cluster [DBG] pgmap v883: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:27:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:06 vm04 bash[20742]: cluster 2026-03-10T10:27:04.527229+0000 mgr.y (mgr.24422) 515 : cluster [DBG] pgmap v883: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:27:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:06 vm04 bash[20742]: cluster 2026-03-10T10:27:04.527229+0000 mgr.y (mgr.24422) 515 : cluster [DBG] pgmap v883: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:27:06.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:06 vm07 bash[23367]: cluster 2026-03-10T10:27:04.527229+0000 mgr.y (mgr.24422) 515 : cluster [DBG] pgmap v883: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:27:06.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:06 vm07 bash[23367]: cluster 2026-03-10T10:27:04.527229+0000 mgr.y (mgr.24422) 515 : cluster [DBG] pgmap v883: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-10T10:27:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:08 vm04 bash[28289]: cluster 2026-03-10T10:27:06.527594+0000 mgr.y (mgr.24422) 516 : cluster [DBG] pgmap v884: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 941 B/s rd, 1 op/s 2026-03-10T10:27:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:08 vm04 bash[28289]: cluster 2026-03-10T10:27:06.527594+0000 mgr.y (mgr.24422) 516 : cluster [DBG] pgmap v884: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 941 B/s rd, 1 op/s 2026-03-10T10:27:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:08 vm04 bash[20742]: cluster 2026-03-10T10:27:06.527594+0000 mgr.y (mgr.24422) 516 : cluster [DBG] pgmap v884: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 941 B/s rd, 1 op/s 2026-03-10T10:27:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:08 vm04 bash[20742]: cluster 2026-03-10T10:27:06.527594+0000 mgr.y (mgr.24422) 516 : cluster [DBG] pgmap v884: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 941 B/s rd, 1 op/s 2026-03-10T10:27:08.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:08 vm07 bash[23367]: cluster 2026-03-10T10:27:06.527594+0000 mgr.y (mgr.24422) 516 : cluster [DBG] pgmap v884: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB 
2026-03-10T10:27:09.015 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:27:08 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:27:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:10 vm04 bash[28289]: cluster 2026-03-10T10:27:08.528307+0000 mgr.y (mgr.24422) 517 : cluster [DBG] pgmap v885: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 835 B/s rd, 1 op/s
2026-03-10T10:27:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:10 vm04 bash[28289]: audit 2026-03-10T10:27:08.763390+0000 mgr.y (mgr.24422) 518 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:10 vm04 bash[20742]: cluster 2026-03-10T10:27:08.528307+0000 mgr.y (mgr.24422) 517 : cluster [DBG] pgmap v885: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 835 B/s rd, 1 op/s
2026-03-10T10:27:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:10 vm04 bash[20742]: audit 2026-03-10T10:27:08.763390+0000 mgr.y (mgr.24422) 518 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:10 vm07 bash[23367]: cluster 2026-03-10T10:27:08.528307+0000 mgr.y (mgr.24422) 517 : cluster [DBG] pgmap v885: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 835 B/s rd, 1 op/s
2026-03-10T10:27:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:10 vm07 bash[23367]: audit 2026-03-10T10:27:08.763390+0000 mgr.y (mgr.24422) 518 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:12 vm04 bash[28289]: cluster 2026-03-10T10:27:10.528943+0000 mgr.y (mgr.24422) 519 : cluster [DBG] pgmap v886: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:12 vm04 bash[20742]: cluster 2026-03-10T10:27:10.528943+0000 mgr.y (mgr.24422) 519 : cluster [DBG] pgmap v886: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:12.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:12 vm07 bash[23367]: cluster 2026-03-10T10:27:10.528943+0000 mgr.y (mgr.24422) 519 : cluster [DBG] pgmap v886: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:27:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:27:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:27:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:14 vm04 bash[28289]: cluster 2026-03-10T10:27:12.529233+0000 mgr.y (mgr.24422) 520 : cluster [DBG] pgmap v887: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T10:27:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:14 vm04 bash[28289]: audit 2026-03-10T10:27:13.086024+0000 mon.a (mon.0) 3122 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:27:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:14 vm04 bash[28289]: audit 2026-03-10T10:27:13.086637+0000 mon.a (mon.0) 3123 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:14 vm04 bash[20742]: cluster 2026-03-10T10:27:12.529233+0000 mgr.y (mgr.24422) 520 : cluster [DBG] pgmap v887: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T10:27:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:14 vm04 bash[20742]: audit 2026-03-10T10:27:13.086024+0000 mon.a (mon.0) 3122 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:27:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:14 vm04 bash[20742]: audit 2026-03-10T10:27:13.086637+0000 mon.a (mon.0) 3123 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:14 vm07 bash[23367]: cluster 2026-03-10T10:27:12.529233+0000 mgr.y (mgr.24422) 520 : cluster [DBG] pgmap v887: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T10:27:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:14 vm07 bash[23367]: audit 2026-03-10T10:27:13.086024+0000 mon.a (mon.0) 3122 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:27:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:14 vm07 bash[23367]: audit 2026-03-10T10:27:13.086637+0000 mon.a (mon.0) 3123 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:16 vm04 bash[28289]: cluster 2026-03-10T10:27:14.530060+0000 mgr.y (mgr.24422) 521 : cluster [DBG] pgmap v888: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T10:27:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:16 vm04 bash[20742]: cluster 2026-03-10T10:27:14.530060+0000 mgr.y (mgr.24422) 521 : cluster [DBG] pgmap v888: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T10:27:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:16 vm07 bash[23367]: cluster 2026-03-10T10:27:14.530060+0000 mgr.y (mgr.24422) 521 : cluster [DBG] pgmap v888: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T10:27:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:17 vm04 bash[28289]: cluster 2026-03-10T10:27:16.111056+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in
2026-03-10T10:27:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:17 vm04 bash[28289]: audit 2026-03-10T10:27:16.147810+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:27:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:17 vm04 bash[20742]: cluster 2026-03-10T10:27:16.111056+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in
2026-03-10T10:27:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:17 vm04 bash[20742]: audit 2026-03-10T10:27:16.147810+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:17 vm04 bash[20742]: audit 2026-03-10T10:27:16.147810+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:17.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:17 vm07 bash[23367]: cluster 2026-03-10T10:27:16.111056+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-10T10:27:17.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:17 vm07 bash[23367]: cluster 2026-03-10T10:27:16.111056+0000 mon.a (mon.0) 3124 : cluster [DBG] osdmap e574: 8 total, 8 up, 8 in 2026-03-10T10:27:17.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:17 vm07 bash[23367]: audit 2026-03-10T10:27:16.147810+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:17.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:17 vm07 bash[23367]: audit 2026-03-10T10:27:16.147810+0000 mon.a (mon.0) 3125 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:18 vm04 bash[28289]: cluster 2026-03-10T10:27:16.530485+0000 mgr.y (mgr.24422) 522 : cluster [DBG] pgmap v890: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:18 vm04 bash[28289]: cluster 2026-03-10T10:27:16.530485+0000 mgr.y (mgr.24422) 522 : cluster [DBG] pgmap v890: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:18 vm04 bash[28289]: audit 2026-03-10T10:27:17.111249+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:18 vm04 bash[28289]: audit 2026-03-10T10:27:17.111249+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:18 vm04 bash[28289]: cluster 2026-03-10T10:27:17.115821+0000 mon.a (mon.0) 3127 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:18 vm04 bash[28289]: cluster 2026-03-10T10:27:17.115821+0000 mon.a (mon.0) 3127 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:18 vm04 bash[28289]: audit 2026-03-10T10:27:17.116299+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]: dispatch 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:18 vm04 bash[28289]: audit 2026-03-10T10:27:17.116299+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]: dispatch 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:18 vm04 bash[20742]: cluster 2026-03-10T10:27:16.530485+0000 mgr.y (mgr.24422) 522 : cluster [DBG] pgmap v890: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:18 vm04 bash[20742]: cluster 2026-03-10T10:27:16.530485+0000 mgr.y (mgr.24422) 522 : cluster [DBG] pgmap v890: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:18 vm04 bash[20742]: audit 2026-03-10T10:27:17.111249+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:18 vm04 bash[20742]: audit 2026-03-10T10:27:17.111249+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:18 vm04 bash[20742]: cluster 2026-03-10T10:27:17.115821+0000 mon.a (mon.0) 3127 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:18 vm04 bash[20742]: cluster 2026-03-10T10:27:17.115821+0000 mon.a (mon.0) 3127 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:18 vm04 bash[20742]: audit 2026-03-10T10:27:17.116299+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]: dispatch 2026-03-10T10:27:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:18 vm04 bash[20742]: audit 2026-03-10T10:27:17.116299+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]: dispatch 2026-03-10T10:27:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:18 vm07 bash[23367]: cluster 2026-03-10T10:27:16.530485+0000 mgr.y (mgr.24422) 522 : cluster [DBG] pgmap v890: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:27:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:18 vm07 bash[23367]: cluster 2026-03-10T10:27:16.530485+0000 mgr.y (mgr.24422) 522 : cluster [DBG] pgmap v890: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:27:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:18 vm07 bash[23367]: audit 2026-03-10T10:27:17.111249+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:18 vm07 bash[23367]: audit 2026-03-10T10:27:17.111249+0000 mon.a (mon.0) 3126 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:18 vm07 bash[23367]: cluster 2026-03-10T10:27:17.115821+0000 mon.a (mon.0) 3127 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T10:27:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:18 vm07 bash[23367]: cluster 2026-03-10T10:27:17.115821+0000 mon.a (mon.0) 3127 : cluster [DBG] osdmap e575: 8 total, 8 up, 8 in 2026-03-10T10:27:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:18 vm07 bash[23367]: audit 2026-03-10T10:27:17.116299+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]: dispatch 2026-03-10T10:27:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:18 vm07 bash[23367]: audit 2026-03-10T10:27:17.116299+0000 mon.a (mon.0) 3128 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]: dispatch 2026-03-10T10:27:19.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:27:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:19 vm04 bash[28289]: cluster 2026-03-10T10:27:18.111512+0000 mon.a (mon.0) 3129 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:19 vm04 bash[28289]: cluster 2026-03-10T10:27:18.111512+0000 mon.a (mon.0) 3129 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:19 vm04 bash[28289]: audit 2026-03-10T10:27:18.115567+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]': finished 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:19 vm04 bash[28289]: audit 2026-03-10T10:27:18.115567+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]': finished 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:19 vm04 bash[28289]: cluster 2026-03-10T10:27:18.122648+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:19 vm04 bash[28289]: cluster 2026-03-10T10:27:18.122648+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:19 vm04 bash[20742]: cluster 2026-03-10T10:27:18.111512+0000 mon.a (mon.0) 3129 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:19 vm04 bash[20742]: cluster 2026-03-10T10:27:18.111512+0000 mon.a (mon.0) 3129 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:19 vm04 bash[20742]: audit 2026-03-10T10:27:18.115567+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]': finished 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:19 vm04 bash[20742]: audit 2026-03-10T10:27:18.115567+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]': finished 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:19 vm04 bash[20742]: cluster 2026-03-10T10:27:18.122648+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T10:27:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:19 vm04 bash[20742]: cluster 2026-03-10T10:27:18.122648+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T10:27:19.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:19 vm07 bash[23367]: cluster 2026-03-10T10:27:18.111512+0000 mon.a (mon.0) 3129 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:19.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:19 vm07 bash[23367]: cluster 2026-03-10T10:27:18.111512+0000 mon.a (mon.0) 3129 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:19.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:19 vm07 bash[23367]: audit 2026-03-10T10:27:18.115567+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]': finished 2026-03-10T10:27:19.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:19 vm07 bash[23367]: audit 2026-03-10T10:27:18.115567+0000 mon.a (mon.0) 3130 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-116"}]': finished 2026-03-10T10:27:19.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:19 vm07 bash[23367]: cluster 2026-03-10T10:27:18.122648+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T10:27:19.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:19 vm07 bash[23367]: cluster 2026-03-10T10:27:18.122648+0000 mon.a (mon.0) 3131 : cluster [DBG] osdmap e576: 8 total, 8 up, 8 in 2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:20 vm04 bash[28289]: cluster 2026-03-10T10:27:18.531128+0000 mgr.y (mgr.24422) 523 : cluster [DBG] pgmap v893: 268 pgs: 1 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 264 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:20 vm04 bash[28289]: cluster 2026-03-10T10:27:18.531128+0000 mgr.y (mgr.24422) 523 : cluster [DBG] pgmap v893: 268 pgs: 1 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 264 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:20 vm04 bash[28289]: audit 2026-03-10T10:27:18.769994+0000 mgr.y (mgr.24422) 524 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:20 vm04 bash[28289]: audit 2026-03-10T10:27:18.769994+0000 mgr.y (mgr.24422) 524 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:20 vm04 bash[28289]: cluster 2026-03-10T10:27:19.125742+0000 mon.a (mon.0) 3132 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:20 vm04 bash[28289]: cluster 2026-03-10T10:27:19.125742+0000 mon.a (mon.0) 3132 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:20 vm04 bash[28289]: cluster 2026-03-10T10:27:19.151123+0000 mon.a (mon.0) 3133 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:20 vm04 bash[28289]: cluster 2026-03-10T10:27:19.151123+0000 mon.a (mon.0) 3133 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in 2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:20 vm04 bash[20742]: cluster 2026-03-10T10:27:18.531128+0000 mgr.y (mgr.24422) 523 : cluster [DBG] pgmap v893: 268 pgs: 1 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 264 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:27:20.453 
2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:20 vm04 bash[20742]: audit 2026-03-10T10:27:18.769994+0000 mgr.y (mgr.24422) 524 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:20 vm04 bash[20742]: cluster 2026-03-10T10:27:19.125742+0000 mon.a (mon.0) 3132 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:20 vm04 bash[20742]: cluster 2026-03-10T10:27:19.151123+0000 mon.a (mon.0) 3133 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in
2026-03-10T10:27:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:20 vm07 bash[23367]: cluster 2026-03-10T10:27:18.531128+0000 mgr.y (mgr.24422) 523 : cluster [DBG] pgmap v893: 268 pgs: 1 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 264 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T10:27:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:20 vm07 bash[23367]: audit 2026-03-10T10:27:18.769994+0000 mgr.y (mgr.24422) 524 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:20 vm07 bash[23367]: cluster 2026-03-10T10:27:19.125742+0000 mon.a (mon.0) 3132 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:20 vm07 bash[23367]: cluster 2026-03-10T10:27:19.151123+0000 mon.a (mon.0) 3133 : cluster [DBG] osdmap e577: 8 total, 8 up, 8 in
2026-03-10T10:27:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:21 vm04 bash[28289]: cluster 2026-03-10T10:27:20.150150+0000 mon.a (mon.0) 3134 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in
2026-03-10T10:27:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:21 vm04 bash[28289]: audit 2026-03-10T10:27:20.156398+0000 mon.a (mon.0) 3135 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-118","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:21 vm04 bash[20742]: cluster 2026-03-10T10:27:20.150150+0000 mon.a (mon.0) 3134 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in
2026-03-10T10:27:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:21 vm04 bash[20742]: audit 2026-03-10T10:27:20.156398+0000 mon.a (mon.0) 3135 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-118","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:21 vm07 bash[23367]: cluster 2026-03-10T10:27:20.150150+0000 mon.a (mon.0) 3134 : cluster [DBG] osdmap e578: 8 total, 8 up, 8 in
2026-03-10T10:27:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:21 vm07 bash[23367]: audit 2026-03-10T10:27:20.156398+0000 mon.a (mon.0) 3135 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-118","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:22 vm04 bash[28289]: cluster 2026-03-10T10:27:20.531595+0000 mgr.y (mgr.24422) 525 : cluster [DBG] pgmap v896: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 233 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s
2026-03-10T10:27:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:22 vm04 bash[28289]: audit 2026-03-10T10:27:21.150863+0000 mon.a (mon.0) 3136 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-118","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:22 vm04 bash[28289]: cluster 2026-03-10T10:27:21.154988+0000 mon.a (mon.0) 3137 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in
2026-03-10T10:27:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:22 vm04 bash[20742]: cluster 2026-03-10T10:27:20.531595+0000 mgr.y (mgr.24422) 525 : cluster [DBG] pgmap v896: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 233 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s
2026-03-10T10:27:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:22 vm04 bash[20742]: audit 2026-03-10T10:27:21.150863+0000 mon.a (mon.0) 3136 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-118","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:22 vm04 bash[20742]: cluster 2026-03-10T10:27:21.154988+0000 mon.a (mon.0) 3137 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in
2026-03-10T10:27:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:22 vm07 bash[23367]: cluster 2026-03-10T10:27:20.531595+0000 mgr.y (mgr.24422) 525 : cluster [DBG] pgmap v896: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 233 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 1 op/s
2026-03-10T10:27:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:22 vm07 bash[23367]: audit 2026-03-10T10:27:21.150863+0000 mon.a (mon.0) 3136 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-118","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:22 vm07 bash[23367]: cluster 2026-03-10T10:27:21.154988+0000 mon.a (mon.0) 3137 : cluster [DBG] osdmap e579: 8 total, 8 up, 8 in
2026-03-10T10:27:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:23 vm04 bash[28289]: cluster 2026-03-10T10:27:22.206155+0000 mon.a (mon.0) 3138 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in
2026-03-10T10:27:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:23 vm04 bash[28289]: audit 2026-03-10T10:27:22.214579+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:23 vm04 bash[20742]: cluster 2026-03-10T10:27:22.206155+0000 mon.a (mon.0) 3138 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in
2026-03-10T10:27:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:23 vm04 bash[20742]: audit 2026-03-10T10:27:22.214579+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:27:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:27:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:27:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:23 vm07 bash[23367]: cluster 2026-03-10T10:27:22.206155+0000 mon.a (mon.0) 3138 : cluster [DBG] osdmap e580: 8 total, 8 up, 8 in
2026-03-10T10:27:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:23 vm07 bash[23367]: audit 2026-03-10T10:27:22.214579+0000 mon.a (mon.0) 3139 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118", "force_nonempty": "--force-nonempty" }]: dispatch
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:27:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:24 vm07 bash[23367]: cluster 2026-03-10T10:27:22.532006+0000 mgr.y (mgr.24422) 526 : cluster [DBG] pgmap v899: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 233 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:27:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:24 vm07 bash[23367]: cluster 2026-03-10T10:27:22.532006+0000 mgr.y (mgr.24422) 526 : cluster [DBG] pgmap v899: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 233 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:27:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:24 vm07 bash[23367]: audit 2026-03-10T10:27:23.252476+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:24 vm07 bash[23367]: audit 2026-03-10T10:27:23.252476+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:24 vm07 bash[23367]: cluster 2026-03-10T10:27:23.256541+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T10:27:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:24 vm07 bash[23367]: cluster 2026-03-10T10:27:23.256541+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T10:27:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:24 vm07 bash[23367]: audit 2026-03-10T10:27:23.256894+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]: dispatch 2026-03-10T10:27:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:24 vm07 bash[23367]: audit 2026-03-10T10:27:23.256894+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]: dispatch 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:24 vm04 bash[28289]: cluster 2026-03-10T10:27:22.532006+0000 mgr.y (mgr.24422) 526 : cluster [DBG] pgmap v899: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 233 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:24 vm04 bash[28289]: cluster 2026-03-10T10:27:22.532006+0000 mgr.y (mgr.24422) 526 : cluster [DBG] pgmap v899: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 233 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:24 vm04 bash[28289]: audit 2026-03-10T10:27:23.252476+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:24 vm04 bash[28289]: audit 2026-03-10T10:27:23.252476+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:24 vm04 bash[28289]: cluster 2026-03-10T10:27:23.256541+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:24 vm04 bash[28289]: cluster 2026-03-10T10:27:23.256541+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:24 vm04 bash[28289]: audit 2026-03-10T10:27:23.256894+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]: dispatch 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:24 vm04 bash[28289]: audit 2026-03-10T10:27:23.256894+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]: dispatch 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:24 vm04 bash[20742]: cluster 2026-03-10T10:27:22.532006+0000 mgr.y (mgr.24422) 526 : cluster [DBG] pgmap v899: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 233 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:24 vm04 bash[20742]: cluster 2026-03-10T10:27:22.532006+0000 mgr.y (mgr.24422) 526 : cluster [DBG] pgmap v899: 268 pgs: 32 unknown, 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 233 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:24 vm04 bash[20742]: audit 2026-03-10T10:27:23.252476+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:24 vm04 bash[20742]: audit 2026-03-10T10:27:23.252476+0000 mon.a (mon.0) 3140 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:24 vm04 bash[20742]: cluster 2026-03-10T10:27:23.256541+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:24 vm04 bash[20742]: cluster 2026-03-10T10:27:23.256541+0000 mon.a (mon.0) 3141 : cluster [DBG] osdmap e581: 8 total, 8 up, 8 in 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:24 vm04 bash[20742]: audit 2026-03-10T10:27:23.256894+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]: dispatch 2026-03-10T10:27:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:24 vm04 bash[20742]: audit 2026-03-10T10:27:23.256894+0000 mon.a (mon.0) 3142 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]: dispatch 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:25 vm04 bash[28289]: audit 2026-03-10T10:27:24.263089+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]': finished 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:25 vm04 bash[28289]: audit 2026-03-10T10:27:24.263089+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]': finished 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:25 vm04 bash[28289]: cluster 2026-03-10T10:27:24.267135+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:25 vm04 bash[28289]: cluster 2026-03-10T10:27:24.267135+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:25 vm04 bash[28289]: audit 2026-03-10T10:27:24.267880+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]: dispatch 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:25 vm04 bash[28289]: audit 2026-03-10T10:27:24.267880+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]: dispatch 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:25 vm04 bash[20742]: audit 2026-03-10T10:27:24.263089+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]': finished 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:25 vm04 bash[20742]: audit 2026-03-10T10:27:24.263089+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]': finished 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:25 vm04 bash[20742]: cluster 2026-03-10T10:27:24.267135+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:25 vm04 bash[20742]: cluster 2026-03-10T10:27:24.267135+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:25 vm04 bash[20742]: audit 2026-03-10T10:27:24.267880+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]: dispatch 2026-03-10T10:27:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:25 vm04 bash[20742]: audit 2026-03-10T10:27:24.267880+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]: dispatch 2026-03-10T10:27:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:25 vm07 bash[23367]: audit 2026-03-10T10:27:24.263089+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]': finished 2026-03-10T10:27:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:25 vm07 bash[23367]: audit 2026-03-10T10:27:24.263089+0000 mon.a (mon.0) 3143 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-118"}]': finished 2026-03-10T10:27:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:25 vm07 bash[23367]: cluster 2026-03-10T10:27:24.267135+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-10T10:27:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:25 vm07 bash[23367]: cluster 2026-03-10T10:27:24.267135+0000 mon.a (mon.0) 3144 : cluster [DBG] osdmap e582: 8 total, 8 up, 8 in 2026-03-10T10:27:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:25 vm07 bash[23367]: audit 2026-03-10T10:27:24.267880+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]: dispatch 2026-03-10T10:27:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:25 vm07 bash[23367]: audit 2026-03-10T10:27:24.267880+0000 mon.a (mon.0) 3145 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]: dispatch 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:26 vm04 bash[28289]: cluster 2026-03-10T10:27:24.532397+0000 mgr.y (mgr.24422) 527 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:26 vm04 bash[28289]: cluster 2026-03-10T10:27:24.532397+0000 mgr.y (mgr.24422) 527 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:26 vm04 bash[28289]: cluster 2026-03-10T10:27:25.263319+0000 mon.a (mon.0) 3146 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:26 vm04 bash[28289]: cluster 2026-03-10T10:27:25.263319+0000 mon.a (mon.0) 3146 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:26 vm04 bash[28289]: audit 2026-03-10T10:27:25.281901+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]': finished 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:26 vm04 bash[28289]: audit 2026-03-10T10:27:25.281901+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]': finished 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:26 vm04 bash[28289]: cluster 2026-03-10T10:27:25.286555+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:26 vm04 bash[28289]: cluster 2026-03-10T10:27:25.286555+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:26 vm04 bash[28289]: cluster 2026-03-10T10:27:26.295383+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:26 vm04 bash[28289]: cluster 2026-03-10T10:27:26.295383+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:26 vm04 bash[20742]: cluster 2026-03-10T10:27:24.532397+0000 mgr.y (mgr.24422) 527 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:26 vm04 bash[20742]: cluster 2026-03-10T10:27:24.532397+0000 mgr.y (mgr.24422) 527 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:26 vm04 bash[20742]: cluster 2026-03-10T10:27:25.263319+0000 mon.a (mon.0) 3146 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:26 vm04 bash[20742]: cluster 2026-03-10T10:27:25.263319+0000 mon.a (mon.0) 3146 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:26 vm04 bash[20742]: audit 2026-03-10T10:27:25.281901+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]': finished 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:26 vm04 bash[20742]: audit 2026-03-10T10:27:25.281901+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]': finished 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:26 vm04 bash[20742]: cluster 2026-03-10T10:27:25.286555+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:26 vm04 bash[20742]: cluster 2026-03-10T10:27:25.286555+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:26 vm04 bash[20742]: cluster 2026-03-10T10:27:26.295383+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-10T10:27:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:26 vm04 bash[20742]: cluster 2026-03-10T10:27:26.295383+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-10T10:27:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:26 vm07 bash[23367]: cluster 2026-03-10T10:27:24.532397+0000 mgr.y (mgr.24422) 527 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:27:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:26 vm07 bash[23367]: cluster 2026-03-10T10:27:24.532397+0000 mgr.y (mgr.24422) 527 : cluster [DBG] pgmap v902: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:27:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:26 vm07 bash[23367]: cluster 2026-03-10T10:27:25.263319+0000 mon.a (mon.0) 3146 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:26 vm07 bash[23367]: cluster 2026-03-10T10:27:25.263319+0000 mon.a (mon.0) 3146 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:26 vm07 bash[23367]: audit 2026-03-10T10:27:25.281901+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]': finished 2026-03-10T10:27:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:26 vm07 bash[23367]: audit 2026-03-10T10:27:25.281901+0000 mon.a (mon.0) 3147 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-118", "mode": "writeback"}]': finished 2026-03-10T10:27:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:26 vm07 bash[23367]: cluster 2026-03-10T10:27:25.286555+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-10T10:27:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:26 vm07 bash[23367]: cluster 2026-03-10T10:27:25.286555+0000 mon.a (mon.0) 3148 : cluster [DBG] osdmap e583: 8 total, 8 up, 8 in 2026-03-10T10:27:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:26 vm07 bash[23367]: cluster 2026-03-10T10:27:26.295383+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-10T10:27:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:26 vm07 bash[23367]: cluster 2026-03-10T10:27:26.295383+0000 mon.a (mon.0) 3149 : cluster [DBG] osdmap e584: 8 total, 8 up, 8 in 2026-03-10T10:27:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:27 vm04 bash[28289]: audit 2026-03-10T10:27:26.338353+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:27 vm04 bash[28289]: audit 2026-03-10T10:27:26.338353+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:27 vm04 bash[28289]: cluster 2026-03-10T10:27:26.539295+0000 mon.a (mon.0) 3151 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:27 vm04 bash[28289]: cluster 2026-03-10T10:27:26.539295+0000 mon.a (mon.0) 3151 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:27 vm04 bash[20742]: audit 2026-03-10T10:27:26.338353+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:27 vm04 bash[20742]: audit 2026-03-10T10:27:26.338353+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:27 vm04 bash[20742]: cluster 2026-03-10T10:27:26.539295+0000 mon.a (mon.0) 3151 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:27 vm04 bash[20742]: cluster 2026-03-10T10:27:26.539295+0000 mon.a (mon.0) 3151 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:27 vm07 bash[23367]: audit 2026-03-10T10:27:26.338353+0000 mon.a (mon.0) 3150 : audit [INF] from='client.? 
2026-03-10T10:27:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:27 vm07 bash[23367]: cluster 2026-03-10T10:27:26.539295+0000 mon.a (mon.0) 3151 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:28 vm04 bash[28289]: cluster 2026-03-10T10:27:26.532778+0000 mgr.y (mgr.24422) 528 : cluster [DBG] pgmap v905: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:27:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:28 vm04 bash[28289]: audit 2026-03-10T10:27:27.312675+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:27:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:28 vm04 bash[28289]: cluster 2026-03-10T10:27:27.316416+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in
2026-03-10T10:27:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:28 vm04 bash[28289]: audit 2026-03-10T10:27:27.317017+0000 mon.a (mon.0) 3154 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118"}]: dispatch
2026-03-10T10:27:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:28 vm04 bash[28289]: audit 2026-03-10T10:27:28.092598+0000 mon.a (mon.0) 3155 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:28.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:28 vm04 bash[20742]: cluster 2026-03-10T10:27:26.532778+0000 mgr.y (mgr.24422) 528 : cluster [DBG] pgmap v905: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:27:28.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:28 vm04 bash[20742]: audit 2026-03-10T10:27:27.312675+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:27:28.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:28 vm04 bash[20742]: cluster 2026-03-10T10:27:27.316416+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in
2026-03-10T10:27:28.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:28 vm04 bash[20742]: audit 2026-03-10T10:27:27.317017+0000 mon.a (mon.0) 3154 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118"}]: dispatch
2026-03-10T10:27:28.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:28 vm04 bash[20742]: audit 2026-03-10T10:27:28.092598+0000 mon.a (mon.0) 3155 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:28.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:28 vm07 bash[23367]: cluster 2026-03-10T10:27:26.532778+0000 mgr.y (mgr.24422) 528 : cluster [DBG] pgmap v905: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
2026-03-10T10:27:28.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:28 vm07 bash[23367]: audit 2026-03-10T10:27:27.312675+0000 mon.a (mon.0) 3152 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:27:28.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:28 vm07 bash[23367]: cluster 2026-03-10T10:27:27.316416+0000 mon.a (mon.0) 3153 : cluster [DBG] osdmap e585: 8 total, 8 up, 8 in
2026-03-10T10:27:28.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:28 vm07 bash[23367]: audit 2026-03-10T10:27:27.317017+0000 mon.a (mon.0) 3154 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118"}]: dispatch
2026-03-10T10:27:28.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:28 vm07 bash[23367]: audit 2026-03-10T10:27:28.092598+0000 mon.a (mon.0) 3155 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:29.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:27:28 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:27:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:29 vm04 bash[28289]: cluster 2026-03-10T10:27:28.312759+0000 mon.a (mon.0) 3156 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:27:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:29 vm04 bash[28289]: audit 2026-03-10T10:27:28.316290+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118"}]': finished
2026-03-10T10:27:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:29 vm04 bash[28289]: cluster 2026-03-10T10:27:28.320207+0000 mon.a (mon.0) 3158 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in
2026-03-10T10:27:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:29 vm04 bash[20742]: cluster 2026-03-10T10:27:28.312759+0000 mon.a (mon.0) 3156 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:27:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:29 vm04 bash[20742]: audit 2026-03-10T10:27:28.316290+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118"}]': finished
2026-03-10T10:27:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:29 vm04 bash[20742]: cluster 2026-03-10T10:27:28.320207+0000 mon.a (mon.0) 3158 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in
2026-03-10T10:27:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:29 vm07 bash[23367]: cluster 2026-03-10T10:27:28.312759+0000 mon.a (mon.0) 3156 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:27:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:29 vm07 bash[23367]: audit 2026-03-10T10:27:28.316290+0000 mon.a (mon.0) 3157 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-118"}]': finished
2026-03-10T10:27:29.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:29 vm07 bash[23367]: cluster 2026-03-10T10:27:28.320207+0000 mon.a (mon.0) 3158 : cluster [DBG] osdmap e586: 8 total, 8 up, 8 in
2026-03-10T10:27:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:30 vm04 bash[28289]: cluster 2026-03-10T10:27:28.533535+0000 mgr.y (mgr.24422) 529 : cluster [DBG] pgmap v908: 268 pgs: 268 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:27:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:30 vm04 bash[28289]: audit 2026-03-10T10:27:28.780300+0000 mgr.y (mgr.24422) 530 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:30 vm04 bash[28289]: cluster 2026-03-10T10:27:29.344769+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in
2026-03-10T10:27:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:30 vm04 bash[20742]: cluster 2026-03-10T10:27:28.533535+0000 mgr.y (mgr.24422) 529 : cluster [DBG] pgmap v908: 268 pgs: 268 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:27:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:30 vm04 bash[20742]: audit 2026-03-10T10:27:28.780300+0000 mgr.y (mgr.24422) 530 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:30 vm04 bash[20742]: cluster 2026-03-10T10:27:29.344769+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in
2026-03-10T10:27:30.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:30 vm07 bash[23367]: cluster 2026-03-10T10:27:28.533535+0000 mgr.y (mgr.24422) 529 : cluster [DBG] pgmap v908: 268 pgs: 268 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:27:30.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:30 vm07 bash[23367]: audit 2026-03-10T10:27:28.780300+0000 mgr.y (mgr.24422) 530 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:30.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:30 vm07 bash[23367]: cluster 2026-03-10T10:27:29.344769+0000 mon.a (mon.0) 3159 : cluster [DBG] osdmap e587: 8 total, 8 up, 8 in
2026-03-10T10:27:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:31 vm04 bash[28289]: cluster 2026-03-10T10:27:30.370160+0000 mon.a (mon.0) 3160 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in
2026-03-10T10:27:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:31 vm04 bash[28289]: audit 2026-03-10T10:27:30.371701+0000 mon.a (mon.0) 3161 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-120","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:31 vm04 bash[20742]: cluster 2026-03-10T10:27:30.370160+0000 mon.a (mon.0) 3160 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in
2026-03-10T10:27:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:31 vm04 bash[20742]: audit 2026-03-10T10:27:30.371701+0000 mon.a (mon.0) 3161 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-120","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:31.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:31 vm07 bash[23367]: cluster 2026-03-10T10:27:30.370160+0000 mon.a (mon.0) 3160 : cluster [DBG] osdmap e588: 8 total, 8 up, 8 in
2026-03-10T10:27:31.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:31 vm07 bash[23367]: audit 2026-03-10T10:27:30.371701+0000 mon.a (mon.0) 3161 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-120","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:32 vm04 bash[28289]: cluster 2026-03-10T10:27:30.533910+0000 mgr.y (mgr.24422) 531 : cluster [DBG] pgmap v911: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:32 vm04 bash[28289]: audit 2026-03-10T10:27:31.345423+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-120","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:32 vm04 bash[28289]: cluster 2026-03-10T10:27:31.351513+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in
2026-03-10T10:27:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:32 vm04 bash[28289]: audit 2026-03-10T10:27:31.394358+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:32 vm04 bash[28289]: cluster 2026-03-10T10:27:31.539938+0000 mon.a (mon.0) 3165 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:32 vm04 bash[28289]: audit 2026-03-10T10:27:31.543696+0000 mon.a (mon.0) 3166 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:32 vm04 bash[28289]: cluster 2026-03-10T10:27:31.547245+0000 mon.a (mon.0) 3167 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in
2026-03-10T10:27:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:32 vm04 bash[28289]: audit 2026-03-10T10:27:31.547921+0000 mon.a (mon.0) 3168 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-120"}]: dispatch
2026-03-10T10:27:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:32 vm04 bash[20742]: cluster 2026-03-10T10:27:30.533910+0000 mgr.y (mgr.24422) 531 : cluster [DBG] pgmap v911: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:32 vm04 bash[20742]: audit 2026-03-10T10:27:31.345423+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-120","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:32 vm04 bash[20742]: cluster 2026-03-10T10:27:31.351513+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in
2026-03-10T10:27:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:32 vm04 bash[20742]: audit 2026-03-10T10:27:31.394358+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:32 vm04 bash[20742]: cluster 2026-03-10T10:27:31.539938+0000 mon.a (mon.0) 3165 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:32 vm04 bash[20742]: audit 2026-03-10T10:27:31.543696+0000 mon.a (mon.0) 3166 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:32 vm04 bash[20742]: cluster 2026-03-10T10:27:31.547245+0000 mon.a (mon.0) 3167 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in
2026-03-10T10:27:32.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:32 vm04 bash[20742]: audit 2026-03-10T10:27:31.547921+0000 mon.a (mon.0) 3168 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-120"}]: dispatch
2026-03-10T10:27:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:32 vm07 bash[23367]: cluster 2026-03-10T10:27:30.533910+0000 mgr.y (mgr.24422) 531 : cluster [DBG] pgmap v911: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:32 vm07 bash[23367]: audit 2026-03-10T10:27:31.345423+0000 mon.a (mon.0) 3162 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-120","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:32 vm07 bash[23367]: cluster 2026-03-10T10:27:31.351513+0000 mon.a (mon.0) 3163 : cluster [DBG] osdmap e589: 8 total, 8 up, 8 in
2026-03-10T10:27:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:32 vm07 bash[23367]: audit 2026-03-10T10:27:31.394358+0000 mon.a (mon.0) 3164 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:32 vm07 bash[23367]: cluster 2026-03-10T10:27:31.539938+0000 mon.a (mon.0) 3165 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:32.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:32 vm07 bash[23367]: audit 2026-03-10T10:27:31.543696+0000 mon.a (mon.0) 3166 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:32.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:32 vm07 bash[23367]: cluster 2026-03-10T10:27:31.547245+0000 mon.a (mon.0) 3167 : cluster [DBG] osdmap e590: 8 total, 8 up, 8 in
2026-03-10T10:27:32.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:32 vm07 bash[23367]: audit 2026-03-10T10:27:31.547921+0000 mon.a (mon.0) 3168 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-120"}]: dispatch
2026-03-10T10:27:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:27:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:27:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:33 vm04 bash[28289]: cluster 2026-03-10T10:27:32.534268+0000 mgr.y (mgr.24422) 532 : cluster [DBG] pgmap v914: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:33 vm04 bash[28289]: audit 2026-03-10T10:27:32.547328+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-120"}]': finished
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:33 vm04 bash[28289]: cluster 2026-03-10T10:27:32.550668+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:33 vm04 bash[28289]: audit 2026-03-10T10:27:32.551070+0000 mon.a (mon.0) 3171 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-120", "mode": "writeback"}]: dispatch
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:33 vm04 bash[28289]: audit 2026-03-10T10:27:32.873925+0000 mon.a (mon.0) 3172 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:33 vm04 bash[28289]: audit 2026-03-10T10:27:33.223786+0000 mon.a (mon.0) 3173 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:33 vm04 bash[28289]: audit 2026-03-10T10:27:33.224395+0000 mon.a (mon.0) 3174 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:33 vm04 bash[28289]: audit 2026-03-10T10:27:33.230548+0000 mon.a (mon.0) 3175 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:33 vm04 bash[20742]: cluster 2026-03-10T10:27:32.534268+0000 mgr.y (mgr.24422) 532 : cluster [DBG] pgmap v914: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:33 vm04 bash[20742]: audit 2026-03-10T10:27:32.547328+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-120"}]': finished
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:33 vm04 bash[20742]: cluster 2026-03-10T10:27:32.550668+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:33 vm04 bash[20742]: audit 2026-03-10T10:27:32.551070+0000 mon.a (mon.0) 3171 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-120", "mode": "writeback"}]: dispatch
2026-03-10T10:27:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:33 vm04 bash[20742]: audit 2026-03-10T10:27:32.873925+0000 mon.a (mon.0) 3172 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:27:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:33 vm04 bash[20742]: audit 2026-03-10T10:27:33.223786+0000 mon.a (mon.0) 3173 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:27:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:33 vm04 bash[20742]: audit 2026-03-10T10:27:33.224395+0000 mon.a (mon.0) 3174 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:27:33.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:33 vm04 bash[20742]: audit 2026-03-10T10:27:33.230548+0000 mon.a (mon.0) 3175 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: cluster 2026-03-10T10:27:32.534268+0000 mgr.y (mgr.24422) 532 : cluster [DBG] pgmap v914: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 973 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: audit 2026-03-10T10:27:32.547328+0000 mon.a (mon.0) 3169 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-120"}]': finished
2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: cluster 2026-03-10T10:27:32.550668+0000 mon.a (mon.0) 3170 : cluster [DBG] osdmap e591: 8 total, 8 up, 8 in
2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: audit 2026-03-10T10:27:32.551070+0000 mon.a (mon.0) 3171 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-120", "mode": "writeback"}]: dispatch 2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: audit 2026-03-10T10:27:32.873925+0000 mon.a (mon.0) 3172 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: audit 2026-03-10T10:27:32.873925+0000 mon.a (mon.0) 3172 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: audit 2026-03-10T10:27:33.223786+0000 mon.a (mon.0) 3173 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: audit 2026-03-10T10:27:33.223786+0000 mon.a (mon.0) 3173 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: audit 2026-03-10T10:27:33.224395+0000 mon.a (mon.0) 3174 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: audit 2026-03-10T10:27:33.224395+0000 mon.a (mon.0) 3174 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: audit 2026-03-10T10:27:33.230548+0000 mon.a (mon.0) 3175 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:27:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:33 vm07 bash[23367]: audit 2026-03-10T10:27:33.230548+0000 mon.a (mon.0) 3175 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:34 vm04 bash[28289]: cluster 2026-03-10T10:27:33.547402+0000 mon.a (mon.0) 3176 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:34 vm04 bash[28289]: cluster 2026-03-10T10:27:33.547402+0000 mon.a (mon.0) 3176 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:34 vm04 bash[28289]: audit 2026-03-10T10:27:33.551194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-120", "mode": "writeback"}]': finished 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:34 vm04 bash[28289]: audit 2026-03-10T10:27:33.551194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-120", "mode": "writeback"}]': finished 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:34 vm04 bash[28289]: cluster 2026-03-10T10:27:33.557617+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:34 vm04 bash[28289]: cluster 2026-03-10T10:27:33.557617+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:34 vm04 bash[28289]: audit 2026-03-10T10:27:33.623037+0000 mon.a (mon.0) 3179 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:34 vm04 bash[28289]: audit 2026-03-10T10:27:33.623037+0000 mon.a (mon.0) 3179 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:34 vm04 bash[20742]: cluster 2026-03-10T10:27:33.547402+0000 mon.a (mon.0) 3176 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:34 vm04 bash[20742]: cluster 2026-03-10T10:27:33.547402+0000 mon.a (mon.0) 3176 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:34 vm04 bash[20742]: audit 2026-03-10T10:27:33.551194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-120", "mode": "writeback"}]': finished 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:34 vm04 bash[20742]: audit 2026-03-10T10:27:33.551194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-120", "mode": "writeback"}]': finished 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:34 vm04 bash[20742]: cluster 2026-03-10T10:27:33.557617+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:34 vm04 bash[20742]: cluster 2026-03-10T10:27:33.557617+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:34 vm04 bash[20742]: audit 2026-03-10T10:27:33.623037+0000 mon.a (mon.0) 3179 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:34 vm04 bash[20742]: audit 2026-03-10T10:27:33.623037+0000 mon.a (mon.0) 3179 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:34 vm07 bash[23367]: cluster 2026-03-10T10:27:33.547402+0000 mon.a (mon.0) 3176 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:34 vm07 bash[23367]: cluster 2026-03-10T10:27:33.547402+0000 mon.a (mon.0) 3176 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:34 vm07 bash[23367]: audit 2026-03-10T10:27:33.551194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-120", "mode": "writeback"}]': finished 2026-03-10T10:27:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:34 vm07 bash[23367]: audit 2026-03-10T10:27:33.551194+0000 mon.a (mon.0) 3177 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-120", "mode": "writeback"}]': finished 2026-03-10T10:27:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:34 vm07 bash[23367]: cluster 2026-03-10T10:27:33.557617+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-10T10:27:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:34 vm07 bash[23367]: cluster 2026-03-10T10:27:33.557617+0000 mon.a (mon.0) 3178 : cluster [DBG] osdmap e592: 8 total, 8 up, 8 in 2026-03-10T10:27:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:34 vm07 bash[23367]: audit 2026-03-10T10:27:33.623037+0000 mon.a (mon.0) 3179 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:34 vm07 bash[23367]: audit 2026-03-10T10:27:33.623037+0000 mon.a (mon.0) 3179 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:35 vm04 bash[28289]: cluster 2026-03-10T10:27:34.534952+0000 mgr.y (mgr.24422) 533 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:35 vm04 bash[28289]: cluster 2026-03-10T10:27:34.534952+0000 mgr.y (mgr.24422) 533 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:35 vm04 bash[28289]: audit 2026-03-10T10:27:34.570915+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:35 vm04 bash[28289]: audit 2026-03-10T10:27:34.570915+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:35 vm04 bash[28289]: cluster 2026-03-10T10:27:34.577146+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:35 vm04 bash[28289]: cluster 2026-03-10T10:27:34.577146+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:35 vm04 bash[28289]: audit 2026-03-10T10:27:34.577781+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]: dispatch 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:35 vm04 bash[28289]: audit 2026-03-10T10:27:34.577781+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]: dispatch 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:35 vm04 bash[20742]: cluster 2026-03-10T10:27:34.534952+0000 mgr.y (mgr.24422) 533 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:35 vm04 bash[20742]: cluster 2026-03-10T10:27:34.534952+0000 mgr.y (mgr.24422) 533 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:35 vm04 bash[20742]: audit 2026-03-10T10:27:34.570915+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:35 vm04 bash[20742]: audit 2026-03-10T10:27:34.570915+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:35 vm04 bash[20742]: cluster 2026-03-10T10:27:34.577146+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:35 vm04 bash[20742]: cluster 2026-03-10T10:27:34.577146+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:35 vm04 bash[20742]: audit 2026-03-10T10:27:34.577781+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]: dispatch 2026-03-10T10:27:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:35 vm04 bash[20742]: audit 2026-03-10T10:27:34.577781+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]: dispatch 2026-03-10T10:27:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:35 vm07 bash[23367]: cluster 2026-03-10T10:27:34.534952+0000 mgr.y (mgr.24422) 533 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:35 vm07 bash[23367]: cluster 2026-03-10T10:27:34.534952+0000 mgr.y (mgr.24422) 533 : cluster [DBG] pgmap v917: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:35 vm07 bash[23367]: audit 2026-03-10T10:27:34.570915+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:35 vm07 bash[23367]: audit 2026-03-10T10:27:34.570915+0000 mon.a (mon.0) 3180 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:27:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:35 vm07 bash[23367]: cluster 2026-03-10T10:27:34.577146+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-10T10:27:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:35 vm07 bash[23367]: cluster 2026-03-10T10:27:34.577146+0000 mon.a (mon.0) 3181 : cluster [DBG] osdmap e593: 8 total, 8 up, 8 in 2026-03-10T10:27:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:35 vm07 bash[23367]: audit 2026-03-10T10:27:34.577781+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]: dispatch 2026-03-10T10:27:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:35 vm07 bash[23367]: audit 2026-03-10T10:27:34.577781+0000 mon.a (mon.0) 3182 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]: dispatch 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:36 vm04 bash[28289]: cluster 2026-03-10T10:27:35.571171+0000 mon.a (mon.0) 3183 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:36 vm04 bash[28289]: cluster 2026-03-10T10:27:35.571171+0000 mon.a (mon.0) 3183 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:36 vm04 bash[28289]: audit 2026-03-10T10:27:35.575166+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]': finished 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:36 vm04 bash[28289]: audit 2026-03-10T10:27:35.575166+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]': finished 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:36 vm04 bash[28289]: cluster 2026-03-10T10:27:35.580831+0000 mon.a (mon.0) 3185 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:36 vm04 bash[28289]: cluster 2026-03-10T10:27:35.580831+0000 mon.a (mon.0) 3185 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:36 vm04 bash[28289]: cluster 2026-03-10T10:27:36.541862+0000 mon.a (mon.0) 3186 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:36 vm04 bash[28289]: cluster 2026-03-10T10:27:36.541862+0000 mon.a (mon.0) 3186 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:36 vm04 bash[20742]: cluster 2026-03-10T10:27:35.571171+0000 mon.a (mon.0) 3183 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:36 vm04 bash[20742]: cluster 2026-03-10T10:27:35.571171+0000 mon.a (mon.0) 3183 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:36 vm04 bash[20742]: audit 2026-03-10T10:27:35.575166+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]': finished 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:36 vm04 bash[20742]: audit 2026-03-10T10:27:35.575166+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]': finished 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:36 vm04 bash[20742]: cluster 2026-03-10T10:27:35.580831+0000 mon.a (mon.0) 3185 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:36 vm04 bash[20742]: cluster 2026-03-10T10:27:35.580831+0000 mon.a (mon.0) 3185 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:36 vm04 bash[20742]: cluster 2026-03-10T10:27:36.541862+0000 mon.a (mon.0) 3186 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:36 vm04 bash[20742]: cluster 2026-03-10T10:27:36.541862+0000 mon.a (mon.0) 3186 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:36 vm07 bash[23367]: cluster 2026-03-10T10:27:35.571171+0000 mon.a (mon.0) 3183 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:36 vm07 bash[23367]: cluster 2026-03-10T10:27:35.571171+0000 mon.a (mon.0) 3183 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:27:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:36 vm07 bash[23367]: audit 2026-03-10T10:27:35.575166+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]': finished 2026-03-10T10:27:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:36 vm07 bash[23367]: audit 2026-03-10T10:27:35.575166+0000 mon.a (mon.0) 3184 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-120"}]': finished 2026-03-10T10:27:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:36 vm07 bash[23367]: cluster 2026-03-10T10:27:35.580831+0000 mon.a (mon.0) 3185 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-10T10:27:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:36 vm07 bash[23367]: cluster 2026-03-10T10:27:35.580831+0000 mon.a (mon.0) 3185 : cluster [DBG] osdmap e594: 8 total, 8 up, 8 in 2026-03-10T10:27:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:36 vm07 bash[23367]: cluster 2026-03-10T10:27:36.541862+0000 mon.a (mon.0) 3186 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:36 vm07 bash[23367]: cluster 2026-03-10T10:27:36.541862+0000 mon.a (mon.0) 3186 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:27:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:37 vm04 bash[28289]: cluster 2026-03-10T10:27:36.535367+0000 mgr.y (mgr.24422) 534 : cluster [DBG] pgmap v920: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:37 vm04 bash[28289]: cluster 2026-03-10T10:27:36.535367+0000 mgr.y (mgr.24422) 534 : cluster [DBG] pgmap v920: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:37 vm04 bash[28289]: cluster 2026-03-10T10:27:36.612501+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-10T10:27:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:37 vm04 bash[28289]: cluster 2026-03-10T10:27:36.612501+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-10T10:27:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:37 vm04 bash[20742]: cluster 2026-03-10T10:27:36.535367+0000 mgr.y (mgr.24422) 534 : cluster [DBG] pgmap v920: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:37 vm04 bash[20742]: cluster 2026-03-10T10:27:36.535367+0000 mgr.y (mgr.24422) 534 : cluster [DBG] pgmap v920: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:37 vm04 bash[20742]: cluster 2026-03-10T10:27:36.612501+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-10T10:27:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:37 vm04 bash[20742]: cluster 2026-03-10T10:27:36.612501+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-10T10:27:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:37 vm07 bash[23367]: cluster 2026-03-10T10:27:36.535367+0000 mgr.y (mgr.24422) 534 : cluster [DBG] pgmap v920: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:37 vm07 bash[23367]: cluster 
2026-03-10T10:27:36.535367+0000 mgr.y (mgr.24422) 534 : cluster [DBG] pgmap v920: 268 pgs: 268 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:37 vm07 bash[23367]: cluster 2026-03-10T10:27:36.612501+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-10T10:27:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:37 vm07 bash[23367]: cluster 2026-03-10T10:27:36.612501+0000 mon.a (mon.0) 3187 : cluster [DBG] osdmap e595: 8 total, 8 up, 8 in 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:38 vm04 bash[28289]: cluster 2026-03-10T10:27:37.622881+0000 mon.a (mon.0) 3188 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:38 vm04 bash[28289]: cluster 2026-03-10T10:27:37.622881+0000 mon.a (mon.0) 3188 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:38 vm04 bash[28289]: audit 2026-03-10T10:27:37.626454+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:38 vm04 bash[28289]: audit 2026-03-10T10:27:37.626454+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:38 vm04 bash[28289]: audit 2026-03-10T10:27:38.614295+0000 mon.a (mon.0) 3190 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:38 vm04 bash[28289]: audit 2026-03-10T10:27:38.614295+0000 mon.a (mon.0) 3190 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:38 vm04 bash[28289]: cluster 2026-03-10T10:27:38.619999+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:38 vm04 bash[28289]: cluster 2026-03-10T10:27:38.619999+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:38 vm04 bash[20742]: cluster 2026-03-10T10:27:37.622881+0000 mon.a (mon.0) 3188 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:38 vm04 bash[20742]: cluster 2026-03-10T10:27:37.622881+0000 mon.a (mon.0) 3188 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:38 vm04 bash[20742]: audit 2026-03-10T10:27:37.626454+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:38 vm04 bash[20742]: audit 2026-03-10T10:27:37.626454+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:38 vm04 bash[20742]: audit 2026-03-10T10:27:38.614295+0000 mon.a (mon.0) 3190 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:38 vm04 bash[20742]: audit 2026-03-10T10:27:38.614295+0000 mon.a (mon.0) 3190 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:27:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:38 vm04 bash[20742]: cluster 2026-03-10T10:27:38.619999+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-10T10:27:38.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:38 vm04 bash[20742]: cluster 2026-03-10T10:27:38.619999+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-10T10:27:39.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:27:38 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:27:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:38 vm07 bash[23367]: cluster 2026-03-10T10:27:37.622881+0000 mon.a (mon.0) 3188 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-10T10:27:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:38 vm07 bash[23367]: cluster 2026-03-10T10:27:37.622881+0000 mon.a (mon.0) 3188 : cluster [DBG] osdmap e596: 8 total, 8 up, 8 in 2026-03-10T10:27:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:38 vm07 bash[23367]: audit 2026-03-10T10:27:37.626454+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:27:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:38 vm07 bash[23367]: audit 2026-03-10T10:27:37.626454+0000 mon.a (mon.0) 3189 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:27:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:38 vm07 bash[23367]: audit 2026-03-10T10:27:38.614295+0000 mon.a (mon.0) 3190 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:27:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:38 vm07 bash[23367]: audit 2026-03-10T10:27:38.614295+0000 mon.a (mon.0) 3190 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-122","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:27:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:38 vm07 bash[23367]: cluster 2026-03-10T10:27:38.619999+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-10T10:27:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:38 vm07 bash[23367]: cluster 2026-03-10T10:27:38.619999+0000 mon.a (mon.0) 3191 : cluster [DBG] osdmap e597: 8 total, 8 up, 8 in 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: cluster 2026-03-10T10:27:38.536094+0000 mgr.y (mgr.24422) 535 : cluster [DBG] pgmap v923: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: cluster 2026-03-10T10:27:38.536094+0000 mgr.y (mgr.24422) 535 : cluster [DBG] pgmap v923: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: audit 2026-03-10T10:27:38.672943+0000 mon.a (mon.0) 3192 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: audit 2026-03-10T10:27:38.672943+0000 mon.a (mon.0) 3192 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: audit 2026-03-10T10:27:38.788605+0000 mgr.y (mgr.24422) 536 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: audit 2026-03-10T10:27:38.788605+0000 mgr.y (mgr.24422) 536 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: audit 2026-03-10T10:27:39.618812+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: audit 2026-03-10T10:27:39.618812+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: cluster 2026-03-10T10:27:39.622266+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: cluster 2026-03-10T10:27:39.622266+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: audit 2026-03-10T10:27:39.623254+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:39 vm04 bash[28289]: audit 2026-03-10T10:27:39.623254+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: cluster 2026-03-10T10:27:38.536094+0000 mgr.y (mgr.24422) 535 : cluster [DBG] pgmap v923: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: cluster 2026-03-10T10:27:38.536094+0000 mgr.y (mgr.24422) 535 : cluster [DBG] pgmap v923: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: audit 2026-03-10T10:27:38.672943+0000 mon.a (mon.0) 3192 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: audit 2026-03-10T10:27:38.672943+0000 mon.a (mon.0) 3192 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: audit 2026-03-10T10:27:38.788605+0000 mgr.y (mgr.24422) 536 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: audit 2026-03-10T10:27:38.788605+0000 mgr.y (mgr.24422) 536 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: audit 2026-03-10T10:27:39.618812+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: audit 2026-03-10T10:27:39.618812+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: cluster 2026-03-10T10:27:39.622266+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: cluster 2026-03-10T10:27:39.622266+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: audit 2026-03-10T10:27:39.623254+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]: dispatch 2026-03-10T10:27:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:39 vm04 bash[20742]: audit 2026-03-10T10:27:39.623254+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]: dispatch 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: cluster 2026-03-10T10:27:38.536094+0000 mgr.y (mgr.24422) 535 : cluster [DBG] pgmap v923: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: cluster 2026-03-10T10:27:38.536094+0000 mgr.y (mgr.24422) 535 : cluster [DBG] pgmap v923: 268 pgs: 18 creating+peering, 14 unknown, 236 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: audit 2026-03-10T10:27:38.672943+0000 mon.a (mon.0) 3192 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: audit 2026-03-10T10:27:38.672943+0000 mon.a (mon.0) 3192 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: audit 2026-03-10T10:27:38.788605+0000 mgr.y (mgr.24422) 536 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: audit 2026-03-10T10:27:38.788605+0000 mgr.y (mgr.24422) 536 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: audit 2026-03-10T10:27:39.618812+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: audit 2026-03-10T10:27:39.618812+0000 mon.a (mon.0) 3193 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: cluster 2026-03-10T10:27:39.622266+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: cluster 2026-03-10T10:27:39.622266+0000 mon.a (mon.0) 3194 : cluster [DBG] osdmap e598: 8 total, 8 up, 8 in 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: audit 2026-03-10T10:27:39.623254+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]: dispatch 2026-03-10T10:27:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:39 vm07 bash[23367]: audit 2026-03-10T10:27:39.623254+0000 mon.a (mon.0) 3195 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]: dispatch 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:41 vm04 bash[28289]: cluster 2026-03-10T10:27:40.536499+0000 mgr.y (mgr.24422) 537 : cluster [DBG] pgmap v926: 268 pgs: 3 creating+activating, 18 creating+peering, 247 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:41 vm04 bash[28289]: cluster 2026-03-10T10:27:40.536499+0000 mgr.y (mgr.24422) 537 : cluster [DBG] pgmap v926: 268 pgs: 3 creating+activating, 18 creating+peering, 247 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:41 vm04 bash[28289]: audit 2026-03-10T10:27:40.624976+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]': finished 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:41 vm04 bash[28289]: audit 2026-03-10T10:27:40.624976+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]': finished 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:41 vm04 bash[28289]: cluster 2026-03-10T10:27:40.628262+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:41 vm04 bash[28289]: cluster 2026-03-10T10:27:40.628262+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:41 vm04 bash[28289]: audit 2026-03-10T10:27:40.628648+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-122", "mode": "writeback"}]: dispatch 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:41 vm04 bash[28289]: audit 2026-03-10T10:27:40.628648+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-122", "mode": "writeback"}]: dispatch 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:41 vm04 bash[20742]: cluster 2026-03-10T10:27:40.536499+0000 mgr.y (mgr.24422) 537 : cluster [DBG] pgmap v926: 268 pgs: 3 creating+activating, 18 creating+peering, 247 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:41 vm04 bash[20742]: cluster 2026-03-10T10:27:40.536499+0000 mgr.y (mgr.24422) 537 : cluster [DBG] pgmap v926: 268 pgs: 3 creating+activating, 18 creating+peering, 247 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:41 vm04 bash[20742]: audit 2026-03-10T10:27:40.624976+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]': finished 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:41 vm04 bash[20742]: audit 2026-03-10T10:27:40.624976+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]': finished 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:41 vm04 bash[20742]: cluster 2026-03-10T10:27:40.628262+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:41 vm04 bash[20742]: cluster 2026-03-10T10:27:40.628262+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:41 vm04 bash[20742]: audit 2026-03-10T10:27:40.628648+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-122", "mode": "writeback"}]: dispatch 2026-03-10T10:27:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:41 vm04 bash[20742]: audit 2026-03-10T10:27:40.628648+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-122", "mode": "writeback"}]: dispatch 2026-03-10T10:27:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:41 vm07 bash[23367]: cluster 2026-03-10T10:27:40.536499+0000 mgr.y (mgr.24422) 537 : cluster [DBG] pgmap v926: 268 pgs: 3 creating+activating, 18 creating+peering, 247 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:41 vm07 bash[23367]: cluster 2026-03-10T10:27:40.536499+0000 mgr.y (mgr.24422) 537 : cluster [DBG] pgmap v926: 268 pgs: 3 creating+activating, 18 creating+peering, 247 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:27:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:41 vm07 bash[23367]: audit 2026-03-10T10:27:40.624976+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]': finished 2026-03-10T10:27:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:41 vm07 bash[23367]: audit 2026-03-10T10:27:40.624976+0000 mon.a (mon.0) 3196 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-122"}]': finished 2026-03-10T10:27:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:41 vm07 bash[23367]: cluster 2026-03-10T10:27:40.628262+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T10:27:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:41 vm07 bash[23367]: cluster 2026-03-10T10:27:40.628262+0000 mon.a (mon.0) 3197 : cluster [DBG] osdmap e599: 8 total, 8 up, 8 in 2026-03-10T10:27:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:41 vm07 bash[23367]: audit 2026-03-10T10:27:40.628648+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-122", "mode": "writeback"}]: dispatch 2026-03-10T10:27:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:41 vm07 bash[23367]: audit 2026-03-10T10:27:40.628648+0000 mon.a (mon.0) 3198 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-122", "mode": "writeback"}]: dispatch 2026-03-10T10:27:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:42 vm04 bash[28289]: cluster 2026-03-10T10:27:41.625161+0000 mon.a (mon.0) 3199 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:42 vm04 bash[28289]: cluster 2026-03-10T10:27:41.625161+0000 mon.a (mon.0) 3199 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:27:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:42 vm04 bash[28289]: audit 2026-03-10T10:27:41.628627+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? 
2026-03-10T10:27:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:42 vm04 bash[28289]: cluster 2026-03-10T10:27:41.637429+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in
2026-03-10T10:27:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:42 vm04 bash[28289]: audit 2026-03-10T10:27:41.700019+0000 mon.a (mon.0) 3202 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:27:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:42 vm04 bash[20742]: cluster 2026-03-10T10:27:41.625161+0000 mon.a (mon.0) 3199 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:27:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:42 vm04 bash[20742]: audit 2026-03-10T10:27:41.628627+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-122", "mode": "writeback"}]': finished
2026-03-10T10:27:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:42 vm04 bash[20742]: cluster 2026-03-10T10:27:41.637429+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in
2026-03-10T10:27:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:42 vm04 bash[20742]: audit 2026-03-10T10:27:41.700019+0000 mon.a (mon.0) 3202 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:27:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:42 vm07 bash[23367]: cluster 2026-03-10T10:27:41.625161+0000 mon.a (mon.0) 3199 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:27:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:42 vm07 bash[23367]: audit 2026-03-10T10:27:41.628627+0000 mon.a (mon.0) 3200 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-122", "mode": "writeback"}]': finished
2026-03-10T10:27:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:42 vm07 bash[23367]: cluster 2026-03-10T10:27:41.637429+0000 mon.a (mon.0) 3201 : cluster [DBG] osdmap e600: 8 total, 8 up, 8 in
2026-03-10T10:27:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:42 vm07 bash[23367]: audit 2026-03-10T10:27:41.700019+0000 mon.a (mon.0) 3202 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:27:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:27:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:27:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:27:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:43 vm07 bash[23367]: cluster 2026-03-10T10:27:42.536774+0000 mgr.y (mgr.24422) 538 : cluster [DBG] pgmap v929: 268 pgs: 3 creating+activating, 18 creating+peering, 247 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:43 vm07 bash[23367]: audit 2026-03-10T10:27:42.695867+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:27:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:43 vm07 bash[23367]: cluster 2026-03-10T10:27:42.704497+0000 mon.a (mon.0) 3204 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in
2026-03-10T10:27:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:43 vm07 bash[23367]: audit 2026-03-10T10:27:42.705326+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122"}]: dispatch
2026-03-10T10:27:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:43 vm07 bash[23367]: audit 2026-03-10T10:27:43.099290+0000 mon.a (mon.0) 3206 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:43 vm04 bash[28289]: cluster 2026-03-10T10:27:42.536774+0000 mgr.y (mgr.24422) 538 : cluster [DBG] pgmap v929: 268 pgs: 3 creating+activating, 18 creating+peering, 247 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:43 vm04 bash[28289]: audit 2026-03-10T10:27:42.695867+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:27:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:43 vm04 bash[28289]: cluster 2026-03-10T10:27:42.704497+0000 mon.a (mon.0) 3204 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in
2026-03-10T10:27:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:43 vm04 bash[28289]: audit 2026-03-10T10:27:42.705326+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122"}]: dispatch
2026-03-10T10:27:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:43 vm04 bash[28289]: audit 2026-03-10T10:27:43.099290+0000 mon.a (mon.0) 3206 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:43 vm04 bash[20742]: cluster 2026-03-10T10:27:42.536774+0000 mgr.y (mgr.24422) 538 : cluster [DBG] pgmap v929: 268 pgs: 3 creating+activating, 18 creating+peering, 247 active+clean; 455 KiB data, 974 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:43 vm04 bash[20742]: audit 2026-03-10T10:27:42.695867+0000 mon.a (mon.0) 3203 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:27:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:43 vm04 bash[20742]: cluster 2026-03-10T10:27:42.704497+0000 mon.a (mon.0) 3204 : cluster [DBG] osdmap e601: 8 total, 8 up, 8 in
2026-03-10T10:27:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:43 vm04 bash[20742]: audit 2026-03-10T10:27:42.705326+0000 mon.a (mon.0) 3205 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122"}]: dispatch
2026-03-10T10:27:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:43 vm04 bash[20742]: audit 2026-03-10T10:27:43.099290+0000 mon.a (mon.0) 3206 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:45.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:44 vm07 bash[23367]: cluster 2026-03-10T10:27:43.755451+0000 mon.a (mon.0) 3207 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:27:45.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:44 vm07 bash[23367]: audit 2026-03-10T10:27:43.765930+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122"}]': finished
2026-03-10T10:27:45.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:44 vm07 bash[23367]: cluster 2026-03-10T10:27:43.769224+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in
2026-03-10T10:27:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:44 vm04 bash[28289]: cluster 2026-03-10T10:27:43.755451+0000 mon.a (mon.0) 3207 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:27:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:44 vm04 bash[28289]: audit 2026-03-10T10:27:43.765930+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122"}]': finished
2026-03-10T10:27:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:44 vm04 bash[28289]: cluster 2026-03-10T10:27:43.769224+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in
2026-03-10T10:27:45.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:44 vm04 bash[20742]: cluster 2026-03-10T10:27:43.755451+0000 mon.a (mon.0) 3207 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:27:45.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:44 vm04 bash[20742]: audit 2026-03-10T10:27:43.765930+0000 mon.a (mon.0) 3208 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-122"}]': finished
2026-03-10T10:27:45.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:44 vm04 bash[20742]: cluster 2026-03-10T10:27:43.769224+0000 mon.a (mon.0) 3209 : cluster [DBG] osdmap e602: 8 total, 8 up, 8 in
2026-03-10T10:27:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:45 vm04 bash[28289]: cluster 2026-03-10T10:27:44.537088+0000 mgr.y (mgr.24422) 539 : cluster [DBG] pgmap v932: 268 pgs: 268 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s
2026-03-10T10:27:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:45 vm04 bash[28289]: cluster 2026-03-10T10:27:44.778243+0000 mon.a (mon.0) 3210 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:45 vm04 bash[28289]: cluster 2026-03-10T10:27:44.790199+0000 mon.a (mon.0) 3211 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in
2026-03-10T10:27:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:45 vm04 bash[20742]: cluster 2026-03-10T10:27:44.537088+0000 mgr.y (mgr.24422) 539 : cluster [DBG] pgmap v932: 268 pgs: 268 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s
2026-03-10T10:27:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:45 vm04 bash[20742]: cluster 2026-03-10T10:27:44.778243+0000 mon.a (mon.0) 3210 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:45 vm04 bash[20742]: cluster 2026-03-10T10:27:44.790199+0000 mon.a (mon.0) 3211 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in
2026-03-10T10:27:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:45 vm07 bash[23367]: cluster 2026-03-10T10:27:44.537088+0000 mgr.y (mgr.24422) 539 : cluster [DBG] pgmap v932: 268 pgs: 268 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s
2026-03-10T10:27:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:45 vm07 bash[23367]: cluster 2026-03-10T10:27:44.778243+0000 mon.a (mon.0) 3210 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:45 vm07 bash[23367]: cluster 2026-03-10T10:27:44.790199+0000 mon.a (mon.0) 3211 : cluster [DBG] osdmap e603: 8 total, 8 up, 8 in
2026-03-10T10:27:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:46 vm04 bash[28289]: cluster 2026-03-10T10:27:45.811926+0000 mon.a (mon.0) 3212 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in
2026-03-10T10:27:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:46 vm04 bash[28289]: audit 2026-03-10T10:27:45.814020+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-124","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:46 vm04 bash[20742]: cluster 2026-03-10T10:27:45.811926+0000 mon.a (mon.0) 3212 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in
2026-03-10T10:27:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:46 vm04 bash[20742]: audit 2026-03-10T10:27:45.814020+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-124","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:46 vm07 bash[23367]: cluster 2026-03-10T10:27:45.811926+0000 mon.a (mon.0) 3212 : cluster [DBG] osdmap e604: 8 total, 8 up, 8 in
2026-03-10T10:27:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:46 vm07 bash[23367]: audit 2026-03-10T10:27:45.814020+0000 mon.a (mon.0) 3213 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-124","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:47 vm04 bash[28289]: cluster 2026-03-10T10:27:46.537439+0000 mgr.y (mgr.24422) 540 : cluster [DBG] pgmap v935: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:27:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:47 vm04 bash[28289]: audit 2026-03-10T10:27:46.824095+0000 mon.a (mon.0) 3214 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-124","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:47 vm04 bash[28289]: cluster 2026-03-10T10:27:46.829959+0000 mon.a (mon.0) 3215 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in
2026-03-10T10:27:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:47 vm04 bash[20742]: cluster 2026-03-10T10:27:46.537439+0000 mgr.y (mgr.24422) 540 : cluster [DBG] pgmap v935: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:27:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:47 vm04 bash[20742]: audit 2026-03-10T10:27:46.824095+0000 mon.a (mon.0) 3214 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-124","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:47 vm04 bash[20742]: cluster 2026-03-10T10:27:46.829959+0000 mon.a (mon.0) 3215 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in
2026-03-10T10:27:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:47 vm07 bash[23367]: cluster 2026-03-10T10:27:46.537439+0000 mgr.y (mgr.24422) 540 : cluster [DBG] pgmap v935: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:27:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:47 vm07 bash[23367]: audit 2026-03-10T10:27:46.824095+0000 mon.a (mon.0) 3214 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-124","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:47 vm07 bash[23367]: cluster 2026-03-10T10:27:46.829959+0000 mon.a (mon.0) 3215 : cluster [DBG] osdmap e605: 8 total, 8 up, 8 in
2026-03-10T10:27:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:48 vm04 bash[28289]: cluster 2026-03-10T10:27:47.849165+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in
2026-03-10T10:27:49.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:48 vm04 bash[28289]: audit 2026-03-10T10:27:47.876713+0000 mon.a (mon.0) 3217 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:48 vm04 bash[20742]: cluster 2026-03-10T10:27:47.849165+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in
2026-03-10T10:27:49.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:48 vm04 bash[20742]: audit 2026-03-10T10:27:47.876713+0000 mon.a (mon.0) 3217 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:49.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:27:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:27:49.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:48 vm07 bash[23367]: cluster 2026-03-10T10:27:47.849165+0000 mon.a (mon.0) 3216 : cluster [DBG] osdmap e606: 8 total, 8 up, 8 in
2026-03-10T10:27:49.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:48 vm07 bash[23367]: audit 2026-03-10T10:27:47.876713+0000 mon.a (mon.0) 3217 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:49 vm04 bash[28289]: cluster 2026-03-10T10:27:48.538038+0000 mgr.y (mgr.24422) 541 : cluster [DBG] pgmap v938: 268 pgs: 20 unknown, 248 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:27:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:49 vm04 bash[28289]: audit 2026-03-10T10:27:48.791305+0000 mgr.y (mgr.24422) 542 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:49 vm04 bash[28289]: audit 2026-03-10T10:27:48.875710+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:49 vm04 bash[28289]: cluster 2026-03-10T10:27:48.879831+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in
2026-03-10T10:27:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:49 vm04 bash[28289]: audit 2026-03-10T10:27:48.880424+0000 mon.a (mon.0) 3220 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-124"}]: dispatch
2026-03-10T10:27:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:49 vm04 bash[20742]: cluster 2026-03-10T10:27:48.538038+0000 mgr.y (mgr.24422) 541 : cluster [DBG] pgmap v938: 268 pgs: 20 unknown, 248 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:27:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:49 vm04 bash[20742]: audit 2026-03-10T10:27:48.791305+0000 mgr.y (mgr.24422) 542 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:49 vm04 bash[20742]: audit 2026-03-10T10:27:48.875710+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:49 vm04 bash[20742]: cluster 2026-03-10T10:27:48.879831+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in
2026-03-10T10:27:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:49 vm04 bash[20742]: audit 2026-03-10T10:27:48.880424+0000 mon.a (mon.0) 3220 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-124"}]: dispatch
2026-03-10T10:27:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:49 vm07 bash[23367]: cluster 2026-03-10T10:27:48.538038+0000 mgr.y (mgr.24422) 541 : cluster [DBG] pgmap v938: 268 pgs: 20 unknown, 248 active+clean; 455 KiB data, 975 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:27:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:49 vm07 bash[23367]: audit 2026-03-10T10:27:48.791305+0000 mgr.y (mgr.24422) 542 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:27:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:49 vm07 bash[23367]: audit 2026-03-10T10:27:48.875710+0000 mon.a (mon.0) 3218 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:49 vm07 bash[23367]: cluster 2026-03-10T10:27:48.879831+0000 mon.a (mon.0) 3219 : cluster [DBG] osdmap e607: 8 total, 8 up, 8 in
2026-03-10T10:27:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:49 vm07 bash[23367]: audit 2026-03-10T10:27:48.880424+0000 mon.a (mon.0) 3220 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-124"}]: dispatch
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:50 vm04 bash[28289]: audit 2026-03-10T10:27:49.879476+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-124"}]': finished
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:50 vm04 bash[28289]: cluster 2026-03-10T10:27:49.882275+0000 mon.a (mon.0) 3222 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:50 vm04 bash[28289]: audit 2026-03-10T10:27:49.885361+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-124", "mode": "writeback"}]: dispatch
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:50 vm04 bash[28289]: cluster 2026-03-10T10:27:50.879619+0000 mon.a (mon.0) 3224 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:50 vm04 bash[28289]: audit 2026-03-10T10:27:50.882761+0000 mon.a (mon.0) 3225 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-124", "mode": "writeback"}]': finished
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:50 vm04 bash[28289]: cluster 2026-03-10T10:27:50.885335+0000 mon.a (mon.0) 3226 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:50 vm04 bash[20742]: audit 2026-03-10T10:27:49.879476+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-124"}]': finished
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:50 vm04 bash[20742]: cluster 2026-03-10T10:27:49.882275+0000 mon.a (mon.0) 3222 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:50 vm04 bash[20742]: audit 2026-03-10T10:27:49.885361+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-124", "mode": "writeback"}]: dispatch
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:50 vm04 bash[20742]: cluster 2026-03-10T10:27:50.879619+0000 mon.a (mon.0) 3224 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:50 vm04 bash[20742]: audit 2026-03-10T10:27:50.882761+0000 mon.a (mon.0) 3225 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-124", "mode": "writeback"}]': finished
2026-03-10T10:27:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:50 vm04 bash[20742]: cluster 2026-03-10T10:27:50.885335+0000 mon.a (mon.0) 3226 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in
2026-03-10T10:27:51.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:50 vm07 bash[23367]: audit 2026-03-10T10:27:49.879476+0000 mon.a (mon.0) 3221 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-124"}]': finished
2026-03-10T10:27:51.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:50 vm07 bash[23367]: cluster 2026-03-10T10:27:49.882275+0000 mon.a (mon.0) 3222 : cluster [DBG] osdmap e608: 8 total, 8 up, 8 in
2026-03-10T10:27:51.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:50 vm07 bash[23367]: audit 2026-03-10T10:27:49.885361+0000 mon.a (mon.0) 3223 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-124", "mode": "writeback"}]: dispatch
2026-03-10T10:27:51.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:50 vm07 bash[23367]: cluster 2026-03-10T10:27:50.879619+0000 mon.a (mon.0) 3224 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:27:51.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:50 vm07 bash[23367]: audit 2026-03-10T10:27:50.882761+0000 mon.a (mon.0) 3225 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-124", "mode": "writeback"}]': finished
2026-03-10T10:27:51.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:50 vm07 bash[23367]: cluster 2026-03-10T10:27:50.885335+0000 mon.a (mon.0) 3226 : cluster [DBG] osdmap e609: 8 total, 8 up, 8 in
2026-03-10T10:27:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:51 vm04 bash[28289]: cluster 2026-03-10T10:27:50.538428+0000 mgr.y (mgr.24422) 543 : cluster [DBG] pgmap v941: 268 pgs: 268 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
2026-03-10T10:27:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:51 vm04 bash[28289]: cluster 2026-03-10T10:27:51.543873+0000 mon.a (mon.0) 3227 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:51 vm04 bash[20742]: cluster 2026-03-10T10:27:50.538428+0000 mgr.y (mgr.24422) 543 : cluster [DBG] pgmap v941: 268 pgs: 268 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
2026-03-10T10:27:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:51 vm04 bash[20742]: cluster 2026-03-10T10:27:51.543873+0000 mon.a (mon.0) 3227 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:51 vm07 bash[23367]: cluster 2026-03-10T10:27:50.538428+0000 mgr.y (mgr.24422) 543 : cluster [DBG] pgmap v941: 268 pgs: 268 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
2026-03-10T10:27:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:51 vm07 bash[23367]: cluster 2026-03-10T10:27:51.543873+0000 mon.a (mon.0) 3227 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
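The audit entries above (3221-3225) show the rados API test attaching pool test-rados-api-vm04-59491-124 as a writeback cache tier over base pool test-rados-api-vm04-59491-111; the matching "osd tier add" precedes this excerpt. A minimal CLI sketch of the same mon commands, assuming the pool names from the log (the test itself issues them through librados, not a shell):

    # Sketch: cache-tier setup matching audit 3221-3225 (illustrative only).
    base=test-rados-api-vm04-59491-111
    cache=test-rados-api-vm04-59491-124
    ceph osd tier add "$base" "$cache"            # assumed; happens before this excerpt
    ceph osd tier set-overlay "$base" "$cache"    # audit 3221: finished
    ceph osd tier cache-mode "$cache" writeback   # audit 3223 dispatch / 3225 finished

The CACHE_POOL_NO_HIT_SET warning (cluster 3224) is the expected consequence: the newly attached writeback tier has no HitSet configured yet.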
2026-03-10T10:27:53.319 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:27:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:27:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:27:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:53 vm04 bash[28289]: cluster 2026-03-10T10:27:51.920763+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in
2026-03-10T10:27:53.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:53 vm04 bash[28289]: audit 2026-03-10T10:27:51.959300+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:27:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:53 vm04 bash[20742]: cluster 2026-03-10T10:27:51.920763+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in
2026-03-10T10:27:53.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:53 vm04 bash[20742]: audit 2026-03-10T10:27:51.959300+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:27:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:53 vm07 bash[23367]: cluster 2026-03-10T10:27:51.920763+0000 mon.a (mon.0) 3228 : cluster [DBG] osdmap e610: 8 total, 8 up, 8 in
2026-03-10T10:27:53.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:53 vm07 bash[23367]: audit 2026-03-10T10:27:51.959300+0000 mon.a (mon.0) 3229 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:27:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:54 vm04 bash[28289]: cluster 2026-03-10T10:27:52.538819+0000 mgr.y (mgr.24422) 544 : cluster [DBG] pgmap v944: 268 pgs: 268 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
2026-03-10T10:27:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:54 vm04 bash[28289]: audit 2026-03-10T10:27:53.266364+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:27:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:54 vm04 bash[28289]: cluster 2026-03-10T10:27:53.558415+0000 mon.a (mon.0) 3231 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in
2026-03-10T10:27:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:54 vm04 bash[28289]: audit 2026-03-10T10:27:53.559028+0000 mon.a (mon.0) 3232 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124"}]: dispatch
2026-03-10T10:27:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:54 vm04 bash[20742]: cluster 2026-03-10T10:27:52.538819+0000 mgr.y (mgr.24422) 544 : cluster [DBG] pgmap v944: 268 pgs: 268 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
2026-03-10T10:27:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:54 vm04 bash[20742]: audit 2026-03-10T10:27:53.266364+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:27:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:54 vm04 bash[20742]: cluster 2026-03-10T10:27:53.558415+0000 mon.a (mon.0) 3231 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in
2026-03-10T10:27:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:54 vm04 bash[20742]: audit 2026-03-10T10:27:53.559028+0000 mon.a (mon.0) 3232 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124"}]: dispatch
2026-03-10T10:27:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:54 vm07 bash[23367]: cluster 2026-03-10T10:27:52.538819+0000 mgr.y (mgr.24422) 544 : cluster [DBG] pgmap v944: 268 pgs: 268 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
2026-03-10T10:27:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:54 vm07 bash[23367]: audit 2026-03-10T10:27:53.266364+0000 mon.a (mon.0) 3230 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:27:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:54 vm07 bash[23367]: cluster 2026-03-10T10:27:53.558415+0000 mon.a (mon.0) 3231 : cluster [DBG] osdmap e611: 8 total, 8 up, 8 in
2026-03-10T10:27:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:54 vm07 bash[23367]: audit 2026-03-10T10:27:53.559028+0000 mon.a (mon.0) 3232 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124"}]: dispatch
2026-03-10T10:27:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:55 vm04 bash[28289]: cluster 2026-03-10T10:27:54.266545+0000 mon.a (mon.0) 3233 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:27:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:55 vm04 bash[28289]: audit 2026-03-10T10:27:54.269768+0000 mon.a (mon.0) 3234 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124"}]': finished
2026-03-10T10:27:55.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:55 vm04 bash[28289]: cluster 2026-03-10T10:27:54.278220+0000 mon.a (mon.0) 3235 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in
2026-03-10T10:27:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:55 vm04 bash[20742]: cluster 2026-03-10T10:27:54.266545+0000 mon.a (mon.0) 3233 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:27:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:55 vm04 bash[20742]: audit 2026-03-10T10:27:54.269768+0000 mon.a (mon.0) 3234 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124"}]': finished
2026-03-10T10:27:55.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:55 vm04 bash[20742]: cluster 2026-03-10T10:27:54.278220+0000 mon.a (mon.0) 3235 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in
2026-03-10T10:27:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:55 vm07 bash[23367]: cluster 2026-03-10T10:27:54.266545+0000 mon.a (mon.0) 3233 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:27:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:55 vm07 bash[23367]: audit 2026-03-10T10:27:54.269768+0000 mon.a (mon.0) 3234 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-124"}]': finished
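Audit 3229-3234 are the mirror-image teardown of that tier: the overlay is removed, then the tier detached, after which the mon clears CACHE_POOL_NO_HIT_SET (cluster 3233). The equivalent CLI sketch, under the same assumptions as above:

    # Sketch: cache-tier teardown matching audit 3229-3234 (illustrative only).
    ceph osd tier remove-overlay test-rados-api-vm04-59491-111                        # audit 3229 / 3230
    ceph osd tier remove test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-124  # audit 3232 / 3234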
2026-03-10T10:27:55.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:55 vm07 bash[23367]: cluster 2026-03-10T10:27:54.278220+0000 mon.a (mon.0) 3235 : cluster [DBG] osdmap e612: 8 total, 8 up, 8 in
2026-03-10T10:27:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:56 vm04 bash[28289]: cluster 2026-03-10T10:27:54.539222+0000 mgr.y (mgr.24422) 545 : cluster [DBG] pgmap v947: 268 pgs: 2 active+clean+snaptrim, 266 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
2026-03-10T10:27:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:56 vm04 bash[28289]: cluster 2026-03-10T10:27:55.295277+0000 mon.a (mon.0) 3236 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in
2026-03-10T10:27:56.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:56 vm04 bash[20742]: cluster 2026-03-10T10:27:54.539222+0000 mgr.y (mgr.24422) 545 : cluster [DBG] pgmap v947: 268 pgs: 2 active+clean+snaptrim, 266 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
2026-03-10T10:27:56.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:56 vm04 bash[20742]: cluster 2026-03-10T10:27:55.295277+0000 mon.a (mon.0) 3236 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in
2026-03-10T10:27:56.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:56 vm07 bash[23367]: cluster 2026-03-10T10:27:54.539222+0000 mgr.y (mgr.24422) 545 : cluster [DBG] pgmap v947: 268 pgs: 2 active+clean+snaptrim, 266 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
2026-03-10T10:27:56.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:56 vm07 bash[23367]: cluster 2026-03-10T10:27:55.295277+0000 mon.a (mon.0) 3236 : cluster [DBG] osdmap e613: 8 total, 8 up, 8 in
2026-03-10T10:27:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:57 vm04 bash[28289]: cluster 2026-03-10T10:27:56.320432+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in
2026-03-10T10:27:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:57 vm04 bash[28289]: audit 2026-03-10T10:27:56.324302+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-126","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:57 vm04 bash[28289]: cluster 2026-03-10T10:27:56.544566+0000 mon.a (mon.0) 3239 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:57 vm04 bash[20742]: cluster 2026-03-10T10:27:56.320432+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in
2026-03-10T10:27:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:57 vm04 bash[20742]: audit 2026-03-10T10:27:56.324302+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-126","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:57 vm04 bash[20742]: cluster 2026-03-10T10:27:56.544566+0000 mon.a (mon.0) 3239 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:57 vm07 bash[23367]: cluster 2026-03-10T10:27:56.320432+0000 mon.a (mon.0) 3237 : cluster [DBG] osdmap e614: 8 total, 8 up, 8 in
2026-03-10T10:27:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:57 vm07 bash[23367]: audit 2026-03-10T10:27:56.324302+0000 mon.a (mon.0) 3238 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-126","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:27:57.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:57 vm07 bash[23367]: cluster 2026-03-10T10:27:56.544566+0000 mon.a (mon.0) 3239 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:58 vm04 bash[28289]: cluster 2026-03-10T10:27:56.539624+0000 mgr.y (mgr.24422) 546 : cluster [DBG] pgmap v950: 268 pgs: 32 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:58 vm04 bash[28289]: audit 2026-03-10T10:27:57.306466+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-126","app": "rados","yes_i_really_mean_it": true}]': finished
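Audit 3238/3240 show the client tagging the next scratch pool with the rados application before reusing it; the yes_i_really_mean_it field in the JSON corresponds to the CLI's --yes-i-really-mean-it flag. Sketch:

    # Sketch: CLI form of audit 3238/3240 (illustrative only).
    ceph osd pool application enable test-rados-api-vm04-59491-126 rados --yes-i-really-mean-it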
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:58 vm04 bash[28289]: cluster 2026-03-10T10:27:57.310733+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:58 vm04 bash[28289]: audit 2026-03-10T10:27:57.324295+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:58 vm04 bash[28289]: audit 2026-03-10T10:27:58.109798+0000 mon.a (mon.0) 3243 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:58 vm04 bash[28289]: audit 2026-03-10T10:27:58.110583+0000 mon.a (mon.0) 3244 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:58 vm04 bash[28289]: audit 2026-03-10T10:27:58.309766+0000 mon.a (mon.0) 3245 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:58 vm04 bash[28289]: cluster 2026-03-10T10:27:58.313136+0000 mon.a (mon.0) 3246 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:27:58 vm04 bash[28289]: audit 2026-03-10T10:27:58.313466+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-126"}]: dispatch
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:58 vm04 bash[20742]: cluster 2026-03-10T10:27:56.539624+0000 mgr.y (mgr.24422) 546 : cluster [DBG] pgmap v950: 268 pgs: 32 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:58 vm04 bash[20742]: audit 2026-03-10T10:27:57.306466+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-126","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:58 vm04 bash[20742]: cluster 2026-03-10T10:27:57.310733+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:58 vm04 bash[20742]: audit 2026-03-10T10:27:57.324295+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:58 vm04 bash[20742]: audit 2026-03-10T10:27:58.109798+0000 mon.a (mon.0) 3243 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:58 vm04 bash[20742]: audit 2026-03-10T10:27:58.110583+0000 mon.a (mon.0) 3244 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:58 vm04 bash[20742]: audit 2026-03-10T10:27:58.309766+0000 mon.a (mon.0) 3245 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:58 vm04 bash[20742]: cluster 2026-03-10T10:27:58.313136+0000 mon.a (mon.0) 3246 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in
2026-03-10T10:27:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:27:58 vm04 bash[20742]: audit 2026-03-10T10:27:58.313466+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-126"}]: dispatch
2026-03-10T10:27:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:58 vm07 bash[23367]: cluster 2026-03-10T10:27:56.539624+0000 mgr.y (mgr.24422) 546 : cluster [DBG] pgmap v950: 268 pgs: 32 unknown, 2 active+clean+snaptrim, 234 active+clean; 455 KiB data, 976 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:27:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:58 vm07 bash[23367]: audit 2026-03-10T10:27:57.306466+0000 mon.a (mon.0) 3240 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-126","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:27:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:58 vm07 bash[23367]: cluster 2026-03-10T10:27:57.310733+0000 mon.a (mon.0) 3241 : cluster [DBG] osdmap e615: 8 total, 8 up, 8 in
2026-03-10T10:27:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:58 vm07 bash[23367]: audit 2026-03-10T10:27:57.324295+0000 mon.a (mon.0) 3242 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:27:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:58 vm07 bash[23367]: audit 2026-03-10T10:27:58.109798+0000 mon.a (mon.0) 3243 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:27:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:58 vm07 bash[23367]: audit 2026-03-10T10:27:58.110583+0000 mon.a (mon.0) 3244 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:27:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:58 vm07 bash[23367]: audit 2026-03-10T10:27:58.309766+0000 mon.a (mon.0) 3245 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:27:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:58 vm07 bash[23367]: cluster 2026-03-10T10:27:58.313136+0000 mon.a (mon.0) 3246 : cluster [DBG] osdmap e616: 8 total, 8 up, 8 in
2026-03-10T10:27:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:27:58 vm07 bash[23367]: audit 2026-03-10T10:27:58.313466+0000 mon.a (mon.0) 3247 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-126"}]: dispatch
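Audit 3242-3247 then attach pool test-rados-api-vm04-59491-126, which already contains objects, as the new cache tier; that is why the command carries --force-nonempty. Sketch:

    # Sketch: CLI form of audit 3242-3247 (illustrative only).
    ceph osd tier add test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-126 --force-nonempty  # audit 3242 / 3245
    ceph osd tier set-overlay test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-126           # audit 3247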
2026-03-10T10:27:59.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:27:58 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:28:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:00 vm04 bash[28289]: cluster 2026-03-10T10:27:58.540213+0000 mgr.y (mgr.24422) 547 : cluster [DBG] pgmap v953: 268 pgs: 21 unknown, 247 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:00 vm04 bash[28289]: audit 2026-03-10T10:27:58.800115+0000 mgr.y (mgr.24422) 548 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:00 vm04 bash[28289]: audit 2026-03-10T10:27:59.313163+0000 mon.a (mon.0) 3248 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-126"}]': finished
2026-03-10T10:28:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:00 vm04 bash[28289]: cluster 2026-03-10T10:27:59.317019+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in
2026-03-10T10:28:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:00 vm04 bash[28289]: audit 2026-03-10T10:27:59.317298+0000 mon.a (mon.0) 3250 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-126", "mode": "writeback"}]: dispatch
2026-03-10T10:28:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:00 vm04 bash[20742]: cluster 2026-03-10T10:27:58.540213+0000 mgr.y (mgr.24422) 547 : cluster [DBG] pgmap v953: 268 pgs: 21 unknown, 247 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:00 vm04 bash[20742]: audit 2026-03-10T10:27:58.800115+0000 mgr.y (mgr.24422) 548 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:00 vm04 bash[20742]: audit 2026-03-10T10:27:59.313163+0000 mon.a (mon.0) 3248 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-126"}]': finished
2026-03-10T10:28:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:00 vm04 bash[20742]: cluster 2026-03-10T10:27:59.317019+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in
2026-03-10T10:28:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:00 vm04 bash[20742]: audit 2026-03-10T10:27:59.317298+0000 mon.a (mon.0) 3250 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-126", "mode": "writeback"}]: dispatch
2026-03-10T10:28:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:00 vm07 bash[23367]: cluster 2026-03-10T10:27:58.540213+0000 mgr.y (mgr.24422) 547 : cluster [DBG] pgmap v953: 268 pgs: 21 unknown, 247 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:00 vm07 bash[23367]: audit 2026-03-10T10:27:58.800115+0000 mgr.y (mgr.24422) 548 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:00 vm07 bash[23367]: audit 2026-03-10T10:27:59.313163+0000 mon.a (mon.0) 3248 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-126"}]': finished
2026-03-10T10:28:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:00 vm07 bash[23367]: cluster 2026-03-10T10:27:59.317019+0000 mon.a (mon.0) 3249 : cluster [DBG] osdmap e617: 8 total, 8 up, 8 in
2026-03-10T10:28:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:00 vm07 bash[23367]: audit 2026-03-10T10:27:59.317298+0000 mon.a (mon.0) 3250 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-126", "mode": "writeback"}]: dispatch
2026-03-10T10:28:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:01 vm07 bash[23367]: cluster 2026-03-10T10:28:00.313303+0000 mon.a (mon.0) 3251 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:28:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:01 vm07 bash[23367]: audit 2026-03-10T10:28:00.316948+0000 mon.a (mon.0) 3252 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-126", "mode": "writeback"}]': finished
2026-03-10T10:28:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:01 vm07 bash[23367]: cluster 2026-03-10T10:28:00.319558+0000 mon.a (mon.0) 3253 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in
2026-03-10T10:28:01.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:01 vm07 bash[23367]: audit 2026-03-10T10:28:00.388794+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:01 vm04 bash[28289]: cluster 2026-03-10T10:28:00.313303+0000 mon.a (mon.0) 3251 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:01 vm04 bash[28289]: audit 2026-03-10T10:28:00.316948+0000 mon.a (mon.0) 3252 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-126", "mode": "writeback"}]': finished
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-126", "mode": "writeback"}]': finished 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:01 vm04 bash[28289]: cluster 2026-03-10T10:28:00.319558+0000 mon.a (mon.0) 3253 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:01 vm04 bash[28289]: cluster 2026-03-10T10:28:00.319558+0000 mon.a (mon.0) 3253 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:01 vm04 bash[28289]: audit 2026-03-10T10:28:00.388794+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:01 vm04 bash[28289]: audit 2026-03-10T10:28:00.388794+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:01 vm04 bash[20742]: cluster 2026-03-10T10:28:00.313303+0000 mon.a (mon.0) 3251 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:01 vm04 bash[20742]: cluster 2026-03-10T10:28:00.313303+0000 mon.a (mon.0) 3251 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:01 vm04 bash[20742]: audit 2026-03-10T10:28:00.316948+0000 mon.a (mon.0) 3252 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-126", "mode": "writeback"}]': finished 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:01 vm04 bash[20742]: audit 2026-03-10T10:28:00.316948+0000 mon.a (mon.0) 3252 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-126", "mode": "writeback"}]': finished 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:01 vm04 bash[20742]: cluster 2026-03-10T10:28:00.319558+0000 mon.a (mon.0) 3253 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:01 vm04 bash[20742]: cluster 2026-03-10T10:28:00.319558+0000 mon.a (mon.0) 3253 : cluster [DBG] osdmap e618: 8 total, 8 up, 8 in 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:01 vm04 bash[20742]: audit 2026-03-10T10:28:00.388794+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:28:01.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:01 vm04 bash[20742]: audit 2026-03-10T10:28:00.388794+0000 mon.a (mon.0) 3254 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringFlush (9142 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestSnapHasChunk 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestSnapHasChunk (6190 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollback 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollback (5132 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestRollbackRefcount 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestRollbackRefcount (24949 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.ManifestEvictRollback 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.ManifestEvictRollback (14222 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.PropagateBaseTierError 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.PropagateBaseTierError (12348 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.HelloWriteReturn 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: 00000000 79 6f 75 20 6d 69 67 68 74 20 73 65 65 20 74 68 |you might see th| 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: 00000010 69 73 |is| 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: 00000012 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.HelloWriteReturn (12167 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: require_osd_release = squid 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsPP.TierFlushDuringUnsetDedupTier (6116 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [----------] 48 tests from LibRadosTwoPoolsPP (562016 ms total) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ 
RUN ] LibRadosTierECPP.Dirty 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.Dirty (1056 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.FlushWriteRaces 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.FlushWriteRaces (11293 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.CallForcesPromote 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.CallForcesPromote (18349 ms) 2026-03-10T10:28:02.693 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTierECPP.HitSetNone 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTierECPP.HitSetNone (0 ms) 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [----------] 4 tests from LibRadosTierECPP (30698 ms total) 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Overlay 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Overlay (7188 ms) 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Promote 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Promote (8140 ms) 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnap 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: waiting for scrub... 
2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: done waiting 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnap (24335 ms) 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteSnapTrimRace (10193 ms) 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Whiteout 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Whiteout (7271 ms) 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Evict 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Evict (8174 ms) 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.EvictSnap 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.EvictSnap (10507 ms) 2026-03-10T10:28:02.694 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlush 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: cluster 2026-03-10T10:28:00.540571+0000 mgr.y (mgr.24422) 549 : cluster [DBG] pgmap v956: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: cluster 2026-03-10T10:28:00.540571+0000 mgr.y (mgr.24422) 549 : cluster [DBG] pgmap v956: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: audit 2026-03-10T10:28:01.498240+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: audit 2026-03-10T10:28:01.498240+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: cluster 2026-03-10T10:28:01.501073+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: cluster 2026-03-10T10:28:01.501073+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: audit 2026-03-10T10:28:01.501361+0000 mon.a (mon.0) 3257 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]: dispatch 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: audit 2026-03-10T10:28:01.501361+0000 mon.a (mon.0) 3257 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]: dispatch 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: cluster 2026-03-10T10:28:01.545082+0000 mon.a (mon.0) 3258 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: cluster 2026-03-10T10:28:01.545082+0000 mon.a (mon.0) 3258 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: cluster 2026-03-10T10:28:01.545417+0000 mon.a (mon.0) 3259 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: cluster 2026-03-10T10:28:01.545417+0000 mon.a (mon.0) 3259 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: audit 2026-03-10T10:28:01.687587+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]': finished 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: audit 2026-03-10T10:28:01.687587+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]': finished 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: cluster 2026-03-10T10:28:01.691494+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T10:28:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:02 vm07 bash[23367]: cluster 2026-03-10T10:28:01.691494+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: cluster 2026-03-10T10:28:00.540571+0000 mgr.y (mgr.24422) 549 : cluster [DBG] pgmap v956: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: cluster 2026-03-10T10:28:00.540571+0000 mgr.y (mgr.24422) 549 : cluster [DBG] pgmap v956: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: audit 2026-03-10T10:28:01.498240+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: audit 2026-03-10T10:28:01.498240+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: cluster 2026-03-10T10:28:01.501073+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: cluster 2026-03-10T10:28:01.501073+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: audit 2026-03-10T10:28:01.501361+0000 mon.a (mon.0) 3257 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]: dispatch 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: audit 2026-03-10T10:28:01.501361+0000 mon.a (mon.0) 3257 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]: dispatch 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: cluster 2026-03-10T10:28:01.545082+0000 mon.a (mon.0) 3258 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: cluster 2026-03-10T10:28:01.545082+0000 mon.a (mon.0) 3258 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: cluster 2026-03-10T10:28:01.545417+0000 mon.a (mon.0) 3259 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: cluster 2026-03-10T10:28:01.545417+0000 mon.a (mon.0) 3259 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: audit 2026-03-10T10:28:01.687587+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]': finished 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: audit 2026-03-10T10:28:01.687587+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]': finished 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: cluster 2026-03-10T10:28:01.691494+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:02 vm04 bash[28289]: cluster 2026-03-10T10:28:01.691494+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: cluster 2026-03-10T10:28:00.540571+0000 mgr.y (mgr.24422) 549 : cluster [DBG] pgmap v956: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: cluster 2026-03-10T10:28:00.540571+0000 mgr.y (mgr.24422) 549 : cluster [DBG] pgmap v956: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: audit 2026-03-10T10:28:01.498240+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: audit 2026-03-10T10:28:01.498240+0000 mon.a (mon.0) 3255 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: cluster 2026-03-10T10:28:01.501073+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: cluster 2026-03-10T10:28:01.501073+0000 mon.a (mon.0) 3256 : cluster [DBG] osdmap e619: 8 total, 8 up, 8 in 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: audit 2026-03-10T10:28:01.501361+0000 mon.a (mon.0) 3257 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]: dispatch 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: audit 2026-03-10T10:28:01.501361+0000 mon.a (mon.0) 3257 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]: dispatch 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: cluster 2026-03-10T10:28:01.545082+0000 mon.a (mon.0) 3258 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: cluster 2026-03-10T10:28:01.545082+0000 mon.a (mon.0) 3258 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: cluster 2026-03-10T10:28:01.545417+0000 mon.a (mon.0) 3259 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: cluster 2026-03-10T10:28:01.545417+0000 mon.a (mon.0) 3259 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: audit 2026-03-10T10:28:01.687587+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]': finished 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: audit 2026-03-10T10:28:01.687587+0000 mon.a (mon.0) 3260 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-126"}]': finished 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: cluster 2026-03-10T10:28:01.691494+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T10:28:02.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:02 vm04 bash[20742]: cluster 2026-03-10T10:28:01.691494+0000 mon.a (mon.0) 3261 : cluster [DBG] osdmap e620: 8 total, 8 up, 8 in 2026-03-10T10:28:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:28:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:28:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:28:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:03 vm04 bash[28289]: cluster 2026-03-10T10:28:02.540921+0000 mgr.y (mgr.24422) 550 : cluster [DBG] pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:03 vm04 bash[28289]: cluster 2026-03-10T10:28:02.540921+0000 mgr.y (mgr.24422) 550 : cluster [DBG] pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:03 vm04 bash[28289]: cluster 2026-03-10T10:28:02.694090+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T10:28:03.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:03 vm04 bash[28289]: cluster 2026-03-10T10:28:02.694090+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e621: 8 total, 8 up, 
8 in 2026-03-10T10:28:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:03 vm04 bash[20742]: cluster 2026-03-10T10:28:02.540921+0000 mgr.y (mgr.24422) 550 : cluster [DBG] pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:03 vm04 bash[20742]: cluster 2026-03-10T10:28:02.540921+0000 mgr.y (mgr.24422) 550 : cluster [DBG] pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:03 vm04 bash[20742]: cluster 2026-03-10T10:28:02.694090+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T10:28:03.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:03 vm04 bash[20742]: cluster 2026-03-10T10:28:02.694090+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T10:28:04.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:03 vm07 bash[23367]: cluster 2026-03-10T10:28:02.540921+0000 mgr.y (mgr.24422) 550 : cluster [DBG] pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:04.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:03 vm07 bash[23367]: cluster 2026-03-10T10:28:02.540921+0000 mgr.y (mgr.24422) 550 : cluster [DBG] pgmap v959: 268 pgs: 268 active+clean; 455 KiB data, 977 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:04.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:03 vm07 bash[23367]: cluster 2026-03-10T10:28:02.694090+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T10:28:04.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:03 vm07 bash[23367]: cluster 2026-03-10T10:28:02.694090+0000 mon.a (mon.0) 3262 : cluster [DBG] osdmap e621: 8 total, 8 up, 8 in 2026-03-10T10:28:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:04 vm07 bash[23367]: cluster 2026-03-10T10:28:03.710932+0000 mon.a (mon.0) 3263 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T10:28:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:04 vm07 bash[23367]: cluster 2026-03-10T10:28:03.710932+0000 mon.a (mon.0) 3263 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T10:28:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:04 vm07 bash[23367]: audit 2026-03-10T10:28:03.711816+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:05.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:04 vm07 bash[23367]: audit 2026-03-10T10:28:03.711816+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:05.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:04 vm04 bash[28289]: cluster 2026-03-10T10:28:03.710932+0000 mon.a (mon.0) 3263 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T10:28:05.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:04 vm04 bash[28289]: cluster 2026-03-10T10:28:03.710932+0000 mon.a (mon.0) 3263 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T10:28:05.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:04 vm04 bash[28289]: audit 2026-03-10T10:28:03.711816+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:05.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:04 vm04 bash[28289]: audit 2026-03-10T10:28:03.711816+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:05.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:04 vm04 bash[20742]: cluster 2026-03-10T10:28:03.710932+0000 mon.a (mon.0) 3263 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T10:28:05.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:04 vm04 bash[20742]: cluster 2026-03-10T10:28:03.710932+0000 mon.a (mon.0) 3263 : cluster [DBG] osdmap e622: 8 total, 8 up, 8 in 2026-03-10T10:28:05.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:04 vm04 bash[20742]: audit 2026-03-10T10:28:03.711816+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:05.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:04 vm04 bash[20742]: audit 2026-03-10T10:28:03.711816+0000 mon.a (mon.0) 3264 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: cluster 2026-03-10T10:28:04.541445+0000 mgr.y (mgr.24422) 551 : cluster [DBG] pgmap v962: 268 pgs: 16 creating+peering, 16 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: cluster 2026-03-10T10:28:04.541445+0000 mgr.y (mgr.24422) 551 : cluster [DBG] pgmap v962: 268 pgs: 16 creating+peering, 16 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: audit 2026-03-10T10:28:04.703755+0000 mon.a (mon.0) 3265 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: audit 2026-03-10T10:28:04.703755+0000 mon.a (mon.0) 3265 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: cluster 2026-03-10T10:28:04.712966+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: cluster 2026-03-10T10:28:04.712966+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: audit 2026-03-10T10:28:04.723057+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: audit 2026-03-10T10:28:04.723057+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: audit 2026-03-10T10:28:05.706712+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: audit 2026-03-10T10:28:05.706712+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: cluster 2026-03-10T10:28:05.709548+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: cluster 2026-03-10T10:28:05.709548+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: audit 2026-03-10T10:28:05.709949+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]: dispatch 2026-03-10T10:28:06.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:05 vm07 bash[23367]: audit 2026-03-10T10:28:05.709949+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]: dispatch 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: cluster 2026-03-10T10:28:04.541445+0000 mgr.y (mgr.24422) 551 : cluster [DBG] pgmap v962: 268 pgs: 16 creating+peering, 16 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: cluster 2026-03-10T10:28:04.541445+0000 mgr.y (mgr.24422) 551 : cluster [DBG] pgmap v962: 268 pgs: 16 creating+peering, 16 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: audit 2026-03-10T10:28:04.703755+0000 mon.a (mon.0) 3265 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: audit 2026-03-10T10:28:04.703755+0000 mon.a (mon.0) 3265 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: cluster 2026-03-10T10:28:04.712966+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: cluster 2026-03-10T10:28:04.712966+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: audit 2026-03-10T10:28:04.723057+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: audit 2026-03-10T10:28:04.723057+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: audit 2026-03-10T10:28:05.706712+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: audit 2026-03-10T10:28:05.706712+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: cluster 2026-03-10T10:28:05.709548+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: cluster 2026-03-10T10:28:05.709548+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: audit 2026-03-10T10:28:05.709949+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]: dispatch 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:05 vm04 bash[28289]: audit 2026-03-10T10:28:05.709949+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]: dispatch 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: cluster 2026-03-10T10:28:04.541445+0000 mgr.y (mgr.24422) 551 : cluster [DBG] pgmap v962: 268 pgs: 16 creating+peering, 16 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: cluster 2026-03-10T10:28:04.541445+0000 mgr.y (mgr.24422) 551 : cluster [DBG] pgmap v962: 268 pgs: 16 creating+peering, 16 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: audit 2026-03-10T10:28:04.703755+0000 mon.a (mon.0) 3265 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: audit 2026-03-10T10:28:04.703755+0000 mon.a (mon.0) 3265 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-128","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: cluster 2026-03-10T10:28:04.712966+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: cluster 2026-03-10T10:28:04.712966+0000 mon.a (mon.0) 3266 : cluster [DBG] osdmap e623: 8 total, 8 up, 8 in 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: audit 2026-03-10T10:28:04.723057+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: audit 2026-03-10T10:28:04.723057+0000 mon.a (mon.0) 3267 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: audit 2026-03-10T10:28:05.706712+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: audit 2026-03-10T10:28:05.706712+0000 mon.a (mon.0) 3268 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: cluster 2026-03-10T10:28:05.709548+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: cluster 2026-03-10T10:28:05.709548+0000 mon.a (mon.0) 3269 : cluster [DBG] osdmap e624: 8 total, 8 up, 8 in 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: audit 2026-03-10T10:28:05.709949+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]: dispatch 2026-03-10T10:28:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:05 vm04 bash[20742]: audit 2026-03-10T10:28:05.709949+0000 mon.a (mon.0) 3270 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]: dispatch 2026-03-10T10:28:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:06 vm07 bash[23367]: cluster 2026-03-10T10:28:06.633947+0000 mon.a (mon.0) 3271 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:06 vm07 bash[23367]: cluster 2026-03-10T10:28:06.633947+0000 mon.a (mon.0) 3271 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:06 vm07 bash[23367]: audit 2026-03-10T10:28:06.709669+0000 mon.a (mon.0) 3272 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]': finished 2026-03-10T10:28:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:06 vm07 bash[23367]: audit 2026-03-10T10:28:06.709669+0000 mon.a (mon.0) 3272 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]': finished 2026-03-10T10:28:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:06 vm07 bash[23367]: cluster 2026-03-10T10:28:06.712408+0000 mon.a (mon.0) 3273 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T10:28:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:06 vm07 bash[23367]: cluster 2026-03-10T10:28:06.712408+0000 mon.a (mon.0) 3273 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T10:28:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:06 vm07 bash[23367]: audit 2026-03-10T10:28:06.712878+0000 mon.a (mon.0) 3274 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-128", "mode": "writeback"}]: dispatch 2026-03-10T10:28:07.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:06 vm07 bash[23367]: audit 2026-03-10T10:28:06.712878+0000 mon.a (mon.0) 3274 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-128", "mode": "writeback"}]: dispatch 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:06 vm04 bash[28289]: cluster 2026-03-10T10:28:06.633947+0000 mon.a (mon.0) 3271 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:06 vm04 bash[28289]: cluster 2026-03-10T10:28:06.633947+0000 mon.a (mon.0) 3271 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:06 vm04 bash[28289]: audit 2026-03-10T10:28:06.709669+0000 mon.a (mon.0) 3272 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]': finished 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:06 vm04 bash[28289]: audit 2026-03-10T10:28:06.709669+0000 mon.a (mon.0) 3272 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]': finished 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:06 vm04 bash[28289]: cluster 2026-03-10T10:28:06.712408+0000 mon.a (mon.0) 3273 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:06 vm04 bash[28289]: cluster 2026-03-10T10:28:06.712408+0000 mon.a (mon.0) 3273 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:06 vm04 bash[28289]: audit 2026-03-10T10:28:06.712878+0000 mon.a (mon.0) 3274 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-128", "mode": "writeback"}]: dispatch 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:06 vm04 bash[28289]: audit 2026-03-10T10:28:06.712878+0000 mon.a (mon.0) 3274 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-128", "mode": "writeback"}]: dispatch 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:06 vm04 bash[20742]: cluster 2026-03-10T10:28:06.633947+0000 mon.a (mon.0) 3271 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:06 vm04 bash[20742]: cluster 2026-03-10T10:28:06.633947+0000 mon.a (mon.0) 3271 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:06 vm04 bash[20742]: audit 2026-03-10T10:28:06.709669+0000 mon.a (mon.0) 3272 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]': finished 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:06 vm04 bash[20742]: audit 2026-03-10T10:28:06.709669+0000 mon.a (mon.0) 3272 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-128"}]': finished 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:06 vm04 bash[20742]: cluster 2026-03-10T10:28:06.712408+0000 mon.a (mon.0) 3273 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:06 vm04 bash[20742]: cluster 2026-03-10T10:28:06.712408+0000 mon.a (mon.0) 3273 : cluster [DBG] osdmap e625: 8 total, 8 up, 8 in 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:06 vm04 bash[20742]: audit 2026-03-10T10:28:06.712878+0000 mon.a (mon.0) 3274 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-128", "mode": "writeback"}]: dispatch 2026-03-10T10:28:07.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:06 vm04 bash[20742]: audit 2026-03-10T10:28:06.712878+0000 mon.a (mon.0) 3274 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-128", "mode": "writeback"}]: dispatch 2026-03-10T10:28:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:07 vm07 bash[23367]: cluster 2026-03-10T10:28:06.541883+0000 mgr.y (mgr.24422) 552 : cluster [DBG] pgmap v965: 268 pgs: 16 creating+peering, 16 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:07 vm07 bash[23367]: cluster 2026-03-10T10:28:06.541883+0000 mgr.y (mgr.24422) 552 : cluster [DBG] pgmap v965: 268 pgs: 16 creating+peering, 16 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:07 vm07 bash[23367]: cluster 2026-03-10T10:28:07.709781+0000 mon.a (mon.0) 3275 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:07 vm07 bash[23367]: cluster 2026-03-10T10:28:07.709781+0000 mon.a (mon.0) 3275 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:07 vm07 bash[23367]: audit 2026-03-10T10:28:07.712425+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-128", "mode": "writeback"}]': finished 2026-03-10T10:28:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:07 vm07 bash[23367]: audit 2026-03-10T10:28:07.712425+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-128", "mode": "writeback"}]': finished
2026-03-10T10:28:08.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:07 vm07 bash[23367]: cluster 2026-03-10T10:28:07.720922+0000 mon.a (mon.0) 3277 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in
2026-03-10T10:28:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:07 vm04 bash[28289]: cluster 2026-03-10T10:28:06.541883+0000 mgr.y (mgr.24422) 552 : cluster [DBG] pgmap v965: 268 pgs: 16 creating+peering, 16 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:28:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:07 vm04 bash[28289]: cluster 2026-03-10T10:28:07.709781+0000 mon.a (mon.0) 3275 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:28:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:07 vm04 bash[28289]: audit 2026-03-10T10:28:07.712425+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-128", "mode": "writeback"}]': finished
2026-03-10T10:28:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:07 vm04 bash[28289]: cluster 2026-03-10T10:28:07.720922+0000 mon.a (mon.0) 3277 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in
2026-03-10T10:28:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:07 vm04 bash[20742]: cluster 2026-03-10T10:28:06.541883+0000 mgr.y (mgr.24422) 552 : cluster [DBG] pgmap v965: 268 pgs: 16 creating+peering, 16 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:28:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:07 vm04 bash[20742]: cluster 2026-03-10T10:28:07.709781+0000 mon.a (mon.0) 3275 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:28:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:07 vm04 bash[20742]: audit 2026-03-10T10:28:07.712425+0000 mon.a (mon.0) 3276 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-128", "mode": "writeback"}]': finished
2026-03-10T10:28:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:07 vm04 bash[20742]: cluster 2026-03-10T10:28:07.720922+0000 mon.a (mon.0) 3277 : cluster [DBG] osdmap e626: 8 total, 8 up, 8 in
2026-03-10T10:28:09.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:28:08 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:28:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:09 vm04 bash[28289]: cluster 2026-03-10T10:28:08.542458+0000 mgr.y (mgr.24422) 553 : cluster [DBG] pgmap v968: 268 pgs: 16 creating+peering, 4 unknown, 248 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T10:28:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:09 vm04 bash[28289]: audit 2026-03-10T10:28:08.807505+0000 mgr.y (mgr.24422) 554 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:09 vm04 bash[20742]: cluster 2026-03-10T10:28:08.542458+0000 mgr.y (mgr.24422) 553 : cluster [DBG] pgmap v968: 268 pgs: 16 creating+peering, 4 unknown, 248 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T10:28:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:09 vm04 bash[20742]: audit 2026-03-10T10:28:08.807505+0000 mgr.y (mgr.24422) 554 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:09 vm07 bash[23367]: cluster 2026-03-10T10:28:08.542458+0000 mgr.y (mgr.24422) 553 : cluster [DBG] pgmap v968: 268 pgs: 16 creating+peering, 4 unknown, 248 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T10:28:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:09 vm07 bash[23367]: audit 2026-03-10T10:28:08.807505+0000 mgr.y (mgr.24422) 554 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:11 vm04 bash[28289]: cluster 2026-03-10T10:28:10.543106+0000 mgr.y (mgr.24422) 555 : cluster [DBG] pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 350 B/s wr, 1 op/s
2026-03-10T10:28:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:11 vm04 bash[28289]: cluster 2026-03-10T10:28:11.634954+0000 mon.a (mon.0) 3278 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:11 vm04 bash[20742]: cluster 2026-03-10T10:28:10.543106+0000 mgr.y (mgr.24422) 555 : cluster [DBG] pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 350 B/s wr, 1 op/s
2026-03-10T10:28:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:11 vm04 bash[20742]: cluster 2026-03-10T10:28:11.634954+0000 mon.a (mon.0) 3278 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:11 vm07 bash[23367]: cluster 2026-03-10T10:28:10.543106+0000 mgr.y (mgr.24422) 555 : cluster [DBG] pgmap v969: 268 pgs: 268 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 350 B/s wr, 1 op/s
2026-03-10T10:28:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:11 vm07 bash[23367]: cluster 2026-03-10T10:28:11.634954+0000 mon.a (mon.0) 3278 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:13.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:12 vm04 bash[28289]: audit 2026-03-10T10:28:12.769655+0000 mon.a (mon.0) 3279 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:13.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:12 vm04 bash[20742]: audit 2026-03-10T10:28:12.769655+0000 mon.a (mon.0) 3279 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:13.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:28:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:28:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:28:13.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:12 vm07 bash[23367]: audit 2026-03-10T10:28:12.769655+0000 mon.a (mon.0) 3279 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:13 vm04 bash[28289]: cluster 2026-03-10T10:28:12.543486+0000 mgr.y (mgr.24422) 556 : cluster [DBG] pgmap v970: 268 pgs: 268 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 898 B/s rd, 299 B/s wr, 1 op/s
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:13 vm04 bash[28289]: audit 2026-03-10T10:28:12.851676+0000 mon.a (mon.0) 3280 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:13 vm04 bash[28289]: cluster 2026-03-10T10:28:12.856817+0000 mon.a (mon.0) 3281 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:13 vm04 bash[28289]: audit 2026-03-10T10:28:12.857233+0000 mon.a (mon.0) 3282 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128"}]: dispatch
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:13 vm04 bash[28289]: audit 2026-03-10T10:28:13.123991+0000 mon.a (mon.0) 3283 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:13 vm04 bash[28289]: audit 2026-03-10T10:28:13.124876+0000 mon.a (mon.0) 3284 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:13 vm04 bash[20742]: cluster 2026-03-10T10:28:12.543486+0000 mgr.y (mgr.24422) 556 : cluster [DBG] pgmap v970: 268 pgs: 268 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 898 B/s rd, 299 B/s wr, 1 op/s
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:13 vm04 bash[20742]: audit 2026-03-10T10:28:12.851676+0000 mon.a (mon.0) 3280 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:13 vm04 bash[20742]: cluster 2026-03-10T10:28:12.856817+0000 mon.a (mon.0) 3281 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:13 vm04 bash[20742]: audit 2026-03-10T10:28:12.857233+0000 mon.a (mon.0) 3282 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128"}]: dispatch
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:13 vm04 bash[20742]: audit 2026-03-10T10:28:13.123991+0000 mon.a (mon.0) 3283 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:28:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:13 vm04 bash[20742]: audit 2026-03-10T10:28:13.124876+0000 mon.a (mon.0) 3284 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:28:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:13 vm07 bash[23367]: cluster 2026-03-10T10:28:12.543486+0000 mgr.y (mgr.24422) 556 : cluster [DBG] pgmap v970: 268 pgs: 268 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 898 B/s rd, 299 B/s wr, 1 op/s
2026-03-10T10:28:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:13 vm07 bash[23367]: audit 2026-03-10T10:28:12.851676+0000 mon.a (mon.0) 3280 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:13 vm07 bash[23367]: cluster 2026-03-10T10:28:12.856817+0000 mon.a (mon.0) 3281 : cluster [DBG] osdmap e627: 8 total, 8 up, 8 in
2026-03-10T10:28:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:13 vm07 bash[23367]: audit 2026-03-10T10:28:12.857233+0000 mon.a (mon.0) 3282 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128"}]: dispatch
2026-03-10T10:28:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:13 vm07 bash[23367]: audit 2026-03-10T10:28:13.123991+0000 mon.a (mon.0) 3283 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:28:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:13 vm07 bash[23367]: audit 2026-03-10T10:28:13.124876+0000 mon.a (mon.0) 3284 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:28:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:14 vm07 bash[23367]: cluster 2026-03-10T10:28:13.851677+0000 mon.a (mon.0) 3285 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:28:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:14 vm07 bash[23367]: audit 2026-03-10T10:28:13.855172+0000 mon.a (mon.0) 3286 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128"}]': finished
2026-03-10T10:28:15.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:14 vm07 bash[23367]: cluster 2026-03-10T10:28:13.861439+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in
2026-03-10T10:28:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:15 vm04 bash[28289]: cluster 2026-03-10T10:28:13.851677+0000 mon.a (mon.0) 3285 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:28:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:15 vm04 bash[28289]: audit 2026-03-10T10:28:13.855172+0000 mon.a (mon.0) 3286 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128"}]': finished
2026-03-10T10:28:15.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:15 vm04 bash[28289]: cluster 2026-03-10T10:28:13.861439+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in
2026-03-10T10:28:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:14 vm04 bash[20742]: cluster 2026-03-10T10:28:13.851677+0000 mon.a (mon.0) 3285 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:28:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:15 vm04 bash[20742]: audit 2026-03-10T10:28:13.855172+0000 mon.a (mon.0) 3286 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-128"}]': finished
2026-03-10T10:28:15.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:15 vm04 bash[20742]: cluster 2026-03-10T10:28:13.861439+0000 mon.a (mon.0) 3287 : cluster [DBG] osdmap e628: 8 total, 8 up, 8 in
2026-03-10T10:28:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:16 vm07 bash[23367]: cluster 2026-03-10T10:28:14.543773+0000 mgr.y (mgr.24422) 557 : cluster [DBG] pgmap v973: 268 pgs: 268 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 299 B/s wr, 2 op/s
2026-03-10T10:28:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:16 vm07 bash[23367]: cluster 2026-03-10T10:28:15.012957+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in
2026-03-10T10:28:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:16 vm04 bash[28289]: cluster 2026-03-10T10:28:14.543773+0000 mgr.y (mgr.24422) 557 : cluster [DBG] pgmap v973: 268 pgs: 268 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 299 B/s wr, 2 op/s
2026-03-10T10:28:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:16 vm04 bash[28289]: cluster 2026-03-10T10:28:15.012957+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in
2026-03-10T10:28:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:16 vm04 bash[20742]: cluster 2026-03-10T10:28:14.543773+0000 mgr.y (mgr.24422) 557 : cluster [DBG] pgmap v973: 268 pgs: 268 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1.6 KiB/s rd, 299 B/s wr, 2 op/s
2026-03-10T10:28:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:16 vm04 bash[20742]: cluster 2026-03-10T10:28:15.012957+0000 mon.a (mon.0) 3288 : cluster [DBG] osdmap e629: 8 total, 8 up, 8 in
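The audit trail above records a complete cache-tier teardown driven by the rados_api_tests client: "osd tier remove-overlay" on the base pool is dispatched and finished, "osd tier remove" then detaches the cache pool, and the CACHE_POOL_NO_HIT_SET health check clears once the tier is gone. Each command appears twice in the audit log, once as "dispatch" when the monitor receives it and once as "finished" when the map update commits. A minimal sketch of issuing the same monitor commands through the librados Python binding (the conffile path and client name are assumptions; the pool names are the generated ones from this run):

    import json
    import rados

    # Assumed connection details; the test client runs as client.admin.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()

    def mon(cmd):
        # mon_command() takes a JSON-encoded command plus an input buffer
        # and returns (retcode, output buffer, status string).
        ret, out, status = cluster.mon_command(json.dumps(cmd), b'')
        assert ret == 0, status

    base = 'test-rados-api-vm04-59491-111'   # base pool in this run
    cache = 'test-rados-api-vm04-59491-128'  # cache pool being detached
    mon({'prefix': 'osd tier remove-overlay', 'pool': base})
    mon({'prefix': 'osd tier remove', 'pool': base, 'tierpool': cache})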
2026-03-10T10:28:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:17 vm04 bash[28289]: cluster 2026-03-10T10:28:16.012740+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in
2026-03-10T10:28:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:17 vm04 bash[28289]: audit 2026-03-10T10:28:16.013676+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-130","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:28:17.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:17 vm04 bash[28289]: cluster 2026-03-10T10:28:16.636586+0000 mon.a (mon.0) 3291 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:17 vm04 bash[20742]: cluster 2026-03-10T10:28:16.012740+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in
2026-03-10T10:28:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:17 vm04 bash[20742]: audit 2026-03-10T10:28:16.013676+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-130","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:28:17.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:17 vm04 bash[20742]: cluster 2026-03-10T10:28:16.636586+0000 mon.a (mon.0) 3291 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:17.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:17 vm07 bash[23367]: cluster 2026-03-10T10:28:16.012740+0000 mon.a (mon.0) 3289 : cluster [DBG] osdmap e630: 8 total, 8 up, 8 in
2026-03-10T10:28:17.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:17 vm07 bash[23367]: audit 2026-03-10T10:28:16.013676+0000 mon.a (mon.0) 3290 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-130","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:28:17.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:17 vm07 bash[23367]: cluster 2026-03-10T10:28:16.636586+0000 mon.a (mon.0) 3291 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:18 vm04 bash[28289]: cluster 2026-03-10T10:28:16.544195+0000 mgr.y (mgr.24422) 558 : cluster [DBG] pgmap v976: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:18 vm04 bash[28289]: audit 2026-03-10T10:28:17.013705+0000 mon.a (mon.0) 3292 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-130","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:18 vm04 bash[28289]: cluster 2026-03-10T10:28:17.018530+0000 mon.a (mon.0) 3293 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:18 vm04 bash[28289]: audit 2026-03-10T10:28:17.022966+0000 mon.a (mon.0) 3294 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:18 vm04 bash[28289]: audit 2026-03-10T10:28:18.017072+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:18 vm04 bash[28289]: cluster 2026-03-10T10:28:18.020187+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:18 vm04 bash[28289]: audit 2026-03-10T10:28:18.021134+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-130"}]: dispatch
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:18 vm04 bash[20742]: cluster 2026-03-10T10:28:16.544195+0000 mgr.y (mgr.24422) 558 : cluster [DBG] pgmap v976: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:18 vm04 bash[20742]: audit 2026-03-10T10:28:17.013705+0000 mon.a (mon.0) 3292 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-130","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:18 vm04 bash[20742]: cluster 2026-03-10T10:28:17.018530+0000 mon.a (mon.0) 3293 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:18 vm04 bash[20742]: audit 2026-03-10T10:28:17.022966+0000 mon.a (mon.0) 3294 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:18 vm04 bash[20742]: audit 2026-03-10T10:28:18.017072+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:18 vm04 bash[20742]: cluster 2026-03-10T10:28:18.020187+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in
2026-03-10T10:28:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:18 vm04 bash[20742]: audit 2026-03-10T10:28:18.021134+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-130"}]: dispatch
2026-03-10T10:28:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:18 vm07 bash[23367]: cluster 2026-03-10T10:28:16.544195+0000 mgr.y (mgr.24422) 558 : cluster [DBG] pgmap v976: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 978 MiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:28:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:18 vm07 bash[23367]: audit 2026-03-10T10:28:17.013705+0000 mon.a (mon.0) 3292 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-130","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:28:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:18 vm07 bash[23367]: cluster 2026-03-10T10:28:17.018530+0000 mon.a (mon.0) 3293 : cluster [DBG] osdmap e631: 8 total, 8 up, 8 in
2026-03-10T10:28:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:18 vm07 bash[23367]: audit 2026-03-10T10:28:17.022966+0000 mon.a (mon.0) 3294 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:28:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:18 vm07 bash[23367]: audit 2026-03-10T10:28:18.017072+0000 mon.a (mon.0) 3295 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:18 vm07 bash[23367]: cluster 2026-03-10T10:28:18.020187+0000 mon.a (mon.0) 3296 : cluster [DBG] osdmap e632: 8 total, 8 up, 8 in
2026-03-10T10:28:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:18 vm07 bash[23367]: audit 2026-03-10T10:28:18.021134+0000 mon.a (mon.0) 3297 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-130"}]: dispatch
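The batch above shows the test rebuilding the tier with a fresh cache pool (-130): the pool is first tagged with an application (untagged pools are what the POOL_APP_NOT_ENABLED updates are counting), then attached with "osd tier add --force-nonempty" because it already holds objects, and finally made the overlay so client I/O for the base pool is routed through it. Continuing the sketch above, reusing its cluster handle and mon() helper:

    cache2 = 'test-rados-api-vm04-59491-130'  # new cache pool in this run
    # Tag the pool so the POOL_APP_NOT_ENABLED check stops counting it.
    mon({'prefix': 'osd pool application enable', 'pool': cache2,
         'app': 'rados', 'yes_i_really_mean_it': True})
    # Attach it as a tier of the base pool; the force flag is needed
    # because the prospective cache pool is not empty.
    mon({'prefix': 'osd tier add', 'pool': base, 'tierpool': cache2,
         'force_nonempty': '--force-nonempty'})
    # Route reads and writes for the base pool through the new tier.
    mon({'prefix': 'osd tier set-overlay', 'pool': base, 'overlaypool': cache2})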
2026-03-10T10:28:19.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:28:18 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:28:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:20 vm04 bash[28289]: cluster 2026-03-10T10:28:18.544744+0000 mgr.y (mgr.24422) 559 : cluster [DBG] pgmap v979: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:20 vm04 bash[28289]: audit 2026-03-10T10:28:18.812128+0000 mgr.y (mgr.24422) 560 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:20 vm04 bash[28289]: audit 2026-03-10T10:28:19.020811+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-130"}]': finished
2026-03-10T10:28:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:20 vm04 bash[28289]: cluster 2026-03-10T10:28:19.024436+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in
2026-03-10T10:28:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:20 vm04 bash[28289]: audit 2026-03-10T10:28:19.024959+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-130", "mode": "writeback"}]: dispatch
2026-03-10T10:28:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:20 vm04 bash[20742]: cluster 2026-03-10T10:28:18.544744+0000 mgr.y (mgr.24422) 559 : cluster [DBG] pgmap v979: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:20 vm04 bash[20742]: audit 2026-03-10T10:28:18.812128+0000 mgr.y (mgr.24422) 560 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:20 vm04 bash[20742]: audit 2026-03-10T10:28:19.020811+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-130"}]': finished
2026-03-10T10:28:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:20 vm04 bash[20742]: cluster 2026-03-10T10:28:19.024436+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in
2026-03-10T10:28:20.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:20 vm04 bash[20742]: audit 2026-03-10T10:28:19.024959+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-130", "mode": "writeback"}]: dispatch
2026-03-10T10:28:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:20 vm07 bash[23367]: cluster 2026-03-10T10:28:18.544744+0000 mgr.y (mgr.24422) 559 : cluster [DBG] pgmap v979: 268 pgs: 14 unknown, 254 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:20 vm07 bash[23367]: audit 2026-03-10T10:28:18.812128+0000 mgr.y (mgr.24422) 560 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:20 vm07 bash[23367]: audit 2026-03-10T10:28:19.020811+0000 mon.a (mon.0) 3298 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-130"}]': finished
2026-03-10T10:28:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:20 vm07 bash[23367]: cluster 2026-03-10T10:28:19.024436+0000 mon.a (mon.0) 3299 : cluster [DBG] osdmap e633: 8 total, 8 up, 8 in
2026-03-10T10:28:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:20 vm07 bash[23367]: audit 2026-03-10T10:28:19.024959+0000 mon.a (mon.0) 3300 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-130", "mode": "writeback"}]: dispatch
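Switching the new tier to writeback immediately re-raises CACHE_POOL_NO_HIT_SET (first records of the next batch): a writeback cache pool needs hit-set tracking before the tiering agent can make promotion and eviction decisions, and the test has not configured any. The test simply lets the warning come and go; satisfying the check by hand would look like the following (same mon() helper as above; the hit-set values are illustrative, not taken from this run):

    mon({'prefix': 'osd tier cache-mode', 'pool': cache2, 'mode': 'writeback'})
    # Hit-set tracking is what the health check is asking for.
    for var, val in [('hit_set_type', 'bloom'),
                     ('hit_set_count', '8'),
                     ('hit_set_period', '60')]:
        mon({'prefix': 'osd pool set', 'pool': cache2, 'var': var, 'val': val})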
2026-03-10T10:28:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:21 vm04 bash[28289]: cluster 2026-03-10T10:28:20.020835+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:28:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:21 vm04 bash[28289]: audit 2026-03-10T10:28:20.024509+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-130", "mode": "writeback"}]': finished
2026-03-10T10:28:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:21 vm04 bash[28289]: cluster 2026-03-10T10:28:20.030746+0000 mon.a (mon.0) 3303 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in
2026-03-10T10:28:21.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:21 vm04 bash[28289]: audit 2026-03-10T10:28:20.112262+0000 mon.a (mon.0) 3304 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:21 vm04 bash[20742]: cluster 2026-03-10T10:28:20.020835+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:28:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:21 vm04 bash[20742]: audit 2026-03-10T10:28:20.024509+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-130", "mode": "writeback"}]': finished
2026-03-10T10:28:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:21 vm04 bash[20742]: cluster 2026-03-10T10:28:20.030746+0000 mon.a (mon.0) 3303 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in
2026-03-10T10:28:21.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:21 vm04 bash[20742]: audit 2026-03-10T10:28:20.112262+0000 mon.a (mon.0) 3304 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:21 vm07 bash[23367]: cluster 2026-03-10T10:28:20.020835+0000 mon.a (mon.0) 3301 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:28:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:21 vm07 bash[23367]: audit 2026-03-10T10:28:20.024509+0000 mon.a (mon.0) 3302 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-130", "mode": "writeback"}]': finished
2026-03-10T10:28:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:21 vm07 bash[23367]: cluster 2026-03-10T10:28:20.030746+0000 mon.a (mon.0) 3303 : cluster [DBG] osdmap e634: 8 total, 8 up, 8 in
2026-03-10T10:28:21.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:21 vm07 bash[23367]: audit 2026-03-10T10:28:20.112262+0000 mon.a (mon.0) 3304 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:22 vm04 bash[28289]: cluster 2026-03-10T10:28:20.545099+0000 mgr.y (mgr.24422) 561 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:22 vm04 bash[28289]: audit 2026-03-10T10:28:21.179522+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:22 vm04 bash[28289]: cluster 2026-03-10T10:28:21.184510+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in
2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:22 vm04 bash[28289]: audit 2026-03-10T10:28:21.186928+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]: dispatch
2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:22 vm04 bash[28289]: audit 2026-03-10T10:28:21.186928+0000 mon.a (mon.0) 3307 : audit [INF] from='client.?
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]: dispatch 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:22 vm04 bash[28289]: cluster 2026-03-10T10:28:21.637467+0000 mon.a (mon.0) 3308 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:22 vm04 bash[28289]: cluster 2026-03-10T10:28:21.637467+0000 mon.a (mon.0) 3308 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:22 vm04 bash[20742]: cluster 2026-03-10T10:28:20.545099+0000 mgr.y (mgr.24422) 561 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:22 vm04 bash[20742]: cluster 2026-03-10T10:28:20.545099+0000 mgr.y (mgr.24422) 561 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:22 vm04 bash[20742]: audit 2026-03-10T10:28:21.179522+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:22 vm04 bash[20742]: audit 2026-03-10T10:28:21.179522+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:22 vm04 bash[20742]: cluster 2026-03-10T10:28:21.184510+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:22 vm04 bash[20742]: cluster 2026-03-10T10:28:21.184510+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:22 vm04 bash[20742]: audit 2026-03-10T10:28:21.186928+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]: dispatch 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:22 vm04 bash[20742]: audit 2026-03-10T10:28:21.186928+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]: dispatch 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:22 vm04 bash[20742]: cluster 2026-03-10T10:28:21.637467+0000 mon.a (mon.0) 3308 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:22 vm04 bash[20742]: cluster 2026-03-10T10:28:21.637467+0000 mon.a (mon.0) 3308 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:22 vm07 bash[23367]: cluster 2026-03-10T10:28:20.545099+0000 mgr.y (mgr.24422) 561 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:22 vm07 bash[23367]: cluster 2026-03-10T10:28:20.545099+0000 mgr.y (mgr.24422) 561 : cluster [DBG] pgmap v982: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:22 vm07 bash[23367]: audit 2026-03-10T10:28:21.179522+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:22 vm07 bash[23367]: audit 2026-03-10T10:28:21.179522+0000 mon.a (mon.0) 3305 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:22 vm07 bash[23367]: cluster 2026-03-10T10:28:21.184510+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-10T10:28:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:22 vm07 bash[23367]: cluster 2026-03-10T10:28:21.184510+0000 mon.a (mon.0) 3306 : cluster [DBG] osdmap e635: 8 total, 8 up, 8 in 2026-03-10T10:28:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:22 vm07 bash[23367]: audit 2026-03-10T10:28:21.186928+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]: dispatch 2026-03-10T10:28:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:22 vm07 bash[23367]: audit 2026-03-10T10:28:21.186928+0000 mon.a (mon.0) 3307 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]: dispatch 2026-03-10T10:28:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:22 vm07 bash[23367]: cluster 2026-03-10T10:28:21.637467+0000 mon.a (mon.0) 3308 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:22 vm07 bash[23367]: cluster 2026-03-10T10:28:21.637467+0000 mon.a (mon.0) 3308 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:23 vm04 bash[28289]: cluster 2026-03-10T10:28:22.179729+0000 mon.a (mon.0) 3309 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:23 vm04 bash[28289]: cluster 2026-03-10T10:28:22.179729+0000 mon.a (mon.0) 3309 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:23 vm04 bash[28289]: audit 2026-03-10T10:28:22.182877+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]': finished 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:23 vm04 bash[28289]: audit 2026-03-10T10:28:22.182877+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]': finished 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:23 vm04 bash[28289]: cluster 2026-03-10T10:28:22.186158+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:23 vm04 bash[28289]: cluster 2026-03-10T10:28:22.186158+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:23 vm04 bash[20742]: cluster 2026-03-10T10:28:22.179729+0000 mon.a (mon.0) 3309 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:23 vm04 bash[20742]: cluster 2026-03-10T10:28:22.179729+0000 mon.a (mon.0) 3309 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:23 vm04 bash[20742]: audit 2026-03-10T10:28:22.182877+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]': finished 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:23 vm04 bash[20742]: audit 2026-03-10T10:28:22.182877+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]': finished 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:23 vm04 bash[20742]: cluster 2026-03-10T10:28:22.186158+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:23 vm04 bash[20742]: cluster 2026-03-10T10:28:22.186158+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-10T10:28:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:28:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:28:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:28:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:23 vm07 bash[23367]: cluster 2026-03-10T10:28:22.179729+0000 mon.a (mon.0) 3309 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:23 vm07 bash[23367]: cluster 2026-03-10T10:28:22.179729+0000 mon.a (mon.0) 3309 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:23 vm07 bash[23367]: audit 2026-03-10T10:28:22.182877+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]': finished 2026-03-10T10:28:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:23 vm07 bash[23367]: audit 2026-03-10T10:28:22.182877+0000 mon.a (mon.0) 3310 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-130"}]': finished 2026-03-10T10:28:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:23 vm07 bash[23367]: cluster 2026-03-10T10:28:22.186158+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-10T10:28:23.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:23 vm07 bash[23367]: cluster 2026-03-10T10:28:22.186158+0000 mon.a (mon.0) 3311 : cluster [DBG] osdmap e636: 8 total, 8 up, 8 in 2026-03-10T10:28:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:24 vm07 bash[23367]: cluster 2026-03-10T10:28:22.545467+0000 mgr.y (mgr.24422) 562 : cluster [DBG] pgmap v985: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:24 vm07 bash[23367]: cluster 2026-03-10T10:28:22.545467+0000 mgr.y (mgr.24422) 562 : cluster [DBG] pgmap v985: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:24 vm07 bash[23367]: cluster 2026-03-10T10:28:23.209349+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-10T10:28:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:24 vm07 bash[23367]: cluster 2026-03-10T10:28:23.209349+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-10T10:28:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:24 vm04 bash[28289]: cluster 2026-03-10T10:28:22.545467+0000 mgr.y (mgr.24422) 562 : cluster [DBG] pgmap v985: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:24 vm04 bash[28289]: cluster 2026-03-10T10:28:22.545467+0000 mgr.y (mgr.24422) 562 : cluster [DBG] pgmap v985: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:24 vm04 bash[28289]: cluster 2026-03-10T10:28:23.209349+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-10T10:28:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:24 vm04 bash[28289]: cluster 2026-03-10T10:28:23.209349+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-10T10:28:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:24 vm04 bash[20742]: cluster 2026-03-10T10:28:22.545467+0000 mgr.y (mgr.24422) 562 : cluster [DBG] pgmap v985: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:24 vm04 bash[20742]: cluster 2026-03-10T10:28:22.545467+0000 mgr.y (mgr.24422) 562 : cluster [DBG] pgmap v985: 268 pgs: 268 active+clean; 455 KiB data, 983 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:24 vm04 bash[20742]: cluster 2026-03-10T10:28:23.209349+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 2026-03-10T10:28:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:24 vm04 bash[20742]: cluster 2026-03-10T10:28:23.209349+0000 mon.a (mon.0) 3312 : cluster [DBG] osdmap e637: 8 total, 8 up, 8 in 
2026-03-10T10:28:25.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:25 vm07 bash[23367]: cluster 2026-03-10T10:28:24.248842+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-10T10:28:25.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:25 vm07 bash[23367]: cluster 2026-03-10T10:28:24.248842+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-10T10:28:25.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:25 vm07 bash[23367]: audit 2026-03-10T10:28:24.254349+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:25.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:25 vm07 bash[23367]: audit 2026-03-10T10:28:24.254349+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:25 vm04 bash[28289]: cluster 2026-03-10T10:28:24.248842+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-10T10:28:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:25 vm04 bash[28289]: cluster 2026-03-10T10:28:24.248842+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-10T10:28:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:25 vm04 bash[28289]: audit 2026-03-10T10:28:24.254349+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:25.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:25 vm04 bash[28289]: audit 2026-03-10T10:28:24.254349+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:25 vm04 bash[20742]: cluster 2026-03-10T10:28:24.248842+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-10T10:28:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:25 vm04 bash[20742]: cluster 2026-03-10T10:28:24.248842+0000 mon.a (mon.0) 3313 : cluster [DBG] osdmap e638: 8 total, 8 up, 8 in 2026-03-10T10:28:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:25 vm04 bash[20742]: audit 2026-03-10T10:28:24.254349+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:25.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:25 vm04 bash[20742]: audit 2026-03-10T10:28:24.254349+0000 mon.a (mon.0) 3314 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:26 vm04 bash[28289]: cluster 2026-03-10T10:28:24.545853+0000 mgr.y (mgr.24422) 563 : cluster [DBG] pgmap v988: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:26 vm04 bash[28289]: cluster 2026-03-10T10:28:24.545853+0000 mgr.y (mgr.24422) 563 : cluster [DBG] pgmap v988: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:26 vm04 bash[28289]: audit 2026-03-10T10:28:25.233845+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:26 vm04 bash[28289]: audit 2026-03-10T10:28:25.233845+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:26 vm04 bash[28289]: cluster 2026-03-10T10:28:25.244291+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:26 vm04 bash[28289]: cluster 2026-03-10T10:28:25.244291+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:26 vm04 bash[28289]: audit 2026-03-10T10:28:25.276244+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:26 vm04 bash[28289]: audit 2026-03-10T10:28:25.276244+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:26 vm04 bash[20742]: cluster 2026-03-10T10:28:24.545853+0000 mgr.y (mgr.24422) 563 : cluster [DBG] pgmap v988: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:26 vm04 bash[20742]: cluster 2026-03-10T10:28:24.545853+0000 mgr.y (mgr.24422) 563 : cluster [DBG] pgmap v988: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:26 vm04 bash[20742]: audit 2026-03-10T10:28:25.233845+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:26 vm04 bash[20742]: audit 2026-03-10T10:28:25.233845+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:26 vm04 bash[20742]: cluster 2026-03-10T10:28:25.244291+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:26 vm04 bash[20742]: cluster 2026-03-10T10:28:25.244291+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:26 vm04 bash[20742]: audit 2026-03-10T10:28:25.276244+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:26 vm04 bash[20742]: audit 2026-03-10T10:28:25.276244+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:26 vm07 bash[23367]: cluster 2026-03-10T10:28:24.545853+0000 mgr.y (mgr.24422) 563 : cluster [DBG] pgmap v988: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:26 vm07 bash[23367]: cluster 2026-03-10T10:28:24.545853+0000 mgr.y (mgr.24422) 563 : cluster [DBG] pgmap v988: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:26 vm07 bash[23367]: audit 2026-03-10T10:28:25.233845+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:26 vm07 bash[23367]: audit 2026-03-10T10:28:25.233845+0000 mon.a (mon.0) 3315 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-132","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:26 vm07 bash[23367]: cluster 2026-03-10T10:28:25.244291+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-10T10:28:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:26 vm07 bash[23367]: cluster 2026-03-10T10:28:25.244291+0000 mon.a (mon.0) 3316 : cluster [DBG] osdmap e639: 8 total, 8 up, 8 in 2026-03-10T10:28:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:26 vm07 bash[23367]: audit 2026-03-10T10:28:25.276244+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:26 vm07 bash[23367]: audit 2026-03-10T10:28:25.276244+0000 mon.a (mon.0) 3317 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: audit 2026-03-10T10:28:26.376869+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: audit 2026-03-10T10:28:26.376869+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: cluster 2026-03-10T10:28:26.433172+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: cluster 2026-03-10T10:28:26.433172+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: audit 2026-03-10T10:28:26.434869+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]: dispatch 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: audit 2026-03-10T10:28:26.434869+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]: dispatch 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: cluster 2026-03-10T10:28:26.546222+0000 mgr.y (mgr.24422) 564 : cluster [DBG] pgmap v991: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: cluster 2026-03-10T10:28:26.546222+0000 mgr.y (mgr.24422) 564 : cluster [DBG] pgmap v991: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: cluster 2026-03-10T10:28:26.638459+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: cluster 2026-03-10T10:28:26.638459+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: audit 2026-03-10T10:28:27.380232+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]': finished 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: audit 2026-03-10T10:28:27.380232+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]': finished 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: cluster 2026-03-10T10:28:27.386234+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: cluster 2026-03-10T10:28:27.386234+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: audit 2026-03-10T10:28:27.386933+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]: dispatch 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:27 vm04 bash[28289]: audit 2026-03-10T10:28:27.386933+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]: dispatch 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: audit 2026-03-10T10:28:26.376869+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: audit 2026-03-10T10:28:26.376869+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: cluster 2026-03-10T10:28:26.433172+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-10T10:28:27.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: cluster 2026-03-10T10:28:26.433172+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: audit 2026-03-10T10:28:26.434869+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]: dispatch 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: audit 2026-03-10T10:28:26.434869+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]: dispatch 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: cluster 2026-03-10T10:28:26.546222+0000 mgr.y (mgr.24422) 564 : cluster [DBG] pgmap v991: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: cluster 2026-03-10T10:28:26.546222+0000 mgr.y (mgr.24422) 564 : cluster [DBG] pgmap v991: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: cluster 2026-03-10T10:28:26.638459+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: cluster 2026-03-10T10:28:26.638459+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: audit 2026-03-10T10:28:27.380232+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]': finished 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: audit 2026-03-10T10:28:27.380232+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]': finished 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: cluster 2026-03-10T10:28:27.386234+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: cluster 2026-03-10T10:28:27.386234+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: audit 2026-03-10T10:28:27.386933+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]: dispatch 2026-03-10T10:28:27.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:27 vm04 bash[20742]: audit 2026-03-10T10:28:27.386933+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]: dispatch 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: audit 2026-03-10T10:28:26.376869+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: audit 2026-03-10T10:28:26.376869+0000 mon.a (mon.0) 3318 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: cluster 2026-03-10T10:28:26.433172+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: cluster 2026-03-10T10:28:26.433172+0000 mon.a (mon.0) 3319 : cluster [DBG] osdmap e640: 8 total, 8 up, 8 in 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: audit 2026-03-10T10:28:26.434869+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]: dispatch 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: audit 2026-03-10T10:28:26.434869+0000 mon.a (mon.0) 3320 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]: dispatch 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: cluster 2026-03-10T10:28:26.546222+0000 mgr.y (mgr.24422) 564 : cluster [DBG] pgmap v991: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: cluster 2026-03-10T10:28:26.546222+0000 mgr.y (mgr.24422) 564 : cluster [DBG] pgmap v991: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: cluster 2026-03-10T10:28:26.638459+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: cluster 2026-03-10T10:28:26.638459+0000 mon.a (mon.0) 3321 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: audit 2026-03-10T10:28:27.380232+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]': finished 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: audit 2026-03-10T10:28:27.380232+0000 mon.a (mon.0) 3322 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]': finished 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: cluster 2026-03-10T10:28:27.386234+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: cluster 2026-03-10T10:28:27.386234+0000 mon.a (mon.0) 3323 : cluster [DBG] osdmap e641: 8 total, 8 up, 8 in 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: audit 2026-03-10T10:28:27.386933+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]: dispatch 2026-03-10T10:28:27.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:27 vm07 bash[23367]: audit 2026-03-10T10:28:27.386933+0000 mon.a (mon.0) 3324 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]: dispatch 2026-03-10T10:28:29.136 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:28:28 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:29 vm04 bash[28289]: audit 2026-03-10T10:28:28.135339+0000 mon.a (mon.0) 3325 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:29 vm04 bash[28289]: audit 2026-03-10T10:28:28.135339+0000 mon.a (mon.0) 3325 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:29 vm04 bash[28289]: audit 2026-03-10T10:28:28.136292+0000 mon.a (mon.0) 3326 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:29 vm04 bash[28289]: audit 2026-03-10T10:28:28.136292+0000 mon.a (mon.0) 3326 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:29 vm04 bash[28289]: cluster 2026-03-10T10:28:28.380317+0000 mon.a (mon.0) 3327 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:29 vm04 bash[28289]: cluster 2026-03-10T10:28:28.380317+0000 mon.a (mon.0) 3327 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:29 vm04 bash[28289]: audit 2026-03-10T10:28:28.383601+0000 mon.a (mon.0) 3328 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]': finished 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:29 vm04 bash[28289]: audit 2026-03-10T10:28:28.383601+0000 mon.a (mon.0) 3328 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]': finished 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:29 vm04 bash[28289]: cluster 2026-03-10T10:28:28.387589+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:29 vm04 bash[28289]: cluster 2026-03-10T10:28:28.387589+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:29 vm04 bash[20742]: audit 2026-03-10T10:28:28.135339+0000 mon.a (mon.0) 3325 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:29 vm04 bash[20742]: audit 2026-03-10T10:28:28.135339+0000 mon.a (mon.0) 3325 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:29 vm04 bash[20742]: audit 2026-03-10T10:28:28.136292+0000 mon.a (mon.0) 3326 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:29 vm04 bash[20742]: audit 2026-03-10T10:28:28.136292+0000 mon.a (mon.0) 3326 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:29 vm04 bash[20742]: cluster 2026-03-10T10:28:28.380317+0000 mon.a (mon.0) 3327 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:29 vm04 bash[20742]: cluster 2026-03-10T10:28:28.380317+0000 mon.a (mon.0) 3327 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:29 vm04 bash[20742]: audit 2026-03-10T10:28:28.383601+0000 mon.a (mon.0) 3328 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]': finished 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:29 vm04 bash[20742]: audit 2026-03-10T10:28:28.383601+0000 mon.a (mon.0) 3328 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]': finished 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:29 vm04 bash[20742]: cluster 2026-03-10T10:28:28.387589+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-10T10:28:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:29 vm04 bash[20742]: cluster 2026-03-10T10:28:28.387589+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-10T10:28:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:29 vm07 bash[23367]: audit 2026-03-10T10:28:28.135339+0000 mon.a (mon.0) 3325 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:29 vm07 bash[23367]: audit 2026-03-10T10:28:28.135339+0000 mon.a (mon.0) 3325 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:29 vm07 bash[23367]: audit 2026-03-10T10:28:28.136292+0000 mon.a (mon.0) 3326 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:29 vm07 bash[23367]: audit 2026-03-10T10:28:28.136292+0000 mon.a (mon.0) 3326 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:29 vm07 bash[23367]: cluster 2026-03-10T10:28:28.380317+0000 mon.a (mon.0) 3327 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:29 vm07 bash[23367]: cluster 2026-03-10T10:28:28.380317+0000 mon.a (mon.0) 3327 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:29 vm07 bash[23367]: audit 2026-03-10T10:28:28.383601+0000 mon.a (mon.0) 3328 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]': finished 2026-03-10T10:28:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:29 vm07 bash[23367]: audit 2026-03-10T10:28:28.383601+0000 mon.a (mon.0) 3328 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-132", "mode": "writeback"}]': finished 2026-03-10T10:28:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:29 vm07 bash[23367]: cluster 2026-03-10T10:28:28.387589+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-10T10:28:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:29 vm07 bash[23367]: cluster 2026-03-10T10:28:28.387589+0000 mon.a (mon.0) 3329 : cluster [DBG] osdmap e642: 8 total, 8 up, 8 in 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:30 vm04 bash[28289]: cluster 2026-03-10T10:28:28.548223+0000 mgr.y (mgr.24422) 565 : cluster [DBG] pgmap v994: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:30 vm04 bash[28289]: cluster 2026-03-10T10:28:28.548223+0000 mgr.y (mgr.24422) 565 : cluster [DBG] pgmap v994: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:30 vm04 bash[28289]: audit 2026-03-10T10:28:28.815560+0000 mgr.y (mgr.24422) 566 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:30 vm04 bash[28289]: audit 2026-03-10T10:28:28.815560+0000 mgr.y (mgr.24422) 566 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:30 vm04 bash[28289]: cluster 2026-03-10T10:28:29.396400+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:30 vm04 bash[28289]: cluster 2026-03-10T10:28:29.396400+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:30 vm04 bash[20742]: cluster 2026-03-10T10:28:28.548223+0000 mgr.y (mgr.24422) 565 : cluster [DBG] pgmap v994: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:30 vm04 bash[20742]: cluster 2026-03-10T10:28:28.548223+0000 mgr.y (mgr.24422) 565 : cluster [DBG] pgmap v994: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:30 vm04 bash[20742]: audit 2026-03-10T10:28:28.815560+0000 mgr.y (mgr.24422) 566 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:30 vm04 bash[20742]: audit 2026-03-10T10:28:28.815560+0000 mgr.y (mgr.24422) 566 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:30 vm04 bash[20742]: cluster 2026-03-10T10:28:29.396400+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in 2026-03-10T10:28:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:30 vm04 bash[20742]: cluster 
2026-03-10T10:28:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:30 vm07 bash[23367]: cluster 2026-03-10T10:28:28.548223+0000 mgr.y (mgr.24422) 565 : cluster [DBG] pgmap v994: 268 pgs: 13 unknown, 255 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:30 vm07 bash[23367]: audit 2026-03-10T10:28:28.815560+0000 mgr.y (mgr.24422) 566 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:30 vm07 bash[23367]: cluster 2026-03-10T10:28:29.396400+0000 mon.a (mon.0) 3330 : cluster [DBG] osdmap e643: 8 total, 8 up, 8 in
2026-03-10T10:28:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:31 vm04 bash[28289]: cluster 2026-03-10T10:28:30.402638+0000 mon.a (mon.0) 3331 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in
2026-03-10T10:28:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:31 vm04 bash[28289]: audit 2026-03-10T10:28:30.447732+0000 mon.a (mon.0) 3332 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:31.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:31 vm04 bash[28289]: cluster 2026-03-10T10:28:30.548518+0000 mgr.y (mgr.24422) 567 : cluster [DBG] pgmap v997: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:28:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:31 vm04 bash[20742]: cluster 2026-03-10T10:28:30.402638+0000 mon.a (mon.0) 3331 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in
2026-03-10T10:28:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:31 vm04 bash[20742]: audit 2026-03-10T10:28:30.447732+0000 mon.a (mon.0) 3332 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:31.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:31 vm04 bash[20742]: cluster 2026-03-10T10:28:30.548518+0000 mgr.y (mgr.24422) 567 : cluster [DBG] pgmap v997: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:28:31.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:31 vm07 bash[23367]: cluster 2026-03-10T10:28:30.402638+0000 mon.a (mon.0) 3331 : cluster [DBG] osdmap e644: 8 total, 8 up, 8 in
2026-03-10T10:28:31.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:31 vm07 bash[23367]: audit 2026-03-10T10:28:30.447732+0000 mon.a (mon.0) 3332 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:31.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:31 vm07 bash[23367]: cluster 2026-03-10T10:28:30.548518+0000 mgr.y (mgr.24422) 567 : cluster [DBG] pgmap v997: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:28:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:32 vm04 bash[28289]: audit 2026-03-10T10:28:31.408803+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:32 vm04 bash[28289]: cluster 2026-03-10T10:28:31.413327+0000 mon.a (mon.0) 3334 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in
2026-03-10T10:28:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:32 vm04 bash[28289]: audit 2026-03-10T10:28:31.431251+0000 mon.a (mon.0) 3335 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]: dispatch
2026-03-10T10:28:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:32 vm04 bash[28289]: cluster 2026-03-10T10:28:31.639036+0000 mon.a (mon.0) 3336 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:32 vm04 bash[20742]: audit 2026-03-10T10:28:31.408803+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:32 vm04 bash[20742]: cluster 2026-03-10T10:28:31.413327+0000 mon.a (mon.0) 3334 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in
2026-03-10T10:28:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:32 vm04 bash[20742]: audit 2026-03-10T10:28:31.431251+0000 mon.a (mon.0) 3335 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]: dispatch
2026-03-10T10:28:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:32 vm04 bash[20742]: cluster 2026-03-10T10:28:31.639036+0000 mon.a (mon.0) 3336 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:32 vm07 bash[23367]: audit 2026-03-10T10:28:31.408803+0000 mon.a (mon.0) 3333 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:32 vm07 bash[23367]: cluster 2026-03-10T10:28:31.413327+0000 mon.a (mon.0) 3334 : cluster [DBG] osdmap e645: 8 total, 8 up, 8 in
2026-03-10T10:28:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:32 vm07 bash[23367]: audit 2026-03-10T10:28:31.431251+0000 mon.a (mon.0) 3335 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]: dispatch
2026-03-10T10:28:32.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:32 vm07 bash[23367]: cluster 2026-03-10T10:28:31.639036+0000 mon.a (mon.0) 3336 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:33.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:33 vm04 bash[20742]: audit 2026-03-10T10:28:32.435990+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]': finished
2026-03-10T10:28:33.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:33 vm04 bash[20742]: cluster 2026-03-10T10:28:32.438316+0000 mon.a (mon.0) 3338 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in
2026-03-10T10:28:33.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:33 vm04 bash[20742]: cluster 2026-03-10T10:28:32.548851+0000 mgr.y (mgr.24422) 568 : cluster [DBG] pgmap v1000: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:28:33.456 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:33 vm04 bash[20742]: audit 2026-03-10T10:28:33.271617+0000 mon.a (mon.0) 3339 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:28:33.456 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:28:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:28:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
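Audit records 3332 through 3337 are one remove-overlay/set-overlay cycle against the base pool; each change that commits bumps the osdmap epoch (e645, e646 above). A hand-run sketch of the same pair, with pool names taken from the log:

    # Detach the overlay from the base pool, then route client I/O back
    # through the cache pool.
    ceph osd tier remove-overlay test-rados-api-vm04-59491-111
    ceph osd tier set-overlay test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-132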
2026-03-10T10:28:33.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:33 vm04 bash[28289]: audit 2026-03-10T10:28:32.435990+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]': finished
2026-03-10T10:28:33.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:33 vm04 bash[28289]: cluster 2026-03-10T10:28:32.438316+0000 mon.a (mon.0) 3338 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in
2026-03-10T10:28:33.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:33 vm04 bash[28289]: cluster 2026-03-10T10:28:32.548851+0000 mgr.y (mgr.24422) 568 : cluster [DBG] pgmap v1000: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:28:33.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:33 vm04 bash[28289]: audit 2026-03-10T10:28:33.271617+0000 mon.a (mon.0) 3339 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:28:33.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:33 vm07 bash[23367]: audit 2026-03-10T10:28:32.435990+0000 mon.a (mon.0) 3337 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-132"}]': finished
2026-03-10T10:28:33.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:33 vm07 bash[23367]: cluster 2026-03-10T10:28:32.438316+0000 mon.a (mon.0) 3338 : cluster [DBG] osdmap e646: 8 total, 8 up, 8 in
2026-03-10T10:28:33.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:33 vm07 bash[23367]: cluster 2026-03-10T10:28:32.548851+0000 mgr.y (mgr.24422) 568 : cluster [DBG] pgmap v1000: 268 pgs: 268 active+clean; 455 KiB data, 988 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
2026-03-10T10:28:33.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:33 vm07 bash[23367]: audit 2026-03-10T10:28:33.271617+0000 mon.a (mon.0) 3339 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:28:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:34 vm07 bash[23367]: cluster 2026-03-10T10:28:33.459703+0000 mon.a (mon.0) 3340 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in
2026-03-10T10:28:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:34 vm07 bash[23367]: audit 2026-03-10T10:28:33.535921+0000 mon.a (mon.0) 3341 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:34 vm07 bash[23367]: audit 2026-03-10T10:28:33.625036+0000 mon.a (mon.0) 3342 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:28:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:34 vm07 bash[23367]: audit 2026-03-10T10:28:33.625646+0000 mon.a (mon.0) 3343 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:28:34.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:34 vm07 bash[23367]: audit 2026-03-10T10:28:33.631014+0000 mon.a (mon.0) 3344 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:28:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:34 vm04 bash[28289]: cluster 2026-03-10T10:28:33.459703+0000 mon.a (mon.0) 3340 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in
2026-03-10T10:28:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:34 vm04 bash[28289]: audit 2026-03-10T10:28:33.535921+0000 mon.a (mon.0) 3341 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:34 vm04 bash[28289]: audit 2026-03-10T10:28:33.625036+0000 mon.a (mon.0) 3342 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:28:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:34 vm04 bash[28289]: audit 2026-03-10T10:28:33.625646+0000 mon.a (mon.0) 3343 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:28:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:34 vm04 bash[28289]: audit 2026-03-10T10:28:33.631014+0000 mon.a (mon.0) 3344 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:28:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:34 vm04 bash[20742]: cluster 2026-03-10T10:28:33.459703+0000 mon.a (mon.0) 3340 : cluster [DBG] osdmap e647: 8 total, 8 up, 8 in
2026-03-10T10:28:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:34 vm04 bash[20742]: audit 2026-03-10T10:28:33.535921+0000 mon.a (mon.0) 3341 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:34 vm04 bash[20742]: audit 2026-03-10T10:28:33.625036+0000 mon.a (mon.0) 3342 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:28:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:34 vm04 bash[20742]: audit 2026-03-10T10:28:33.625646+0000 mon.a (mon.0) 3343 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:28:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:34 vm04 bash[20742]: audit 2026-03-10T10:28:33.631014+0000 mon.a (mon.0) 3344 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:28:35.789 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:35 vm07 bash[23367]: audit 2026-03-10T10:28:34.514152+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:35.789 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:35 vm07 bash[23367]: cluster 2026-03-10T10:28:34.517994+0000 mon.a (mon.0) 3346 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in
2026-03-10T10:28:35.789 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:35 vm07 bash[23367]: audit 2026-03-10T10:28:34.518726+0000 mon.a (mon.0) 3347 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132"}]: dispatch
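The mgr.y records 3342 and 3343 are most likely the cephadm mgr module refreshing the config and keyring it distributes to managed hosts; the same two monitor commands can be run by hand:

    # Emit a minimal ceph.conf (essentially just mon addresses) and fetch the
    # admin credentials, the two pieces cephadm maintains under /etc/ceph.
    ceph config generate-minimal-conf
    ceph auth get client.admin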
2026-03-10T10:28:35.789 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:35 vm07 bash[23367]: cluster 2026-03-10T10:28:34.549217+0000 mgr.y (mgr.24422) 569 : cluster [DBG] pgmap v1003: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T10:28:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:35 vm04 bash[28289]: audit 2026-03-10T10:28:34.514152+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:35 vm04 bash[28289]: cluster 2026-03-10T10:28:34.517994+0000 mon.a (mon.0) 3346 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in
2026-03-10T10:28:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:35 vm04 bash[28289]: audit 2026-03-10T10:28:34.518726+0000 mon.a (mon.0) 3347 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132"}]: dispatch
2026-03-10T10:28:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:35 vm04 bash[28289]: cluster 2026-03-10T10:28:34.549217+0000 mgr.y (mgr.24422) 569 : cluster [DBG] pgmap v1003: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T10:28:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:35 vm04 bash[20742]: audit 2026-03-10T10:28:34.514152+0000 mon.a (mon.0) 3345 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:35 vm04 bash[20742]: cluster 2026-03-10T10:28:34.517994+0000 mon.a (mon.0) 3346 : cluster [DBG] osdmap e648: 8 total, 8 up, 8 in
2026-03-10T10:28:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:35 vm04 bash[20742]: audit 2026-03-10T10:28:34.518726+0000 mon.a (mon.0) 3347 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132"}]: dispatch
2026-03-10T10:28:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:35 vm04 bash[20742]: cluster 2026-03-10T10:28:34.549217+0000 mgr.y (mgr.24422) 569 : cluster [DBG] pgmap v1003: 268 pgs: 1 active+clean+snaptrim, 267 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 3 op/s
2026-03-10T10:28:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:36 vm04 bash[28289]: cluster 2026-03-10T10:28:35.514422+0000 mon.a (mon.0) 3348 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:28:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:36 vm04 bash[28289]: audit 2026-03-10T10:28:35.517593+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132"}]': finished
2026-03-10T10:28:36.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:36 vm04 bash[28289]: cluster 2026-03-10T10:28:35.521092+0000 mon.a (mon.0) 3350 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in
2026-03-10T10:28:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:36 vm04 bash[20742]: cluster 2026-03-10T10:28:35.514422+0000 mon.a (mon.0) 3348 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:28:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:36 vm04 bash[20742]: audit 2026-03-10T10:28:35.517593+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132"}]': finished
2026-03-10T10:28:36.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:36 vm04 bash[20742]: cluster 2026-03-10T10:28:35.521092+0000 mon.a (mon.0) 3350 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in
2026-03-10T10:28:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:36 vm07 bash[23367]: cluster 2026-03-10T10:28:35.514422+0000 mon.a (mon.0) 3348 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:28:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:36 vm07 bash[23367]: audit 2026-03-10T10:28:35.517593+0000 mon.a (mon.0) 3349 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-132"}]': finished
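Records 3345 through 3350 finish tearing down the first cache tier: the overlay is removed, the tier pool is detached, and the CACHE_POOL_NO_HIT_SET warning clears once the pool is no longer acting as a cache. A CLI sketch of the same teardown, names from the log:

    # Remove the overlay first; a tier still acting as an overlay
    # cannot be detached.
    ceph osd tier remove-overlay test-rados-api-vm04-59491-111
    ceph osd tier remove test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-132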
2026-03-10T10:28:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:36 vm07 bash[23367]: cluster 2026-03-10T10:28:35.521092+0000 mon.a (mon.0) 3350 : cluster [DBG] osdmap e649: 8 total, 8 up, 8 in
2026-03-10T10:28:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:37 vm04 bash[28289]: cluster 2026-03-10T10:28:36.543110+0000 mon.a (mon.0) 3351 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in
2026-03-10T10:28:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:37 vm04 bash[28289]: cluster 2026-03-10T10:28:36.549541+0000 mgr.y (mgr.24422) 570 : cluster [DBG] pgmap v1006: 236 pgs: 236 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:28:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:37 vm04 bash[20742]: cluster 2026-03-10T10:28:36.543110+0000 mon.a (mon.0) 3351 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in
2026-03-10T10:28:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:37 vm04 bash[20742]: cluster 2026-03-10T10:28:36.549541+0000 mgr.y (mgr.24422) 570 : cluster [DBG] pgmap v1006: 236 pgs: 236 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:28:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:37 vm07 bash[23367]: cluster 2026-03-10T10:28:36.543110+0000 mon.a (mon.0) 3351 : cluster [DBG] osdmap e650: 8 total, 8 up, 8 in
2026-03-10T10:28:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:37 vm07 bash[23367]: cluster 2026-03-10T10:28:36.549541+0000 mgr.y (mgr.24422) 570 : cluster [DBG] pgmap v1006: 236 pgs: 236 active+clean; 455 KiB data, 989 MiB used, 159 GiB / 160 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
2026-03-10T10:28:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:38 vm04 bash[28289]: cluster 2026-03-10T10:28:37.553012+0000 mon.a (mon.0) 3352 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in
2026-03-10T10:28:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:38 vm04 bash[28289]: audit 2026-03-10T10:28:37.574738+0000 mon.a (mon.0) 3353 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-134","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:28:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:38 vm04 bash[20742]: cluster 2026-03-10T10:28:37.553012+0000 mon.a (mon.0) 3352 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in
2026-03-10T10:28:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:38 vm04 bash[20742]: audit 2026-03-10T10:28:37.574738+0000 mon.a (mon.0) 3353 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-134","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:28:39.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:28:38 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:28:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:38 vm07 bash[23367]: cluster 2026-03-10T10:28:37.553012+0000 mon.a (mon.0) 3352 : cluster [DBG] osdmap e651: 8 total, 8 up, 8 in
2026-03-10T10:28:39.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:38 vm07 bash[23367]: audit 2026-03-10T10:28:37.574738+0000 mon.a (mon.0) 3353 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-134","app": "rados","yes_i_really_mean_it": true}]: dispatch
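Record 3353 tags the next test pool with an application before it is attached as a tier; pools without a tag are counted by the POOL_APP_NOT_ENABLED health check, which is why that warning's pool count keeps changing through this run. A hand-run equivalent:

    # Tag the pool so POOL_APP_NOT_ENABLED stops counting it; the flag matches
    # the "yes_i_really_mean_it": true field in the audit record above.
    ceph osd pool application enable test-rados-api-vm04-59491-134 rados --yes-i-really-mean-it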
2026-03-10T10:28:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:39 vm07 bash[23367]: cluster 2026-03-10T10:28:38.550547+0000 mgr.y (mgr.24422) 571 : cluster [DBG] pgmap v1008: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 20 unknown, 235 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:39 vm07 bash[23367]: audit 2026-03-10T10:28:38.589285+0000 mon.a (mon.0) 3354 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-134","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:28:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:39 vm07 bash[23367]: cluster 2026-03-10T10:28:38.591717+0000 mon.a (mon.0) 3355 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in
2026-03-10T10:28:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:39 vm07 bash[23367]: audit 2026-03-10T10:28:38.592940+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:28:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:39 vm07 bash[23367]: cluster 2026-03-10T10:28:38.601115+0000 mon.a (mon.0) 3357 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:39 vm07 bash[23367]: audit 2026-03-10T10:28:38.824135+0000 mgr.y (mgr.24422) 572 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:39 vm04 bash[28289]: cluster 2026-03-10T10:28:38.550547+0000 mgr.y (mgr.24422) 571 : cluster [DBG] pgmap v1008: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 20 unknown, 235 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:39 vm04 bash[28289]: audit 2026-03-10T10:28:38.589285+0000 mon.a (mon.0) 3354 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-134","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:39 vm04 bash[28289]: cluster 2026-03-10T10:28:38.591717+0000 mon.a (mon.0) 3355 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:39 vm04 bash[28289]: audit 2026-03-10T10:28:38.592940+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:39 vm04 bash[28289]: cluster 2026-03-10T10:28:38.601115+0000 mon.a (mon.0) 3357 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:39 vm04 bash[28289]: audit 2026-03-10T10:28:38.824135+0000 mgr.y (mgr.24422) 572 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:39 vm04 bash[20742]: cluster 2026-03-10T10:28:38.550547+0000 mgr.y (mgr.24422) 571 : cluster [DBG] pgmap v1008: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 20 unknown, 235 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:39 vm04 bash[20742]: audit 2026-03-10T10:28:38.589285+0000 mon.a (mon.0) 3354 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-134","app": "rados","yes_i_really_mean_it": true}]': finished
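Records 3354 onward rebuild the tier with the freshly tagged pool: tier add with --force-nonempty (the pool already holds objects), then set-overlay. A CLI sketch of the same two steps:

    # Attach a non-empty pool as a cache tier, then direct client I/O through it.
    ceph osd tier add test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-134 --force-nonempty
    ceph osd tier set-overlay test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-134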
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:39 vm04 bash[20742]: cluster 2026-03-10T10:28:38.591717+0000 mon.a (mon.0) 3355 : cluster [DBG] osdmap e652: 8 total, 8 up, 8 in
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:39 vm04 bash[20742]: audit 2026-03-10T10:28:38.592940+0000 mon.a (mon.0) 3356 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:39 vm04 bash[20742]: cluster 2026-03-10T10:28:38.601115+0000 mon.a (mon.0) 3357 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:39 vm04 bash[20742]: audit 2026-03-10T10:28:38.824135+0000 mgr.y (mgr.24422) 572 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:41.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:40 vm04 bash[28289]: audit 2026-03-10T10:28:39.726017+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:41.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:40 vm04 bash[28289]: cluster 2026-03-10T10:28:39.728749+0000 mon.a (mon.0) 3359 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in
2026-03-10T10:28:41.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:40 vm04 bash[28289]: audit 2026-03-10T10:28:39.729296+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-134"}]: dispatch
2026-03-10T10:28:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:40 vm04 bash[20742]: audit 2026-03-10T10:28:39.726017+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:40 vm04 bash[20742]: cluster 2026-03-10T10:28:39.728749+0000 mon.a (mon.0) 3359 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in
2026-03-10T10:28:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:40 vm04 bash[20742]: audit 2026-03-10T10:28:39.729296+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-134"}]: dispatch
2026-03-10T10:28:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:40 vm07 bash[23367]: audit 2026-03-10T10:28:39.726017+0000 mon.a (mon.0) 3358 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:40 vm07 bash[23367]: cluster 2026-03-10T10:28:39.728749+0000 mon.a (mon.0) 3359 : cluster [DBG] osdmap e653: 8 total, 8 up, 8 in
2026-03-10T10:28:41.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:40 vm07 bash[23367]: audit 2026-03-10T10:28:39.729296+0000 mon.a (mon.0) 3360 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-134"}]: dispatch
2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:41 vm04 bash[28289]: cluster 2026-03-10T10:28:40.550840+0000 mgr.y (mgr.24422) 573 : cluster [DBG] pgmap v1011: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:41 vm04 bash[28289]: audit 2026-03-10T10:28:40.842072+0000 mon.a (mon.0) 3361 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:41 vm04 bash[28289]: cluster 2026-03-10T10:28:40.849020+0000 mon.a (mon.0) 3362 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:41 vm04 bash[28289]: cluster 2026-03-10T10:28:40.849020+0000 mon.a (mon.0) 3362 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:41 vm04 bash[28289]: audit 2026-03-10T10:28:40.849485+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]: dispatch 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:41 vm04 bash[28289]: audit 2026-03-10T10:28:40.849485+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]: dispatch 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:41 vm04 bash[20742]: cluster 2026-03-10T10:28:40.550840+0000 mgr.y (mgr.24422) 573 : cluster [DBG] pgmap v1011: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:41 vm04 bash[20742]: cluster 2026-03-10T10:28:40.550840+0000 mgr.y (mgr.24422) 573 : cluster [DBG] pgmap v1011: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:41 vm04 bash[20742]: audit 2026-03-10T10:28:40.842072+0000 mon.a (mon.0) 3361 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:41 vm04 bash[20742]: audit 2026-03-10T10:28:40.842072+0000 mon.a (mon.0) 3361 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:41 vm04 bash[20742]: cluster 2026-03-10T10:28:40.849020+0000 mon.a (mon.0) 3362 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:41 vm04 bash[20742]: cluster 2026-03-10T10:28:40.849020+0000 mon.a (mon.0) 3362 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:41 vm04 bash[20742]: audit 2026-03-10T10:28:40.849485+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]: dispatch 2026-03-10T10:28:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:41 vm04 bash[20742]: audit 2026-03-10T10:28:40.849485+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]: dispatch 2026-03-10T10:28:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:41 vm07 bash[23367]: cluster 2026-03-10T10:28:40.550840+0000 mgr.y (mgr.24422) 573 : cluster [DBG] pgmap v1011: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:41 vm07 bash[23367]: cluster 2026-03-10T10:28:40.550840+0000 mgr.y (mgr.24422) 573 : cluster [DBG] pgmap v1011: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:41 vm07 bash[23367]: audit 2026-03-10T10:28:40.842072+0000 mon.a (mon.0) 3361 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:41 vm07 bash[23367]: audit 2026-03-10T10:28:40.842072+0000 mon.a (mon.0) 3361 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:41 vm07 bash[23367]: cluster 2026-03-10T10:28:40.849020+0000 mon.a (mon.0) 3362 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-10T10:28:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:41 vm07 bash[23367]: cluster 2026-03-10T10:28:40.849020+0000 mon.a (mon.0) 3362 : cluster [DBG] osdmap e654: 8 total, 8 up, 8 in 2026-03-10T10:28:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:41 vm07 bash[23367]: audit 2026-03-10T10:28:40.849485+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]: dispatch 2026-03-10T10:28:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:41 vm07 bash[23367]: audit 2026-03-10T10:28:40.849485+0000 mon.a (mon.0) 3363 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]: dispatch 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:42 vm04 bash[28289]: cluster 2026-03-10T10:28:41.842305+0000 mon.a (mon.0) 3364 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:42 vm04 bash[28289]: cluster 2026-03-10T10:28:41.842305+0000 mon.a (mon.0) 3364 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:42 vm04 bash[28289]: audit 2026-03-10T10:28:41.845968+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]': finished 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:42 vm04 bash[28289]: audit 2026-03-10T10:28:41.845968+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]': finished 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:42 vm04 bash[28289]: cluster 2026-03-10T10:28:41.849180+0000 mon.a (mon.0) 3366 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:42 vm04 bash[28289]: cluster 2026-03-10T10:28:41.849180+0000 mon.a (mon.0) 3366 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:42 vm04 bash[28289]: audit 2026-03-10T10:28:41.897237+0000 mon.a (mon.0) 3367 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:42 vm04 bash[28289]: audit 2026-03-10T10:28:41.897237+0000 mon.a (mon.0) 3367 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:42 vm04 bash[20742]: cluster 2026-03-10T10:28:41.842305+0000 mon.a (mon.0) 3364 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:42 vm04 bash[20742]: cluster 2026-03-10T10:28:41.842305+0000 mon.a (mon.0) 3364 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:42 vm04 bash[20742]: audit 2026-03-10T10:28:41.845968+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]': finished 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:42 vm04 bash[20742]: audit 2026-03-10T10:28:41.845968+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]': finished 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:42 vm04 bash[20742]: cluster 2026-03-10T10:28:41.849180+0000 mon.a (mon.0) 3366 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:42 vm04 bash[20742]: cluster 2026-03-10T10:28:41.849180+0000 mon.a (mon.0) 3366 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:42 vm04 bash[20742]: audit 2026-03-10T10:28:41.897237+0000 mon.a (mon.0) 3367 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:42 vm04 bash[20742]: audit 2026-03-10T10:28:41.897237+0000 mon.a (mon.0) 3367 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:28:43.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:28:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:28:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:28:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:42 vm07 bash[23367]: cluster 2026-03-10T10:28:41.842305+0000 mon.a (mon.0) 3364 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:42 vm07 bash[23367]: cluster 2026-03-10T10:28:41.842305+0000 mon.a (mon.0) 3364 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:28:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:42 vm07 bash[23367]: audit 2026-03-10T10:28:41.845968+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]': finished 2026-03-10T10:28:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:42 vm07 bash[23367]: audit 2026-03-10T10:28:41.845968+0000 mon.a (mon.0) 3365 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-134", "mode": "writeback"}]': finished 2026-03-10T10:28:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:42 vm07 bash[23367]: cluster 2026-03-10T10:28:41.849180+0000 mon.a (mon.0) 3366 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-10T10:28:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:42 vm07 bash[23367]: cluster 2026-03-10T10:28:41.849180+0000 mon.a (mon.0) 3366 : cluster [DBG] osdmap e655: 8 total, 8 up, 8 in 2026-03-10T10:28:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:42 vm07 bash[23367]: audit 2026-03-10T10:28:41.897237+0000 mon.a (mon.0) 3367 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:28:43.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:42 vm07 bash[23367]: audit 2026-03-10T10:28:41.897237+0000 mon.a (mon.0) 3367 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: cluster 2026-03-10T10:28:42.551138+0000 mgr.y (mgr.24422) 574 : cluster [DBG] pgmap v1014: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: cluster 2026-03-10T10:28:42.551138+0000 mgr.y (mgr.24422) 574 : cluster [DBG] pgmap v1014: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: audit 2026-03-10T10:28:42.896567+0000 mon.a (mon.0) 3368 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: audit 2026-03-10T10:28:42.896567+0000 mon.a (mon.0) 3368 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: cluster 2026-03-10T10:28:42.906065+0000 mon.a (mon.0) 3369 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: cluster 2026-03-10T10:28:42.906065+0000 mon.a (mon.0) 3369 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: audit 2026-03-10T10:28:42.906768+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]: dispatch 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: audit 2026-03-10T10:28:42.906768+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]: dispatch 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: audit 2026-03-10T10:28:43.146856+0000 mon.a (mon.0) 3371 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: audit 2026-03-10T10:28:43.146856+0000 mon.a (mon.0) 3371 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: audit 2026-03-10T10:28:43.148258+0000 mon.a (mon.0) 3372 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:43 vm04 bash[28289]: audit 2026-03-10T10:28:43.148258+0000 mon.a (mon.0) 3372 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: cluster 2026-03-10T10:28:42.551138+0000 mgr.y (mgr.24422) 574 : cluster [DBG] pgmap v1014: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: cluster 2026-03-10T10:28:42.551138+0000 mgr.y (mgr.24422) 574 : cluster [DBG] pgmap v1014: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: audit 2026-03-10T10:28:42.896567+0000 mon.a (mon.0) 3368 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: audit 2026-03-10T10:28:42.896567+0000 mon.a (mon.0) 3368 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: cluster 2026-03-10T10:28:42.906065+0000 mon.a (mon.0) 3369 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: cluster 2026-03-10T10:28:42.906065+0000 mon.a (mon.0) 3369 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: audit 2026-03-10T10:28:42.906768+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]: dispatch 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: audit 2026-03-10T10:28:42.906768+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]: dispatch 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: audit 2026-03-10T10:28:43.146856+0000 mon.a (mon.0) 3371 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: audit 2026-03-10T10:28:43.146856+0000 mon.a (mon.0) 3371 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: audit 2026-03-10T10:28:43.148258+0000 mon.a (mon.0) 3372 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:43 vm04 bash[20742]: audit 2026-03-10T10:28:43.148258+0000 mon.a (mon.0) 3372 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: cluster 2026-03-10T10:28:42.551138+0000 mgr.y (mgr.24422) 574 : cluster [DBG] pgmap v1014: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: cluster 2026-03-10T10:28:42.551138+0000 mgr.y (mgr.24422) 574 : cluster [DBG] pgmap v1014: 268 pgs: 12 creating+peering, 1 active+clean+snaptrim, 255 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: audit 2026-03-10T10:28:42.896567+0000 mon.a (mon.0) 3368 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: audit 2026-03-10T10:28:42.896567+0000 mon.a (mon.0) 3368 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: cluster 2026-03-10T10:28:42.906065+0000 mon.a (mon.0) 3369 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: cluster 2026-03-10T10:28:42.906065+0000 mon.a (mon.0) 3369 : cluster [DBG] osdmap e656: 8 total, 8 up, 8 in 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: audit 2026-03-10T10:28:42.906768+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]: dispatch 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: audit 2026-03-10T10:28:42.906768+0000 mon.a (mon.0) 3370 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]: dispatch 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: audit 2026-03-10T10:28:43.146856+0000 mon.a (mon.0) 3371 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: audit 2026-03-10T10:28:43.146856+0000 mon.a (mon.0) 3371 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: audit 2026-03-10T10:28:43.148258+0000 mon.a (mon.0) 3372 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:43 vm07 bash[23367]: audit 2026-03-10T10:28:43.148258+0000 mon.a (mon.0) 3372 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:44 vm04 bash[28289]: cluster 2026-03-10T10:28:43.896692+0000 mon.a (mon.0) 3373 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:44 vm04 bash[28289]: cluster 2026-03-10T10:28:43.896692+0000 mon.a (mon.0) 3373 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:44 vm04 bash[28289]: audit 2026-03-10T10:28:43.900441+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:44 vm04 bash[28289]: audit 2026-03-10T10:28:43.900441+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:44 vm04 bash[28289]: cluster 2026-03-10T10:28:43.905664+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:44 vm04 bash[28289]: cluster 2026-03-10T10:28:43.905664+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:44 vm04 bash[20742]: cluster 2026-03-10T10:28:43.896692+0000 mon.a (mon.0) 3373 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:44 vm04 bash[20742]: cluster 2026-03-10T10:28:43.896692+0000 mon.a (mon.0) 3373 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:44 vm04 bash[20742]: audit 2026-03-10T10:28:43.900441+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:44 vm04 bash[20742]: audit 2026-03-10T10:28:43.900441+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:44 vm04 bash[20742]: cluster 2026-03-10T10:28:43.905664+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-10T10:28:45.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:44 vm04 bash[20742]: cluster 2026-03-10T10:28:43.905664+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-10T10:28:45.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:44 vm07 bash[23367]: cluster 2026-03-10T10:28:43.896692+0000 mon.a (mon.0) 3373 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:45.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:44 vm07 bash[23367]: cluster 2026-03-10T10:28:43.896692+0000 mon.a (mon.0) 3373 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:28:45.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:44 vm07 bash[23367]: audit 2026-03-10T10:28:43.900441+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:45.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:44 vm07 bash[23367]: audit 2026-03-10T10:28:43.900441+0000 mon.a (mon.0) 3374 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-134"}]': finished 2026-03-10T10:28:45.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:44 vm07 bash[23367]: cluster 2026-03-10T10:28:43.905664+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-10T10:28:45.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:44 vm07 bash[23367]: cluster 2026-03-10T10:28:43.905664+0000 mon.a (mon.0) 3375 : cluster [DBG] osdmap e657: 8 total, 8 up, 8 in 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:45 vm04 bash[28289]: cluster 2026-03-10T10:28:44.551468+0000 mgr.y (mgr.24422) 575 : cluster [DBG] pgmap v1017: 268 pgs: 268 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:45 vm04 bash[28289]: cluster 2026-03-10T10:28:44.551468+0000 mgr.y (mgr.24422) 575 : cluster [DBG] pgmap v1017: 268 pgs: 268 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:45 vm04 bash[28289]: cluster 2026-03-10T10:28:44.909146+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:45 vm04 bash[28289]: cluster 2026-03-10T10:28:44.909146+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:45 vm04 bash[28289]: cluster 2026-03-10T10:28:44.924023+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:45 vm04 bash[28289]: cluster 2026-03-10T10:28:44.924023+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:45 vm04 bash[20742]: cluster 2026-03-10T10:28:44.551468+0000 mgr.y (mgr.24422) 575 : cluster [DBG] pgmap v1017: 268 pgs: 268 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:45 vm04 bash[20742]: cluster 2026-03-10T10:28:44.551468+0000 mgr.y (mgr.24422) 575 : cluster [DBG] pgmap v1017: 268 pgs: 268 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:45 vm04 bash[20742]: cluster 2026-03-10T10:28:44.909146+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:45 vm04 bash[20742]: cluster 2026-03-10T10:28:44.909146+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:45 vm04 bash[20742]: cluster 2026-03-10T10:28:44.924023+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-10T10:28:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:45 vm04 
bash[20742]: cluster 2026-03-10T10:28:44.924023+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-10T10:28:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:45 vm07 bash[23367]: cluster 2026-03-10T10:28:44.551468+0000 mgr.y (mgr.24422) 575 : cluster [DBG] pgmap v1017: 268 pgs: 268 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-10T10:28:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:45 vm07 bash[23367]: cluster 2026-03-10T10:28:44.551468+0000 mgr.y (mgr.24422) 575 : cluster [DBG] pgmap v1017: 268 pgs: 268 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.2 KiB/s wr, 3 op/s 2026-03-10T10:28:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:45 vm07 bash[23367]: cluster 2026-03-10T10:28:44.909146+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:45 vm07 bash[23367]: cluster 2026-03-10T10:28:44.909146+0000 mon.a (mon.0) 3376 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:28:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:45 vm07 bash[23367]: cluster 2026-03-10T10:28:44.924023+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-10T10:28:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:45 vm07 bash[23367]: cluster 2026-03-10T10:28:44.924023+0000 mon.a (mon.0) 3377 : cluster [DBG] osdmap e658: 8 total, 8 up, 8 in 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:46 vm04 bash[28289]: cluster 2026-03-10T10:28:45.952544+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:46 vm04 bash[28289]: cluster 2026-03-10T10:28:45.952544+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:46 vm04 bash[28289]: audit 2026-03-10T10:28:45.955170+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:46 vm04 bash[28289]: audit 2026-03-10T10:28:45.955170+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:46 vm04 bash[28289]: audit 2026-03-10T10:28:46.645441+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:46 vm04 bash[28289]: audit 2026-03-10T10:28:46.645441+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:46 vm04 bash[28289]: cluster 2026-03-10T10:28:46.650348+0000 mon.a (mon.0) 3381 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:46 vm04 bash[28289]: cluster 2026-03-10T10:28:46.650348+0000 mon.a (mon.0) 3381 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:46 vm04 bash[28289]: audit 2026-03-10T10:28:46.652410+0000 mon.a (mon.0) 3382 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:46 vm04 bash[28289]: audit 2026-03-10T10:28:46.652410+0000 mon.a (mon.0) 3382 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:46 vm04 bash[20742]: cluster 2026-03-10T10:28:45.952544+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:46 vm04 bash[20742]: cluster 2026-03-10T10:28:45.952544+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:46 vm04 bash[20742]: audit 2026-03-10T10:28:45.955170+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:46 vm04 bash[20742]: audit 2026-03-10T10:28:45.955170+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:46 vm04 bash[20742]: audit 2026-03-10T10:28:46.645441+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:46 vm04 bash[20742]: audit 2026-03-10T10:28:46.645441+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:46 vm04 bash[20742]: cluster 2026-03-10T10:28:46.650348+0000 mon.a (mon.0) 3381 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:46 vm04 bash[20742]: cluster 2026-03-10T10:28:46.650348+0000 mon.a (mon.0) 3381 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:46 vm04 bash[20742]: audit 2026-03-10T10:28:46.652410+0000 mon.a (mon.0) 3382 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:46 vm04 bash[20742]: audit 2026-03-10T10:28:46.652410+0000 mon.a (mon.0) 3382 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:46 vm07 bash[23367]: cluster 2026-03-10T10:28:45.952544+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-10T10:28:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:46 vm07 bash[23367]: cluster 2026-03-10T10:28:45.952544+0000 mon.a (mon.0) 3378 : cluster [DBG] osdmap e659: 8 total, 8 up, 8 in 2026-03-10T10:28:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:46 vm07 bash[23367]: audit 2026-03-10T10:28:45.955170+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:46 vm07 bash[23367]: audit 2026-03-10T10:28:45.955170+0000 mon.a (mon.0) 3379 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:28:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:46 vm07 bash[23367]: audit 2026-03-10T10:28:46.645441+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:46 vm07 bash[23367]: audit 2026-03-10T10:28:46.645441+0000 mon.a (mon.0) 3380 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-136","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:28:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:46 vm07 bash[23367]: cluster 2026-03-10T10:28:46.650348+0000 mon.a (mon.0) 3381 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-10T10:28:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:46 vm07 bash[23367]: cluster 2026-03-10T10:28:46.650348+0000 mon.a (mon.0) 3381 : cluster [DBG] osdmap e660: 8 total, 8 up, 8 in 2026-03-10T10:28:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:46 vm07 bash[23367]: audit 2026-03-10T10:28:46.652410+0000 mon.a (mon.0) 3382 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:47.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:46 vm07 bash[23367]: audit 2026-03-10T10:28:46.652410+0000 mon.a (mon.0) 3382 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:47 vm04 bash[28289]: cluster 2026-03-10T10:28:46.551855+0000 mgr.y (mgr.24422) 576 : cluster [DBG] pgmap v1020: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:47 vm04 bash[28289]: cluster 2026-03-10T10:28:46.551855+0000 mgr.y (mgr.24422) 576 : cluster [DBG] pgmap v1020: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s 2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:47 vm04 bash[28289]: audit 2026-03-10T10:28:47.648424+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:47 vm04 bash[28289]: audit 2026-03-10T10:28:47.648424+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:47 vm04 bash[28289]: cluster 2026-03-10T10:28:47.655871+0000 mon.a (mon.0) 3384 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:47 vm04 bash[28289]: cluster 2026-03-10T10:28:47.655871+0000 mon.a (mon.0) 3384 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in 2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:47 vm04 bash[28289]: audit 2026-03-10T10:28:47.658036+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-136"}]: dispatch
2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:47 vm04 bash[20742]: cluster 2026-03-10T10:28:46.551855+0000 mgr.y (mgr.24422) 576 : cluster [DBG] pgmap v1020: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:47 vm04 bash[20742]: audit 2026-03-10T10:28:47.648424+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:47 vm04 bash[20742]: cluster 2026-03-10T10:28:47.655871+0000 mon.a (mon.0) 3384 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in
2026-03-10T10:28:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:47 vm04 bash[20742]: audit 2026-03-10T10:28:47.658036+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-136"}]: dispatch
2026-03-10T10:28:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:47 vm07 bash[23367]: cluster 2026-03-10T10:28:46.551855+0000 mgr.y (mgr.24422) 576 : cluster [DBG] pgmap v1020: 268 pgs: 32 unknown, 236 active+clean; 455 KiB data, 990 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.2 KiB/s wr, 2 op/s
2026-03-10T10:28:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:47 vm07 bash[23367]: audit 2026-03-10T10:28:47.648424+0000 mon.a (mon.0) 3383 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:47 vm07 bash[23367]: cluster 2026-03-10T10:28:47.655871+0000 mon.a (mon.0) 3384 : cluster [DBG] osdmap e661: 8 total, 8 up, 8 in
2026-03-10T10:28:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:47 vm07 bash[23367]: audit 2026-03-10T10:28:47.658036+0000 mon.a (mon.0) 3385 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-136"}]: dispatch
2026-03-10T10:28:49.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:28:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:28:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:49 vm04 bash[28289]: cluster 2026-03-10T10:28:48.552594+0000 mgr.y (mgr.24422) 577 : cluster [DBG] pgmap v1023: 268 pgs: 12 unknown, 256 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:49 vm04 bash[28289]: audit 2026-03-10T10:28:48.651509+0000 mon.a (mon.0) 3386 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-136"}]': finished
2026-03-10T10:28:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:49 vm04 bash[28289]: cluster 2026-03-10T10:28:48.655293+0000 mon.a (mon.0) 3387 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in
2026-03-10T10:28:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:49 vm04 bash[28289]: audit 2026-03-10T10:28:48.657657+0000 mon.a (mon.0) 3388 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-136", "mode": "writeback"}]: dispatch
2026-03-10T10:28:49.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:49 vm04 bash[28289]: audit 2026-03-10T10:28:48.834850+0000 mgr.y (mgr.24422) 578 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:49 vm04 bash[20742]: cluster 2026-03-10T10:28:48.552594+0000 mgr.y (mgr.24422) 577 : cluster [DBG] pgmap v1023: 268 pgs: 12 unknown, 256 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:49 vm04 bash[20742]: audit 2026-03-10T10:28:48.651509+0000 mon.a (mon.0) 3386 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-136"}]': finished
2026-03-10T10:28:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:49 vm04 bash[20742]: cluster 2026-03-10T10:28:48.655293+0000 mon.a (mon.0) 3387 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in
2026-03-10T10:28:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:49 vm04 bash[20742]: audit 2026-03-10T10:28:48.657657+0000 mon.a (mon.0) 3388 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-136", "mode": "writeback"}]: dispatch
2026-03-10T10:28:49.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:49 vm04 bash[20742]: audit 2026-03-10T10:28:48.834850+0000 mgr.y (mgr.24422) 578 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:49 vm07 bash[23367]: cluster 2026-03-10T10:28:48.552594+0000 mgr.y (mgr.24422) 577 : cluster [DBG] pgmap v1023: 268 pgs: 12 unknown, 256 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail
2026-03-10T10:28:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:49 vm07 bash[23367]: audit 2026-03-10T10:28:48.651509+0000 mon.a (mon.0) 3386 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-136"}]': finished
2026-03-10T10:28:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:49 vm07 bash[23367]: cluster 2026-03-10T10:28:48.655293+0000 mon.a (mon.0) 3387 : cluster [DBG] osdmap e662: 8 total, 8 up, 8 in
2026-03-10T10:28:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:49 vm07 bash[23367]: audit 2026-03-10T10:28:48.657657+0000 mon.a (mon.0) 3388 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-136", "mode": "writeback"}]: dispatch
2026-03-10T10:28:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:49 vm07 bash[23367]: audit 2026-03-10T10:28:48.834850+0000 mgr.y (mgr.24422) 578 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:28:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:50 vm04 bash[28289]: cluster 2026-03-10T10:28:49.651616+0000 mon.a (mon.0) 3389 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:28:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:50 vm04 bash[28289]: audit 2026-03-10T10:28:49.654618+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-136", "mode": "writeback"}]': finished
2026-03-10T10:28:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:50 vm04 bash[28289]: cluster 2026-03-10T10:28:49.660072+0000 mon.a (mon.0) 3391 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in
2026-03-10T10:28:50.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:50 vm04 bash[28289]: audit 2026-03-10T10:28:49.730557+0000 mon.a (mon.0) 3392 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:50.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:50 vm04 bash[20742]: cluster 2026-03-10T10:28:49.651616+0000 mon.a (mon.0) 3389 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:28:50.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:50 vm04 bash[20742]: audit 2026-03-10T10:28:49.654618+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-136", "mode": "writeback"}]': finished
2026-03-10T10:28:50.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:50 vm04 bash[20742]: cluster 2026-03-10T10:28:49.660072+0000 mon.a (mon.0) 3391 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in
2026-03-10T10:28:50.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:50 vm04 bash[20742]: audit 2026-03-10T10:28:49.730557+0000 mon.a (mon.0) 3392 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:50 vm07 bash[23367]: cluster 2026-03-10T10:28:49.651616+0000 mon.a (mon.0) 3389 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:28:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:50 vm07 bash[23367]: audit 2026-03-10T10:28:49.654618+0000 mon.a (mon.0) 3390 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-136", "mode": "writeback"}]': finished
2026-03-10T10:28:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:50 vm07 bash[23367]: cluster 2026-03-10T10:28:49.660072+0000 mon.a (mon.0) 3391 : cluster [DBG] osdmap e663: 8 total, 8 up, 8 in
2026-03-10T10:28:51.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:50 vm07 bash[23367]: audit 2026-03-10T10:28:49.730557+0000 mon.a (mon.0) 3392 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:28:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:51 vm04 bash[28289]: cluster 2026-03-10T10:28:50.552903+0000 mgr.y (mgr.24422) 579 : cluster [DBG] pgmap v1026: 268 pgs: 268 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:28:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:51 vm04 bash[28289]: audit 2026-03-10T10:28:50.679632+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:51 vm04 bash[28289]: cluster 2026-03-10T10:28:50.683978+0000 mon.a (mon.0) 3394 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in
2026-03-10T10:28:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:51 vm04 bash[28289]: audit 2026-03-10T10:28:50.684170+0000 mon.a (mon.0) 3395 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136"}]: dispatch
2026-03-10T10:28:51.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:51 vm04 bash[28289]: cluster 2026-03-10T10:28:51.642808+0000 mon.a (mon.0) 3396 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:51 vm04 bash[20742]: cluster 2026-03-10T10:28:50.552903+0000 mgr.y (mgr.24422) 579 : cluster [DBG] pgmap v1026: 268 pgs: 268 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:28:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:51 vm04 bash[20742]: audit 2026-03-10T10:28:50.679632+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:51 vm04 bash[20742]: cluster 2026-03-10T10:28:50.683978+0000 mon.a (mon.0) 3394 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in
2026-03-10T10:28:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:51 vm04 bash[20742]: audit 2026-03-10T10:28:50.684170+0000 mon.a (mon.0) 3395 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136"}]: dispatch
2026-03-10T10:28:51.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:51 vm04 bash[20742]: cluster 2026-03-10T10:28:51.642808+0000 mon.a (mon.0) 3396 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:51 vm07 bash[23367]: cluster 2026-03-10T10:28:50.552903+0000 mgr.y (mgr.24422) 579 : cluster [DBG] pgmap v1026: 268 pgs: 268 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:28:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:51 vm07 bash[23367]: audit 2026-03-10T10:28:50.679632+0000 mon.a (mon.0) 3393 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:28:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:51 vm07 bash[23367]: cluster 2026-03-10T10:28:50.683978+0000 mon.a (mon.0) 3394 : cluster [DBG] osdmap e664: 8 total, 8 up, 8 in
2026-03-10T10:28:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:51 vm07 bash[23367]: audit 2026-03-10T10:28:50.684170+0000 mon.a (mon.0) 3395 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136"}]: dispatch
2026-03-10T10:28:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:51 vm07 bash[23367]: cluster 2026-03-10T10:28:51.642808+0000 mon.a (mon.0) 3396 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:52.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:52 vm04 bash[28289]: cluster 2026-03-10T10:28:51.680648+0000 mon.a (mon.0) 3397 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:28:52.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:52 vm04 bash[28289]: audit 2026-03-10T10:28:51.690404+0000 mon.a (mon.0) 3398 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136"}]': finished
2026-03-10T10:28:52.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:52 vm04 bash[28289]: cluster 2026-03-10T10:28:51.692663+0000 mon.a (mon.0) 3399 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in
2026-03-10T10:28:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:52 vm04 bash[20742]: cluster 2026-03-10T10:28:51.680648+0000 mon.a (mon.0) 3397 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:28:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:52 vm04 bash[20742]: audit 2026-03-10T10:28:51.690404+0000 mon.a (mon.0) 3398 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136"}]': finished
2026-03-10T10:28:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:52 vm04 bash[20742]: cluster 2026-03-10T10:28:51.692663+0000 mon.a (mon.0) 3399 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in
2026-03-10T10:28:53.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:52 vm07 bash[23367]: cluster 2026-03-10T10:28:51.680648+0000 mon.a (mon.0) 3397 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets)
2026-03-10T10:28:53.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:52 vm07 bash[23367]: audit 2026-03-10T10:28:51.690404+0000 mon.a (mon.0) 3398 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-136"}]': finished
2026-03-10T10:28:53.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:52 vm07 bash[23367]: cluster 2026-03-10T10:28:51.692663+0000 mon.a (mon.0) 3399 : cluster [DBG] osdmap e665: 8 total, 8 up, 8 in
2026-03-10T10:28:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:28:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:28:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:28:53.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:53 vm04 bash[28289]: cluster 2026-03-10T10:28:52.553246+0000 mgr.y (mgr.24422) 580 : cluster [DBG] pgmap v1029: 268 pgs: 268 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:28:53.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:53 vm04 bash[28289]: cluster 2026-03-10T10:28:52.709965+0000 mon.a (mon.0) 3400 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in
2026-03-10T10:28:53.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:53 vm04 bash[20742]: cluster 2026-03-10T10:28:52.553246+0000 mgr.y (mgr.24422) 580 : cluster [DBG] pgmap v1029: 268 pgs: 268 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:28:53.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:53 vm04 bash[20742]: cluster 2026-03-10T10:28:52.709965+0000 mon.a (mon.0) 3400 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in
2026-03-10T10:28:54.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:53 vm07 bash[23367]: cluster 2026-03-10T10:28:52.553246+0000 mgr.y (mgr.24422) 580 : cluster [DBG] pgmap v1029: 268 pgs: 268 active+clean; 455 KiB data, 991 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:28:54.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:53 vm07 bash[23367]: cluster 2026-03-10T10:28:52.709965+0000 mon.a (mon.0) 3400 : cluster [DBG] osdmap e666: 8 total, 8 up, 8 in
2026-03-10T10:28:55.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:54 vm07 bash[23367]: cluster 2026-03-10T10:28:53.718291+0000 mon.a (mon.0) 3401 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in
2026-03-10T10:28:55.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:54 vm07 bash[23367]: audit 2026-03-10T10:28:53.733333+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-138","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:28:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:54 vm04 bash[28289]: cluster 2026-03-10T10:28:53.718291+0000 mon.a (mon.0) 3401 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in
2026-03-10T10:28:55.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:54 vm04 bash[28289]: audit 2026-03-10T10:28:53.733333+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-138","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:28:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:54 vm04 bash[20742]: cluster 2026-03-10T10:28:53.718291+0000 mon.a (mon.0) 3401 : cluster [DBG] osdmap e667: 8 total, 8 up, 8 in
2026-03-10T10:28:55.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:54 vm04 bash[20742]: audit 2026-03-10T10:28:53.733333+0000 mon.a (mon.0) 3402 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-138","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:28:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:55 vm07 bash[23367]: cluster 2026-03-10T10:28:54.553953+0000 mgr.y (mgr.24422) 581 : cluster [DBG] pgmap v1032: 268 pgs: 14 creating+peering, 18 unknown, 236 active+clean; 4.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T10:28:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:55 vm07 bash[23367]: audit 2026-03-10T10:28:54.722285+0000 mon.a (mon.0) 3403 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-138","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:28:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:55 vm07 bash[23367]: cluster 2026-03-10T10:28:54.726896+0000 mon.a (mon.0) 3404 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in
2026-03-10T10:28:56.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:55 vm07 bash[23367]: audit 2026-03-10T10:28:54.727515+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:28:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:55 vm04 bash[28289]: cluster 2026-03-10T10:28:54.553953+0000 mgr.y (mgr.24422) 581 : cluster [DBG] pgmap v1032: 268 pgs: 14 creating+peering, 18 unknown, 236 active+clean; 4.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T10:28:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:55 vm04 bash[28289]: audit 2026-03-10T10:28:54.722285+0000 mon.a (mon.0) 3403 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-138","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:28:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:55 vm04 bash[28289]: cluster 2026-03-10T10:28:54.726896+0000 mon.a (mon.0) 3404 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in
2026-03-10T10:28:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:55 vm04 bash[28289]: audit 2026-03-10T10:28:54.727515+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:28:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:55 vm04 bash[20742]: cluster 2026-03-10T10:28:54.553953+0000 mgr.y (mgr.24422) 581 : cluster [DBG] pgmap v1032: 268 pgs: 14 creating+peering, 18 unknown, 236 active+clean; 4.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 977 KiB/s wr, 1 op/s
2026-03-10T10:28:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:55 vm04 bash[20742]: audit 2026-03-10T10:28:54.722285+0000 mon.a (mon.0) 3403 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-138","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:28:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:55 vm04 bash[20742]: cluster 2026-03-10T10:28:54.726896+0000 mon.a (mon.0) 3404 : cluster [DBG] osdmap e668: 8 total, 8 up, 8 in
2026-03-10T10:28:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:55 vm04 bash[20742]: audit 2026-03-10T10:28:54.727515+0000 mon.a (mon.0) 3405 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:28:57.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:56 vm07 bash[23367]: audit 2026-03-10T10:28:55.726844+0000 mon.a (mon.0) 3406 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:57.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:56 vm07 bash[23367]: cluster 2026-03-10T10:28:55.729762+0000 mon.a (mon.0) 3407 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in
2026-03-10T10:28:57.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:56 vm07 bash[23367]: audit 2026-03-10T10:28:55.731393+0000 mon.a (mon.0) 3408 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T10:28:57.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:56 vm07 bash[23367]: cluster 2026-03-10T10:28:56.643340+0000 mon.a (mon.0) 3409 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:57.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:56 vm07 bash[23367]: audit 2026-03-10T10:28:56.730173+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_count","val": "2"}]': finished
2026-03-10T10:28:57.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:56 vm07 bash[23367]: cluster 2026-03-10T10:28:56.737742+0000 mon.a (mon.0) 3411 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in
2026-03-10T10:28:57.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:56 vm07 bash[23367]: audit 2026-03-10T10:28:56.738694+0000 mon.a (mon.0) 3412 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:56 vm04 bash[28289]: audit 2026-03-10T10:28:55.726844+0000 mon.a (mon.0) 3406 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:56 vm04 bash[28289]: cluster 2026-03-10T10:28:55.729762+0000 mon.a (mon.0) 3407 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:56 vm04 bash[28289]: audit 2026-03-10T10:28:55.731393+0000 mon.a (mon.0) 3408 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:56 vm04 bash[28289]: cluster 2026-03-10T10:28:56.643340+0000 mon.a (mon.0) 3409 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:56 vm04 bash[28289]: audit 2026-03-10T10:28:56.730173+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_count","val": "2"}]': finished
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:56 vm04 bash[28289]: cluster 2026-03-10T10:28:56.737742+0000 mon.a (mon.0) 3411 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:56 vm04 bash[28289]: audit 2026-03-10T10:28:56.738694+0000 mon.a (mon.0) 3412 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:56 vm04 bash[20742]: audit 2026-03-10T10:28:55.726844+0000 mon.a (mon.0) 3406 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:56 vm04 bash[20742]: cluster 2026-03-10T10:28:55.729762+0000 mon.a (mon.0) 3407 : cluster [DBG] osdmap e669: 8 total, 8 up, 8 in
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:56 vm04 bash[20742]: audit 2026-03-10T10:28:55.731393+0000 mon.a (mon.0) 3408 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_count","val": "2"}]: dispatch
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:56 vm04 bash[20742]: cluster 2026-03-10T10:28:56.643340+0000 mon.a (mon.0) 3409 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:56 vm04 bash[20742]: audit 2026-03-10T10:28:56.730173+0000 mon.a (mon.0) 3410 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_count","val": "2"}]': finished
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:56 vm04 bash[20742]: cluster 2026-03-10T10:28:56.737742+0000 mon.a (mon.0) 3411 : cluster [DBG] osdmap e670: 8 total, 8 up, 8 in
2026-03-10T10:28:57.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:56 vm04 bash[20742]: audit 2026-03-10T10:28:56.738694+0000 mon.a (mon.0) 3412 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_period","val": "600"}]: dispatch
2026-03-10T10:28:58.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:57 vm07 bash[23367]: cluster 2026-03-10T10:28:56.554420+0000 mgr.y (mgr.24422) 582 : cluster [DBG] pgmap v1035: 268 pgs: 14 creating+peering, 18 unknown, 236 active+clean; 4.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 976 KiB/s wr, 1 op/s
2026-03-10T10:28:58.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:57 vm07 bash[23367]: audit 2026-03-10T10:28:57.733614+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_period","val": "600"}]': finished
2026-03-10T10:28:58.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:57 vm07 bash[23367]: cluster 2026-03-10T10:28:57.739290+0000 mon.a (mon.0) 3414 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in
2026-03-10T10:28:58.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:57 vm07 bash[23367]: audit 2026-03-10T10:28:57.739970+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T10:28:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:57 vm04 bash[28289]: cluster 2026-03-10T10:28:56.554420+0000 mgr.y (mgr.24422) 582 : cluster [DBG] pgmap v1035: 268 pgs: 14 creating+peering, 18 unknown, 236 active+clean; 4.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 976 KiB/s wr, 1 op/s
2026-03-10T10:28:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:57 vm04 bash[28289]: audit 2026-03-10T10:28:57.733614+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_period","val": "600"}]': finished
2026-03-10T10:28:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:57 vm04 bash[28289]: cluster 2026-03-10T10:28:57.739290+0000 mon.a (mon.0) 3414 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in
2026-03-10T10:28:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:57 vm04 bash[28289]: audit 2026-03-10T10:28:57.739970+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_type","val": "explicit_object"}]: dispatch
2026-03-10T10:28:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:57 vm04 bash[20742]: cluster 2026-03-10T10:28:56.554420+0000 mgr.y (mgr.24422) 582 : cluster [DBG] pgmap v1035: 268 pgs: 14 creating+peering, 18 unknown, 236 active+clean; 4.3 MiB data, 997 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 976 KiB/s wr, 1 op/s
2026-03-10T10:28:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:57 vm04 bash[20742]: audit 2026-03-10T10:28:57.733614+0000 mon.a (mon.0) 3413 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:28:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:57 vm04 bash[20742]: cluster 2026-03-10T10:28:57.739290+0000 mon.a (mon.0) 3414 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-10T10:28:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:57 vm04 bash[20742]: cluster 2026-03-10T10:28:57.739290+0000 mon.a (mon.0) 3414 : cluster [DBG] osdmap e671: 8 total, 8 up, 8 in 2026-03-10T10:28:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:57 vm04 bash[20742]: audit 2026-03-10T10:28:57.739970+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:28:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:57 vm04 bash[20742]: audit 2026-03-10T10:28:57.739970+0000 mon.a (mon.0) 3415 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_type","val": "explicit_object"}]: dispatch 2026-03-10T10:28:59.167 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:28:58 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:59 vm04 bash[28289]: audit 2026-03-10T10:28:58.161593+0000 mon.a (mon.0) 3416 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:59 vm04 bash[28289]: audit 2026-03-10T10:28:58.161593+0000 mon.a (mon.0) 3416 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:59 vm04 bash[28289]: audit 2026-03-10T10:28:58.162812+0000 mon.a (mon.0) 3417 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:59 vm04 bash[28289]: audit 2026-03-10T10:28:58.162812+0000 mon.a (mon.0) 3417 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:59 vm04 bash[28289]: audit 2026-03-10T10:28:58.736996+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:59 vm04 bash[28289]: audit 2026-03-10T10:28:58.736996+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:59 vm04 bash[28289]: cluster 2026-03-10T10:28:58.740317+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:28:59 vm04 bash[28289]: cluster 2026-03-10T10:28:58.740317+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:59 vm04 bash[20742]: audit 2026-03-10T10:28:58.161593+0000 mon.a (mon.0) 3416 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:59 vm04 bash[20742]: audit 2026-03-10T10:28:58.161593+0000 mon.a (mon.0) 3416 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:59 vm04 bash[20742]: audit 2026-03-10T10:28:58.162812+0000 mon.a (mon.0) 3417 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:59 vm04 bash[20742]: audit 2026-03-10T10:28:58.162812+0000 mon.a (mon.0) 3417 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:59 vm04 bash[20742]: audit 2026-03-10T10:28:58.736996+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:59 vm04 bash[20742]: audit 2026-03-10T10:28:58.736996+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:59 vm04 bash[20742]: cluster 2026-03-10T10:28:58.740317+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T10:28:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:28:59 vm04 bash[20742]: cluster 2026-03-10T10:28:58.740317+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T10:28:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:59 vm07 bash[23367]: audit 2026-03-10T10:28:58.161593+0000 mon.a (mon.0) 3416 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:59 vm07 bash[23367]: audit 2026-03-10T10:28:58.161593+0000 mon.a (mon.0) 3416 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:28:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:59 vm07 bash[23367]: audit 2026-03-10T10:28:58.162812+0000 mon.a (mon.0) 3417 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:59 vm07 bash[23367]: audit 2026-03-10T10:28:58.162812+0000 mon.a (mon.0) 3417 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:28:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:59 vm07 bash[23367]: audit 2026-03-10T10:28:58.736996+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:28:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:59 vm07 bash[23367]: audit 2026-03-10T10:28:58.736996+0000 mon.a (mon.0) 3418 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-138","var": "hit_set_type","val": "explicit_object"}]': finished 2026-03-10T10:28:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:59 vm07 bash[23367]: cluster 2026-03-10T10:28:58.740317+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T10:28:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:28:59 vm07 bash[23367]: cluster 2026-03-10T10:28:58.740317+0000 mon.a (mon.0) 3419 : cluster [DBG] osdmap e672: 8 total, 8 up, 8 in 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:00 vm04 bash[28289]: cluster 2026-03-10T10:28:58.554957+0000 mgr.y (mgr.24422) 583 : cluster [DBG] pgmap v1038: 268 pgs: 14 creating+peering, 4 unknown, 250 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:00 vm04 bash[28289]: cluster 2026-03-10T10:28:58.554957+0000 mgr.y (mgr.24422) 583 : cluster [DBG] pgmap v1038: 268 pgs: 14 creating+peering, 4 unknown, 250 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:00 vm04 bash[28289]: audit 2026-03-10T10:28:58.845515+0000 mgr.y (mgr.24422) 584 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:00 vm04 bash[28289]: audit 2026-03-10T10:28:58.845515+0000 mgr.y (mgr.24422) 584 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:00 vm04 bash[28289]: audit 2026-03-10T10:28:59.759675+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:00 vm04 bash[28289]: audit 2026-03-10T10:28:59.759675+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:00 vm04 bash[28289]: audit 2026-03-10T10:28:59.759943+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:00 vm04 bash[28289]: audit 2026-03-10T10:28:59.759943+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:00 vm04 bash[20742]: cluster 2026-03-10T10:28:58.554957+0000 mgr.y (mgr.24422) 583 : cluster [DBG] pgmap v1038: 268 pgs: 14 creating+peering, 4 unknown, 250 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:00 vm04 bash[20742]: cluster 2026-03-10T10:28:58.554957+0000 mgr.y (mgr.24422) 583 : cluster [DBG] pgmap v1038: 268 pgs: 14 creating+peering, 4 unknown, 250 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:00 vm04 bash[20742]: audit 2026-03-10T10:28:58.845515+0000 mgr.y (mgr.24422) 584 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:00 vm04 bash[20742]: audit 2026-03-10T10:28:58.845515+0000 mgr.y (mgr.24422) 584 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:00 vm04 bash[20742]: audit 2026-03-10T10:28:59.759675+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:00 vm04 bash[20742]: audit 2026-03-10T10:28:59.759675+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:00 vm04 bash[20742]: audit 2026-03-10T10:28:59.759943+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]: dispatch 2026-03-10T10:29:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:00 vm04 bash[20742]: audit 2026-03-10T10:28:59.759943+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]: dispatch 2026-03-10T10:29:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:00 vm07 bash[23367]: cluster 2026-03-10T10:28:58.554957+0000 mgr.y (mgr.24422) 583 : cluster [DBG] pgmap v1038: 268 pgs: 14 creating+peering, 4 unknown, 250 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:00 vm07 bash[23367]: cluster 2026-03-10T10:28:58.554957+0000 mgr.y (mgr.24422) 583 : cluster [DBG] pgmap v1038: 268 pgs: 14 creating+peering, 4 unknown, 250 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:00 vm07 bash[23367]: audit 2026-03-10T10:28:58.845515+0000 mgr.y (mgr.24422) 584 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:00 vm07 bash[23367]: audit 2026-03-10T10:28:58.845515+0000 mgr.y (mgr.24422) 584 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:00 vm07 bash[23367]: audit 2026-03-10T10:28:59.759675+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:00 vm07 bash[23367]: audit 2026-03-10T10:28:59.759675+0000 mon.a (mon.0) 3420 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:00 vm07 bash[23367]: audit 2026-03-10T10:28:59.759943+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]: dispatch 2026-03-10T10:29:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:00 vm07 bash[23367]: audit 2026-03-10T10:28:59.759943+0000 mon.a (mon.0) 3421 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]: dispatch 2026-03-10T10:29:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:01 vm04 bash[28289]: audit 2026-03-10T10:29:00.178506+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]': finished 2026-03-10T10:29:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:01 vm04 bash[28289]: audit 2026-03-10T10:29:00.178506+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]': finished 2026-03-10T10:29:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:01 vm04 bash[28289]: cluster 2026-03-10T10:29:00.182682+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T10:29:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:01 vm04 bash[28289]: cluster 2026-03-10T10:29:00.182682+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T10:29:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:01 vm04 bash[20742]: audit 2026-03-10T10:29:00.178506+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]': finished 2026-03-10T10:29:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:01 vm04 bash[20742]: audit 2026-03-10T10:29:00.178506+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]': finished 2026-03-10T10:29:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:01 vm04 bash[20742]: cluster 2026-03-10T10:29:00.182682+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T10:29:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:01 vm04 bash[20742]: cluster 2026-03-10T10:29:00.182682+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T10:29:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:01 vm07 bash[23367]: audit 2026-03-10T10:29:00.178506+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]': finished 2026-03-10T10:29:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:01 vm07 bash[23367]: audit 2026-03-10T10:29:00.178506+0000 mon.a (mon.0) 3422 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-138"}]': finished 2026-03-10T10:29:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:01 vm07 bash[23367]: cluster 2026-03-10T10:29:00.182682+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T10:29:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:01 vm07 bash[23367]: cluster 2026-03-10T10:29:00.182682+0000 mon.a (mon.0) 3423 : cluster [DBG] osdmap e673: 8 total, 8 up, 8 in 2026-03-10T10:29:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:02 vm04 bash[28289]: cluster 2026-03-10T10:29:00.555306+0000 mgr.y (mgr.24422) 585 : cluster [DBG] pgmap v1041: 268 pgs: 268 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:02 vm04 bash[28289]: cluster 2026-03-10T10:29:00.555306+0000 mgr.y (mgr.24422) 585 : cluster [DBG] pgmap v1041: 268 pgs: 268 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:02 vm04 bash[28289]: cluster 2026-03-10T10:29:01.198132+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T10:29:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:02 vm04 bash[28289]: cluster 2026-03-10T10:29:01.198132+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T10:29:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:02 vm04 bash[20742]: cluster 2026-03-10T10:29:00.555306+0000 mgr.y (mgr.24422) 585 : cluster [DBG] pgmap v1041: 268 pgs: 268 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:02 vm04 bash[20742]: cluster 2026-03-10T10:29:00.555306+0000 mgr.y (mgr.24422) 585 : cluster [DBG] pgmap v1041: 268 pgs: 268 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:02 vm04 bash[20742]: cluster 2026-03-10T10:29:01.198132+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T10:29:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:02 vm04 bash[20742]: cluster 2026-03-10T10:29:01.198132+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T10:29:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:02 vm07 bash[23367]: cluster 2026-03-10T10:29:00.555306+0000 mgr.y (mgr.24422) 585 : cluster [DBG] pgmap v1041: 268 pgs: 268 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:02 vm07 bash[23367]: cluster 2026-03-10T10:29:00.555306+0000 mgr.y (mgr.24422) 585 : cluster [DBG] pgmap v1041: 268 pgs: 268 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:02 vm07 bash[23367]: cluster 2026-03-10T10:29:01.198132+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e674: 8 total, 8 up, 8 in 2026-03-10T10:29:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:02 vm07 bash[23367]: cluster 2026-03-10T10:29:01.198132+0000 mon.a (mon.0) 3424 : cluster [DBG] osdmap e674: 8 total, 8 up, 
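The audit trail above (mon log entries 3406-3422) is the cache-tier hit-set portion of the rados API tests driving this run: pool test-rados-api-vm04-59491-138 is attached as a tier of test-rados-api-vm04-59491-111, its hit-set parameters are configured, and the tier is torn back down. Rendered as ceph CLI calls this would look roughly like the sketch below; the CLI form is an assumption for readability, since the test issues these as librados mon commands rather than through the shell:

    # sketch: CLI equivalents of the audited mon commands (pool names are from this run)
    ceph osd tier add test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-138 --force-nonempty
    ceph osd pool set test-rados-api-vm04-59491-138 hit_set_count 2
    ceph osd pool set test-rados-api-vm04-59491-138 hit_set_period 600
    ceph osd pool set test-rados-api-vm04-59491-138 hit_set_type explicit_object
    ceph osd tier remove-overlay test-rados-api-vm04-59491-111
    ceph osd tier remove test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-138

Each dispatch/finished audit pair corresponds to one such command, and each osdmap-changing command is followed by a new epoch (e669 through e673 above).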
2026-03-10T10:29:03.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:03 vm04 bash[28289]: cluster 2026-03-10T10:29:02.215160+0000 mon.a (mon.0) 3425 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in
2026-03-10T10:29:03.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:03 vm04 bash[28289]: audit 2026-03-10T10:29:02.217610+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:29:03.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:03 vm04 bash[20742]: cluster 2026-03-10T10:29:02.215160+0000 mon.a (mon.0) 3425 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in
2026-03-10T10:29:03.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:03 vm04 bash[20742]: audit 2026-03-10T10:29:02.217610+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:29:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:29:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:29:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:29:03.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:03 vm07 bash[23367]: cluster 2026-03-10T10:29:02.215160+0000 mon.a (mon.0) 3425 : cluster [DBG] osdmap e675: 8 total, 8 up, 8 in
2026-03-10T10:29:03.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:03 vm07 bash[23367]: audit 2026-03-10T10:29:02.217610+0000 mon.a (mon.0) 3426 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-140","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:29:04.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:04 vm07 bash[23367]: cluster 2026-03-10T10:29:02.555786+0000 mgr.y (mgr.24422) 586 : cluster [DBG] pgmap v1044: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:04.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:04 vm07 bash[23367]: audit 2026-03-10T10:29:03.202318+0000 mon.a (mon.0) 3427 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-140","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:29:04.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:04 vm07 bash[23367]: cluster 2026-03-10T10:29:03.206561+0000 mon.a (mon.0) 3428 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in
2026-03-10T10:29:04.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:04 vm07 bash[23367]: audit 2026-03-10T10:29:03.216601+0000 mon.a (mon.0) 3429 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:29:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:04 vm04 bash[28289]: cluster 2026-03-10T10:29:02.555786+0000 mgr.y (mgr.24422) 586 : cluster [DBG] pgmap v1044: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:04 vm04 bash[28289]: audit 2026-03-10T10:29:03.202318+0000 mon.a (mon.0) 3427 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-140","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:29:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:04 vm04 bash[28289]: cluster 2026-03-10T10:29:03.206561+0000 mon.a (mon.0) 3428 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in
2026-03-10T10:29:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:04 vm04 bash[28289]: audit 2026-03-10T10:29:03.216601+0000 mon.a (mon.0) 3429 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:29:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:04 vm04 bash[20742]: cluster 2026-03-10T10:29:02.555786+0000 mgr.y (mgr.24422) 586 : cluster [DBG] pgmap v1044: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 998 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:04 vm04 bash[20742]: audit 2026-03-10T10:29:03.202318+0000 mon.a (mon.0) 3427 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-140","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:29:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:04 vm04 bash[20742]: cluster 2026-03-10T10:29:03.206561+0000 mon.a (mon.0) 3428 : cluster [DBG] osdmap e676: 8 total, 8 up, 8 in
2026-03-10T10:29:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:04 vm04 bash[20742]: audit 2026-03-10T10:29:03.216601+0000 mon.a (mon.0) 3429 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:29:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:05 vm04 bash[28289]: audit 2026-03-10T10:29:04.240007+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:29:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:05 vm04 bash[20742]: audit 2026-03-10T10:29:04.240007+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:29:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:05 vm04 bash[20742]: cluster 2026-03-10T10:29:04.244129+0000 mon.a (mon.0) 3431 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in
2026-03-10T10:29:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:05 vm04 bash[20742]: audit 2026-03-10T10:29:04.245431+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T10:29:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:05 vm07 bash[23367]: audit 2026-03-10T10:29:04.240007+0000 mon.a (mon.0) 3430 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:29:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:05 vm07 bash[23367]: cluster 2026-03-10T10:29:04.244129+0000 mon.a (mon.0) 3431 : cluster [DBG] osdmap e677: 8 total, 8 up, 8 in
2026-03-10T10:29:05.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:05 vm07 bash[23367]: audit 2026-03-10T10:29:04.245431+0000 mon.a (mon.0) 3432 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_count","val": "3"}]: dispatch
2026-03-10T10:29:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:06 vm04 bash[28289]: cluster 2026-03-10T10:29:04.556220+0000 mgr.y (mgr.24422) 587 : cluster [DBG] pgmap v1047: 268 pgs: 268 active+clean; 4.3 MiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:06 vm04 bash[28289]: cluster 2026-03-10T10:29:05.249760+0000 mon.a (mon.0) 3433 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:29:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:06 vm04 bash[28289]: audit 2026-03-10T10:29:05.298949+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_count","val": "3"}]': finished
2026-03-10T10:29:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:06 vm04 bash[28289]: cluster 2026-03-10T10:29:05.302772+0000 mon.a (mon.0) 3435 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in
2026-03-10T10:29:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:06 vm04 bash[28289]: audit 2026-03-10T10:29:05.304311+0000 mon.a (mon.0) 3436 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T10:29:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:06 vm04 bash[20742]: cluster 2026-03-10T10:29:04.556220+0000 mgr.y (mgr.24422) 587 : cluster [DBG] pgmap v1047: 268 pgs: 268 active+clean; 4.3 MiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:06 vm04 bash[20742]: cluster 2026-03-10T10:29:05.249760+0000 mon.a (mon.0) 3433 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:29:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:06 vm04 bash[20742]: audit 2026-03-10T10:29:05.298949+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_count","val": "3"}]': finished
2026-03-10T10:29:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:06 vm04 bash[20742]: cluster 2026-03-10T10:29:05.302772+0000 mon.a (mon.0) 3435 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in
2026-03-10T10:29:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:06 vm04 bash[20742]: audit 2026-03-10T10:29:05.304311+0000 mon.a (mon.0) 3436 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T10:29:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:06 vm07 bash[23367]: cluster 2026-03-10T10:29:04.556220+0000 mgr.y (mgr.24422) 587 : cluster [DBG] pgmap v1047: 268 pgs: 268 active+clean; 4.3 MiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:06 vm07 bash[23367]: cluster 2026-03-10T10:29:05.249760+0000 mon.a (mon.0) 3433 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:29:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:06 vm07 bash[23367]: audit 2026-03-10T10:29:05.298949+0000 mon.a (mon.0) 3434 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_count","val": "3"}]': finished
2026-03-10T10:29:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:06 vm07 bash[23367]: cluster 2026-03-10T10:29:05.302772+0000 mon.a (mon.0) 3435 : cluster [DBG] osdmap e678: 8 total, 8 up, 8 in
2026-03-10T10:29:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:06 vm07 bash[23367]: audit 2026-03-10T10:29:05.304311+0000 mon.a (mon.0) 3436 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_period","val": "3"}]: dispatch
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:07 vm04 bash[28289]: audit 2026-03-10T10:29:06.409399+0000 mon.a (mon.0) 3437 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_period","val": "3"}]': finished
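The same pattern then repeats for the next test pool: audit entries 3426-3442 enable the rados application on pool test-rados-api-vm04-59491-140, attach it as a tier of test-rados-api-vm04-59491-111, and configure a bloom-filter hit set. A rough CLI rendering follows; again this is an assumed shell equivalent of the librados mon commands, and the last two settings are dispatched in the entries just below:

    # sketch: CLI equivalents of audit entries 3426-3442 (pool names are from this run)
    ceph osd pool application enable test-rados-api-vm04-59491-140 rados --yes-i-really-mean-it
    ceph osd tier add test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-140 --force-nonempty
    ceph osd pool set test-rados-api-vm04-59491-140 hit_set_count 3
    ceph osd pool set test-rados-api-vm04-59491-140 hit_set_period 3
    # bloom-filter hit set with a 1% target false-positive probability (entries 3439/3442)
    ceph osd pool set test-rados-api-vm04-59491-140 hit_set_type bloom
    ceph osd pool set test-rados-api-vm04-59491-140 hit_set_fpp .01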
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:07 vm04 bash[28289]: cluster 2026-03-10T10:29:06.413007+0000 mon.a (mon.0) 3438 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:07 vm04 bash[28289]: audit 2026-03-10T10:29:06.414234+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:07 vm04 bash[28289]: cluster 2026-03-10T10:29:06.556562+0000 mgr.y (mgr.24422) 588 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:07 vm04 bash[28289]: audit 2026-03-10T10:29:07.412991+0000 mon.a (mon.0) 3440 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:07 vm04 bash[28289]: cluster 2026-03-10T10:29:07.416490+0000 mon.a (mon.0) 3441 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:07 vm04 bash[28289]: audit 2026-03-10T10:29:07.416917+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:07 vm04 bash[20742]: audit 2026-03-10T10:29:06.409399+0000 mon.a (mon.0) 3437 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_period","val": "3"}]': finished
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:07 vm04 bash[20742]: cluster 2026-03-10T10:29:06.413007+0000 mon.a (mon.0) 3438 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:07 vm04 bash[20742]: audit 2026-03-10T10:29:06.414234+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:07 vm04 bash[20742]: cluster 2026-03-10T10:29:06.556562+0000 mgr.y (mgr.24422) 588 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:07 vm04 bash[20742]: audit 2026-03-10T10:29:07.412991+0000 mon.a (mon.0) 3440 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_type","val": "bloom"}]': finished
2026-03-10T10:29:07.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:07 vm04 bash[20742]: cluster 2026-03-10T10:29:07.416490+0000 mon.a (mon.0) 3441 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in
2026-03-10T10:29:07.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:07 vm04 bash[20742]: audit 2026-03-10T10:29:07.416917+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_fpp","val": ".01"}]: dispatch
2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: audit 2026-03-10T10:29:06.409399+0000 mon.a (mon.0) 3437 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_period","val": "3"}]': finished
2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: cluster 2026-03-10T10:29:06.413007+0000 mon.a (mon.0) 3438 : cluster [DBG] osdmap e679: 8 total, 8 up, 8 in
2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: audit 2026-03-10T10:29:06.414234+0000 mon.a (mon.0) 3439 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_type","val": "bloom"}]: dispatch
2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: audit 2026-03-10T10:29:06.414234+0000 mon.a (mon.0) 3439 : audit [INF] from='client.?
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: cluster 2026-03-10T10:29:06.556562+0000 mgr.y (mgr.24422) 588 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: cluster 2026-03-10T10:29:06.556562+0000 mgr.y (mgr.24422) 588 : cluster [DBG] pgmap v1050: 268 pgs: 268 active+clean; 4.3 MiB data, 1003 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: audit 2026-03-10T10:29:07.412991+0000 mon.a (mon.0) 3440 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: audit 2026-03-10T10:29:07.412991+0000 mon.a (mon.0) 3440 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: cluster 2026-03-10T10:29:07.416490+0000 mon.a (mon.0) 3441 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: cluster 2026-03-10T10:29:07.416490+0000 mon.a (mon.0) 3441 : cluster [DBG] osdmap e680: 8 total, 8 up, 8 in 2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: audit 2026-03-10T10:29:07.416917+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T10:29:07.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:07 vm07 bash[23367]: audit 2026-03-10T10:29:07.416917+0000 mon.a (mon.0) 3442 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_fpp","val": ".01"}]: dispatch 2026-03-10T10:29:09.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:29:08 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:09 vm04 bash[28289]: audit 2026-03-10T10:29:08.444211+0000 mon.a (mon.0) 3443 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:09 vm04 bash[28289]: audit 2026-03-10T10:29:08.444211+0000 mon.a (mon.0) 3443 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:09 vm04 bash[28289]: cluster 2026-03-10T10:29:08.449011+0000 mon.a (mon.0) 3444 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:09 vm04 bash[28289]: cluster 2026-03-10T10:29:08.449011+0000 mon.a (mon.0) 3444 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:09 vm04 bash[28289]: cluster 2026-03-10T10:29:08.557063+0000 mgr.y (mgr.24422) 589 : cluster [DBG] pgmap v1053: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:09 vm04 bash[28289]: cluster 2026-03-10T10:29:08.557063+0000 mgr.y (mgr.24422) 589 : cluster [DBG] pgmap v1053: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:09 vm04 bash[28289]: audit 2026-03-10T10:29:08.856249+0000 mgr.y (mgr.24422) 590 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:09 vm04 bash[28289]: audit 2026-03-10T10:29:08.856249+0000 mgr.y (mgr.24422) 590 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:09 vm04 bash[20742]: audit 2026-03-10T10:29:08.444211+0000 mon.a (mon.0) 3443 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:09 vm04 bash[20742]: audit 2026-03-10T10:29:08.444211+0000 mon.a (mon.0) 3443 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:09 vm04 bash[20742]: cluster 2026-03-10T10:29:08.449011+0000 mon.a (mon.0) 3444 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:09 vm04 bash[20742]: cluster 2026-03-10T10:29:08.449011+0000 mon.a (mon.0) 3444 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:09 vm04 bash[20742]: cluster 2026-03-10T10:29:08.557063+0000 mgr.y (mgr.24422) 589 : cluster [DBG] pgmap v1053: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:09 vm04 bash[20742]: cluster 2026-03-10T10:29:08.557063+0000 mgr.y (mgr.24422) 589 : cluster [DBG] pgmap v1053: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:09 vm04 bash[20742]: audit 2026-03-10T10:29:08.856249+0000 mgr.y (mgr.24422) 590 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:09.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:09 vm04 bash[20742]: audit 2026-03-10T10:29:08.856249+0000 mgr.y (mgr.24422) 590 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:09 vm07 bash[23367]: audit 2026-03-10T10:29:08.444211+0000 mon.a (mon.0) 3443 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:29:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:09 vm07 bash[23367]: audit 2026-03-10T10:29:08.444211+0000 mon.a (mon.0) 3443 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-140","var": "hit_set_fpp","val": ".01"}]': finished 2026-03-10T10:29:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:09 vm07 bash[23367]: cluster 2026-03-10T10:29:08.449011+0000 mon.a (mon.0) 3444 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T10:29:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:09 vm07 bash[23367]: cluster 2026-03-10T10:29:08.449011+0000 mon.a (mon.0) 3444 : cluster [DBG] osdmap e681: 8 total, 8 up, 8 in 2026-03-10T10:29:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:09 vm07 bash[23367]: cluster 2026-03-10T10:29:08.557063+0000 mgr.y (mgr.24422) 589 : cluster [DBG] pgmap v1053: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:09 vm07 bash[23367]: cluster 2026-03-10T10:29:08.557063+0000 mgr.y (mgr.24422) 589 : cluster [DBG] pgmap v1053: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:09 vm07 bash[23367]: audit 2026-03-10T10:29:08.856249+0000 mgr.y (mgr.24422) 590 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:09.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:09 vm07 bash[23367]: audit 2026-03-10T10:29:08.856249+0000 mgr.y (mgr.24422) 590 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:11.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:11 vm04 bash[28289]: cluster 2026-03-10T10:29:10.557634+0000 mgr.y (mgr.24422) 591 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 973 B/s rd, 0 op/s 2026-03-10T10:29:11.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:11 vm04 bash[28289]: cluster 2026-03-10T10:29:10.557634+0000 mgr.y (mgr.24422) 591 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 973 B/s rd, 0 op/s 2026-03-10T10:29:11.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:11 vm04 bash[20742]: cluster 2026-03-10T10:29:10.557634+0000 mgr.y (mgr.24422) 591 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 973 B/s rd, 0 op/s 2026-03-10T10:29:11.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:11 vm04 bash[20742]: cluster 2026-03-10T10:29:10.557634+0000 mgr.y (mgr.24422) 591 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 973 B/s rd, 0 op/s 2026-03-10T10:29:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:11 vm07 bash[23367]: cluster 2026-03-10T10:29:10.557634+0000 mgr.y (mgr.24422) 591 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 973 B/s rd, 0 op/s 2026-03-10T10:29:12.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:11 vm07 bash[23367]: cluster 2026-03-10T10:29:10.557634+0000 mgr.y (mgr.24422) 591 : cluster [DBG] pgmap v1054: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 973 B/s rd, 0 op/s 2026-03-10T10:29:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:29:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - 
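The audit pairs above (mon.a entries 3434 through 3443, each command logged once at dispatch and once at finished) show the RADOS API test client configuring HitSet tracking on pool test-rados-api-vm04-59491-140 one parameter at a time. As a sketch only, assuming the standard ceph(8) CLI form of these mon commands (the test issues them as JSON over librados, so nothing below is literally what ran):

  # Illustrative CLI equivalent of audit entries 3434-3443 (sketch, not the literal test commands)
  ceph osd pool set test-rados-api-vm04-59491-140 hit_set_count 3
  ceph osd pool set test-rados-api-vm04-59491-140 hit_set_period 3
  ceph osd pool set test-rados-api-vm04-59491-140 hit_set_type bloom
  ceph osd pool set test-rados-api-vm04-59491-140 hit_set_fpp .01

Each setting that changes the pool bumps the osdmap epoch, which is why e678 through e681 tick by in the cluster [DBG] lines between the audit pairs.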
2026-03-10T10:29:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:13 vm04 bash[28289]: cluster 2026-03-10T10:29:12.557983+0000 mgr.y (mgr.24422) 592 : cluster [DBG] pgmap v1055: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 832 B/s rd, 0 op/s
2026-03-10T10:29:13.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:13 vm04 bash[28289]: audit 2026-03-10T10:29:13.169068+0000 mon.a (mon.0) 3445 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:29:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:13 vm04 bash[20742]: cluster 2026-03-10T10:29:12.557983+0000 mgr.y (mgr.24422) 592 : cluster [DBG] pgmap v1055: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 832 B/s rd, 0 op/s
2026-03-10T10:29:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:13 vm04 bash[20742]: audit 2026-03-10T10:29:13.169068+0000 mon.a (mon.0) 3445 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:29:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:13 vm07 bash[23367]: cluster 2026-03-10T10:29:12.557983+0000 mgr.y (mgr.24422) 592 : cluster [DBG] pgmap v1055: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 832 B/s rd, 0 op/s
2026-03-10T10:29:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:13 vm07 bash[23367]: audit 2026-03-10T10:29:13.169068+0000 mon.a (mon.0) 3445 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:29:15.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:15 vm04 bash[28289]: cluster 2026-03-10T10:29:14.558889+0000 mgr.y (mgr.24422) 593 : cluster [DBG] pgmap v1056: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.1 KiB/s wr, 1 op/s
2026-03-10T10:29:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:15 vm04 bash[20742]: cluster 2026-03-10T10:29:14.558889+0000 mgr.y (mgr.24422) 593 : cluster [DBG] pgmap v1056: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.1 KiB/s wr, 1 op/s
2026-03-10T10:29:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:15 vm07 bash[23367]: cluster 2026-03-10T10:29:14.558889+0000 mgr.y (mgr.24422) 593 : cluster [DBG] pgmap v1056: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 5.1 KiB/s wr, 1 op/s
2026-03-10T10:29:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:17 vm04 bash[28289]: cluster 2026-03-10T10:29:16.559344+0000 mgr.y (mgr.24422) 594 : cluster [DBG] pgmap v1057: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 4.5 KiB/s wr, 1 op/s
2026-03-10T10:29:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:17 vm04 bash[20742]: cluster 2026-03-10T10:29:16.559344+0000 mgr.y (mgr.24422) 594 : cluster [DBG] pgmap v1057: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 4.5 KiB/s wr, 1 op/s
2026-03-10T10:29:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:17 vm07 bash[23367]: cluster 2026-03-10T10:29:16.559344+0000 mgr.y (mgr.24422) 594 : cluster [DBG] pgmap v1057: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 4.5 KiB/s wr, 1 op/s
2026-03-10T10:29:19.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:29:18 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:29:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:19 vm04 bash[28289]: cluster 2026-03-10T10:29:18.559928+0000 mgr.y (mgr.24422) 595 : cluster [DBG] pgmap v1058: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1012 B/s rd, 8.2 KiB/s wr, 2 op/s
2026-03-10T10:29:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:19 vm04 bash[28289]: audit 2026-03-10T10:29:18.866900+0000 mgr.y (mgr.24422) 596 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:29:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:19 vm04 bash[20742]: cluster 2026-03-10T10:29:18.559928+0000 mgr.y (mgr.24422) 595 : cluster [DBG] pgmap v1058: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1012 B/s rd, 8.2 KiB/s wr, 2 op/s
2026-03-10T10:29:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:19 vm04 bash[20742]: audit 2026-03-10T10:29:18.866900+0000 mgr.y (mgr.24422) 596 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:29:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:19 vm07 bash[23367]: cluster 2026-03-10T10:29:18.559928+0000 mgr.y (mgr.24422) 595 : cluster [DBG] pgmap v1058: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1012 B/s rd, 8.2 KiB/s wr, 2 op/s
2026-03-10T10:29:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:19 vm07 bash[23367]: audit 2026-03-10T10:29:18.866900+0000 mgr.y (mgr.24422) 596 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
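The [DBG] audit entries from mgr.y ("osd blocklist ls") and from client.iscsi.iscsi.a ("service status") that recur roughly every ten seconds here are background polling by the mgr and the iSCSI gateway, not test traffic. For reference, an illustrative CLI form of the same queries (a sketch; these daemons issue them internally as mon/mgr commands):

  # Background polling seen in the audit log (illustrative CLI form)
  ceph osd blocklist ls --format json
  ceph service status --format json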
2026-03-10T10:29:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:20 vm04 bash[28289]: audit 2026-03-10T10:29:20.495799+0000 mon.a (mon.0) 3446 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:29:20.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:20 vm04 bash[28289]: audit 2026-03-10T10:29:20.496017+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140"}]: dispatch
2026-03-10T10:29:20.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:20 vm04 bash[20742]: audit 2026-03-10T10:29:20.495799+0000 mon.a (mon.0) 3446 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:29:20.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:20 vm04 bash[20742]: audit 2026-03-10T10:29:20.496017+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140"}]: dispatch
2026-03-10T10:29:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:20 vm07 bash[23367]: audit 2026-03-10T10:29:20.495799+0000 mon.a (mon.0) 3446 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:29:21.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:20 vm07 bash[23367]: audit 2026-03-10T10:29:20.496017+0000 mon.a (mon.0) 3447 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140"}]: dispatch
2026-03-10T10:29:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:21 vm04 bash[28289]: cluster 2026-03-10T10:29:20.560546+0000 mgr.y (mgr.24422) 597 : cluster [DBG] pgmap v1059: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.9 KiB/s wr, 2 op/s
2026-03-10T10:29:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:21 vm04 bash[28289]: audit 2026-03-10T10:29:20.650558+0000 mon.a (mon.0) 3448 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140"}]': finished
2026-03-10T10:29:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:21 vm04 bash[28289]: cluster 2026-03-10T10:29:20.653751+0000 mon.a (mon.0) 3449 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in
2026-03-10T10:29:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:21 vm04 bash[20742]: cluster 2026-03-10T10:29:20.560546+0000 mgr.y (mgr.24422) 597 : cluster [DBG] pgmap v1059: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.9 KiB/s wr, 2 op/s
2026-03-10T10:29:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:21 vm04 bash[20742]: audit 2026-03-10T10:29:20.650558+0000 mon.a (mon.0) 3448 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140"}]': finished
2026-03-10T10:29:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:21 vm04 bash[20742]: cluster 2026-03-10T10:29:20.653751+0000 mon.a (mon.0) 3449 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in
2026-03-10T10:29:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:21 vm07 bash[23367]: cluster 2026-03-10T10:29:20.560546+0000 mgr.y (mgr.24422) 597 : cluster [DBG] pgmap v1059: 268 pgs: 268 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 6.9 KiB/s wr, 2 op/s
2026-03-10T10:29:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:21 vm07 bash[23367]: audit 2026-03-10T10:29:20.650558+0000 mon.a (mon.0) 3448 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-140"}]': finished
2026-03-10T10:29:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:21 vm07 bash[23367]: cluster 2026-03-10T10:29:20.653751+0000 mon.a (mon.0) 3449 : cluster [DBG] osdmap e682: 8 total, 8 up, 8 in
2026-03-10T10:29:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:22 vm04 bash[28289]: cluster 2026-03-10T10:29:21.680163+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in
2026-03-10T10:29:22.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:22 vm04 bash[20742]: cluster 2026-03-10T10:29:21.680163+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in
2026-03-10T10:29:23.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:22 vm07 bash[23367]: cluster 2026-03-10T10:29:21.680163+0000 mon.a (mon.0) 3450 : cluster [DBG] osdmap e683: 8 total, 8 up, 8 in
2026-03-10T10:29:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:29:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:29:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:29:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:23 vm07 bash[23367]: cluster 2026-03-10T10:29:22.560933+0000 mgr.y (mgr.24422) 598 : cluster [DBG] pgmap v1062: 236 pgs: 236 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:29:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:23 vm07 bash[23367]: cluster 2026-03-10T10:29:22.686458+0000 mon.a (mon.0) 3451 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in
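Audit entries 3446-3448 record a cache tier being detached from base pool test-rados-api-vm04-59491-111: the overlay is dropped first, then the tier relationship itself is removed. A minimal sketch of the equivalent CLI, using the pool names from the records above (illustrative form; the test drives these as mon commands):

  # Detach the cache tier: the overlay must go before the tier can be removed
  ceph osd tier remove-overlay test-rados-api-vm04-59491-111
  ceph osd tier remove test-rados-api-vm04-59491-111 test-rados-api-vm04-59491-140

The ordering matters: while an overlay is set, client I/O for the base pool is still routed through the cache pool, and the mon refuses to remove a tier that is an active overlay.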
2026-03-10T10:29:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:23 vm07 bash[23367]: audit 2026-03-10T10:29:22.689837+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-142","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:29:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:23 vm04 bash[28289]: cluster 2026-03-10T10:29:22.560933+0000 mgr.y (mgr.24422) 598 : cluster [DBG] pgmap v1062: 236 pgs: 236 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:29:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:23 vm04 bash[28289]: cluster 2026-03-10T10:29:22.686458+0000 mon.a (mon.0) 3451 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in
2026-03-10T10:29:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:23 vm04 bash[28289]: audit 2026-03-10T10:29:22.689837+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-142","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:29:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:23 vm04 bash[20742]: cluster 2026-03-10T10:29:22.560933+0000 mgr.y (mgr.24422) 598 : cluster [DBG] pgmap v1062: 236 pgs: 236 active+clean; 4.3 MiB data, 1012 MiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:29:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:23 vm04 bash[20742]: cluster 2026-03-10T10:29:22.686458+0000 mon.a (mon.0) 3451 : cluster [DBG] osdmap e684: 8 total, 8 up, 8 in
2026-03-10T10:29:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:23 vm04 bash[20742]: audit 2026-03-10T10:29:22.689837+0000 mon.a (mon.0) 3452 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-142","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:29:25.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:24 vm07 bash[23367]: audit 2026-03-10T10:29:23.688252+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-142","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:29:25.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:24 vm07 bash[23367]: cluster 2026-03-10T10:29:23.691720+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in
2026-03-10T10:29:25.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:24 vm07 bash[23367]: audit 2026-03-10T10:29:23.782223+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:29:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:24 vm04 bash[28289]: audit 2026-03-10T10:29:23.688252+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-142","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:29:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:24 vm04 bash[28289]: cluster 2026-03-10T10:29:23.691720+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in
2026-03-10T10:29:25.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:24 vm04 bash[28289]: audit 2026-03-10T10:29:23.782223+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:29:25.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:24 vm04 bash[20742]: audit 2026-03-10T10:29:23.688252+0000 mon.a (mon.0) 3453 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-142","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:29:25.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:24 vm04 bash[20742]: cluster 2026-03-10T10:29:23.691720+0000 mon.a (mon.0) 3454 : cluster [DBG] osdmap e685: 8 total, 8 up, 8 in
2026-03-10T10:29:25.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:24 vm04 bash[20742]: audit 2026-03-10T10:29:23.782223+0000 mon.a (mon.0) 3455 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142", "force_nonempty": "--force-nonempty" }]: dispatch
2026-03-10T10:29:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:25 vm07 bash[23367]: cluster 2026-03-10T10:29:24.561584+0000 mgr.y (mgr.24422) 599 : cluster [DBG] pgmap v1065: 268 pgs: 3 creating+activating, 16 creating+peering, 249 active+clean; 4.3 MiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:25 vm07 bash[23367]: audit 2026-03-10T10:29:24.713744+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:29:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:25 vm07 bash[23367]: cluster 2026-03-10T10:29:24.726681+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in
2026-03-10T10:29:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:25 vm07 bash[23367]: audit 2026-03-10T10:29:24.727388+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-142"}]: dispatch
2026-03-10T10:29:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:25 vm04 bash[28289]: cluster 2026-03-10T10:29:24.561584+0000 mgr.y (mgr.24422) 599 : cluster [DBG] pgmap v1065: 268 pgs: 3 creating+activating, 16 creating+peering, 249 active+clean; 4.3 MiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:25 vm04 bash[28289]: audit 2026-03-10T10:29:24.713744+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:29:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:25 vm04 bash[28289]: cluster 2026-03-10T10:29:24.726681+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in
2026-03-10T10:29:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:25 vm04 bash[28289]: audit 2026-03-10T10:29:24.727388+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-142"}]: dispatch
2026-03-10T10:29:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:25 vm04 bash[20742]: cluster 2026-03-10T10:29:24.561584+0000 mgr.y (mgr.24422) 599 : cluster [DBG] pgmap v1065: 268 pgs: 3 creating+activating, 16 creating+peering, 249 active+clean; 4.3 MiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:29:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:25 vm04 bash[20742]: audit 2026-03-10T10:29:24.713744+0000 mon.a (mon.0) 3456 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142", "force_nonempty": "--force-nonempty" }]': finished
2026-03-10T10:29:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:25 vm04 bash[20742]: cluster 2026-03-10T10:29:24.726681+0000 mon.a (mon.0) 3457 : cluster [DBG] osdmap e686: 8 total, 8 up, 8 in
2026-03-10T10:29:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:25 vm04 bash[20742]: audit 2026-03-10T10:29:24.727388+0000 mon.a (mon.0) 3458 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-142"}]: dispatch
2026-03-10T10:29:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:26 vm07 bash[23367]: audit 2026-03-10T10:29:25.717009+0000 mon.a (mon.0) 3459 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-142"}]': finished
2026-03-10T10:29:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:26 vm07 bash[23367]: cluster 2026-03-10T10:29:25.720116+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in
2026-03-10T10:29:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:26 vm07 bash[23367]: audit 2026-03-10T10:29:25.720962+0000 mon.a (mon.0) 3461 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-142", "mode": "writeback"}]: dispatch
2026-03-10T10:29:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:26 vm07 bash[23367]: cluster 2026-03-10T10:29:26.717101+0000 mon.a (mon.0) 3462 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
2026-03-10T10:29:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:26 vm07 bash[23367]: audit 2026-03-10T10:29:26.721072+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-142", "mode": "writeback"}]': finished
2026-03-10T10:29:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:26 vm07 bash[23367]: cluster 2026-03-10T10:29:26.725218+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in
2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: audit 2026-03-10T10:29:25.717009+0000 mon.a (mon.0) 3459 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-142"}]': finished
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-142"}]': finished 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: cluster 2026-03-10T10:29:25.720116+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: cluster 2026-03-10T10:29:25.720116+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: audit 2026-03-10T10:29:25.720962+0000 mon.a (mon.0) 3461 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-142", "mode": "writeback"}]: dispatch 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: audit 2026-03-10T10:29:25.720962+0000 mon.a (mon.0) 3461 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-142", "mode": "writeback"}]: dispatch 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: cluster 2026-03-10T10:29:26.717101+0000 mon.a (mon.0) 3462 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: cluster 2026-03-10T10:29:26.717101+0000 mon.a (mon.0) 3462 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: audit 2026-03-10T10:29:26.721072+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-142", "mode": "writeback"}]': finished 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: audit 2026-03-10T10:29:26.721072+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-142", "mode": "writeback"}]': finished 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: cluster 2026-03-10T10:29:26.725218+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:26 vm04 bash[28289]: cluster 2026-03-10T10:29:26.725218+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: audit 2026-03-10T10:29:25.717009+0000 mon.a (mon.0) 3459 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-142"}]': finished 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: audit 2026-03-10T10:29:25.717009+0000 mon.a (mon.0) 3459 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-142"}]': finished 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: cluster 2026-03-10T10:29:25.720116+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: cluster 2026-03-10T10:29:25.720116+0000 mon.a (mon.0) 3460 : cluster [DBG] osdmap e687: 8 total, 8 up, 8 in 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: audit 2026-03-10T10:29:25.720962+0000 mon.a (mon.0) 3461 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-142", "mode": "writeback"}]: dispatch 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: audit 2026-03-10T10:29:25.720962+0000 mon.a (mon.0) 3461 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-142", "mode": "writeback"}]: dispatch 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: cluster 2026-03-10T10:29:26.717101+0000 mon.a (mon.0) 3462 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: cluster 2026-03-10T10:29:26.717101+0000 mon.a (mon.0) 3462 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: audit 2026-03-10T10:29:26.721072+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-142", "mode": "writeback"}]': finished 2026-03-10T10:29:27.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: audit 2026-03-10T10:29:26.721072+0000 mon.a (mon.0) 3463 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-142", "mode": "writeback"}]': finished 2026-03-10T10:29:27.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: cluster 2026-03-10T10:29:26.725218+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-10T10:29:27.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:26 vm04 bash[20742]: cluster 2026-03-10T10:29:26.725218+0000 mon.a (mon.0) 3464 : cluster [DBG] osdmap e688: 8 total, 8 up, 8 in 2026-03-10T10:29:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:27 vm07 bash[23367]: cluster 2026-03-10T10:29:26.561891+0000 mgr.y (mgr.24422) 600 : cluster [DBG] pgmap v1068: 268 pgs: 3 creating+activating, 16 creating+peering, 249 active+clean; 4.3 MiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:27 vm07 bash[23367]: cluster 2026-03-10T10:29:26.561891+0000 mgr.y (mgr.24422) 600 : cluster [DBG] pgmap v1068: 268 pgs: 3 creating+activating, 16 creating+peering, 249 active+clean; 4.3 MiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:27 vm07 bash[23367]: audit 2026-03-10T10:29:26.726651+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:29:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:27 vm07 bash[23367]: audit 2026-03-10T10:29:26.726651+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:29:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:27 vm07 bash[23367]: audit 2026-03-10T10:29:27.723968+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:29:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:27 vm07 bash[23367]: audit 2026-03-10T10:29:27.723968+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:29:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:27 vm07 bash[23367]: cluster 2026-03-10T10:29:27.727606+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T10:29:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:27 vm07 bash[23367]: cluster 2026-03-10T10:29:27.727606+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:27 vm04 bash[28289]: cluster 2026-03-10T10:29:26.561891+0000 mgr.y (mgr.24422) 600 : cluster [DBG] pgmap v1068: 268 pgs: 3 creating+activating, 16 creating+peering, 249 active+clean; 4.3 MiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:27 vm04 bash[28289]: cluster 2026-03-10T10:29:26.561891+0000 mgr.y (mgr.24422) 600 : cluster [DBG] pgmap v1068: 268 pgs: 3 creating+activating, 16 creating+peering, 249 active+clean; 4.3 MiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:27 vm04 bash[28289]: audit 2026-03-10T10:29:26.726651+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:27 vm04 bash[28289]: audit 2026-03-10T10:29:26.726651+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:27 vm04 bash[28289]: audit 2026-03-10T10:29:27.723968+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:27 vm04 bash[28289]: audit 2026-03-10T10:29:27.723968+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:27 vm04 bash[28289]: cluster 2026-03-10T10:29:27.727606+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:27 vm04 bash[28289]: cluster 2026-03-10T10:29:27.727606+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:27 vm04 bash[20742]: cluster 2026-03-10T10:29:26.561891+0000 mgr.y (mgr.24422) 600 : cluster [DBG] pgmap v1068: 268 pgs: 3 creating+activating, 16 creating+peering, 249 active+clean; 4.3 MiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:27 vm04 bash[20742]: cluster 2026-03-10T10:29:26.561891+0000 mgr.y (mgr.24422) 600 : cluster [DBG] pgmap v1068: 268 pgs: 3 creating+activating, 16 creating+peering, 249 active+clean; 4.3 MiB data, 1020 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:27 vm04 bash[20742]: audit 2026-03-10T10:29:26.726651+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:27 vm04 bash[20742]: audit 2026-03-10T10:29:26.726651+0000 mon.a (mon.0) 3465 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:27 vm04 bash[20742]: audit 2026-03-10T10:29:27.723968+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:27 vm04 bash[20742]: audit 2026-03-10T10:29:27.723968+0000 mon.a (mon.0) 3466 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:27 vm04 bash[20742]: cluster 2026-03-10T10:29:27.727606+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T10:29:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:27 vm04 bash[20742]: cluster 2026-03-10T10:29:27.727606+0000 mon.a (mon.0) 3467 : cluster [DBG] osdmap e689: 8 total, 8 up, 8 in 2026-03-10T10:29:29.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:29:28 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: audit 2026-03-10T10:29:27.728674+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: audit 2026-03-10T10:29:27.728674+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: audit 2026-03-10T10:29:28.179859+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: audit 2026-03-10T10:29:28.179859+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: audit 2026-03-10T10:29:28.181066+0000 mon.a (mon.0) 3470 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: audit 2026-03-10T10:29:28.181066+0000 mon.a (mon.0) 3470 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: audit 2026-03-10T10:29:28.727338+0000 mon.a (mon.0) 3471 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: audit 2026-03-10T10:29:28.727338+0000 mon.a (mon.0) 3471 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: cluster 2026-03-10T10:29:28.734340+0000 mon.a (mon.0) 3472 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: cluster 2026-03-10T10:29:28.734340+0000 mon.a (mon.0) 3472 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: audit 2026-03-10T10:29:28.734991+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:29:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:28 vm07 bash[23367]: audit 2026-03-10T10:29:28.734991+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: audit 2026-03-10T10:29:27.728674+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: audit 2026-03-10T10:29:27.728674+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: audit 2026-03-10T10:29:28.179859+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: audit 2026-03-10T10:29:28.179859+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: audit 2026-03-10T10:29:28.181066+0000 mon.a (mon.0) 3470 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: audit 2026-03-10T10:29:28.181066+0000 mon.a (mon.0) 3470 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: audit 2026-03-10T10:29:28.727338+0000 mon.a (mon.0) 3471 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: audit 2026-03-10T10:29:28.727338+0000 mon.a (mon.0) 3471 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: cluster 2026-03-10T10:29:28.734340+0000 mon.a (mon.0) 3472 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: cluster 2026-03-10T10:29:28.734340+0000 mon.a (mon.0) 3472 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: audit 2026-03-10T10:29:28.734991+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:28 vm04 bash[28289]: audit 2026-03-10T10:29:28.734991+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: audit 2026-03-10T10:29:27.728674+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: audit 2026-03-10T10:29:27.728674+0000 mon.a (mon.0) 3468 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: audit 2026-03-10T10:29:28.179859+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: audit 2026-03-10T10:29:28.179859+0000 mon.a (mon.0) 3469 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: audit 2026-03-10T10:29:28.181066+0000 mon.a (mon.0) 3470 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: audit 2026-03-10T10:29:28.181066+0000 mon.a (mon.0) 3470 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: audit 2026-03-10T10:29:28.727338+0000 mon.a (mon.0) 3471 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: audit 2026-03-10T10:29:28.727338+0000 mon.a (mon.0) 3471 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: cluster 2026-03-10T10:29:28.734340+0000 mon.a (mon.0) 3472 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: cluster 2026-03-10T10:29:28.734340+0000 mon.a (mon.0) 3472 : cluster [DBG] osdmap e690: 8 total, 8 up, 8 in 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: audit 2026-03-10T10:29:28.734991+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:29:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:28 vm04 bash[20742]: audit 2026-03-10T10:29:28.734991+0000 mon.a (mon.0) 3473 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: cluster 2026-03-10T10:29:28.562503+0000 mgr.y (mgr.24422) 601 : cluster [DBG] pgmap v1071: 268 pgs: 3 creating+activating, 3 creating+peering, 262 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s wr, 3 op/s 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: cluster 2026-03-10T10:29:28.562503+0000 mgr.y (mgr.24422) 601 : cluster [DBG] pgmap v1071: 268 pgs: 3 creating+activating, 3 creating+peering, 262 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s wr, 3 op/s 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: audit 2026-03-10T10:29:28.871214+0000 mgr.y (mgr.24422) 602 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: audit 2026-03-10T10:29:28.871214+0000 mgr.y (mgr.24422) 602 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: cluster 2026-03-10T10:29:29.727631+0000 mon.a (mon.0) 3474 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: cluster 2026-03-10T10:29:29.727631+0000 mon.a (mon.0) 3474 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: audit 2026-03-10T10:29:29.743128+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: audit 2026-03-10T10:29:29.743128+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: cluster 2026-03-10T10:29:29.750713+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: cluster 2026-03-10T10:29:29.750713+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: audit 2026-03-10T10:29:29.752193+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:30 vm04 bash[28289]: audit 2026-03-10T10:29:29.752193+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: cluster 2026-03-10T10:29:28.562503+0000 mgr.y (mgr.24422) 601 : cluster [DBG] pgmap v1071: 268 pgs: 3 creating+activating, 3 creating+peering, 262 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s wr, 3 op/s 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: cluster 2026-03-10T10:29:28.562503+0000 mgr.y (mgr.24422) 601 : cluster [DBG] pgmap v1071: 268 pgs: 3 creating+activating, 3 creating+peering, 262 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s wr, 3 op/s 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: audit 2026-03-10T10:29:28.871214+0000 mgr.y (mgr.24422) 602 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: audit 2026-03-10T10:29:28.871214+0000 mgr.y (mgr.24422) 602 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: cluster 2026-03-10T10:29:29.727631+0000 mon.a (mon.0) 3474 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: cluster 2026-03-10T10:29:29.727631+0000 mon.a (mon.0) 3474 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: audit 
2026-03-10T10:29:29.743128+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: audit 2026-03-10T10:29:29.743128+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: cluster 2026-03-10T10:29:29.750713+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: cluster 2026-03-10T10:29:29.750713+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T10:29:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: audit 2026-03-10T10:29:29.752193+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:29:30.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:30 vm04 bash[20742]: audit 2026-03-10T10:29:29.752193+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: cluster 2026-03-10T10:29:28.562503+0000 mgr.y (mgr.24422) 601 : cluster [DBG] pgmap v1071: 268 pgs: 3 creating+activating, 3 creating+peering, 262 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s wr, 3 op/s 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: cluster 2026-03-10T10:29:28.562503+0000 mgr.y (mgr.24422) 601 : cluster [DBG] pgmap v1071: 268 pgs: 3 creating+activating, 3 creating+peering, 262 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 3.7 KiB/s wr, 3 op/s 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: audit 2026-03-10T10:29:28.871214+0000 mgr.y (mgr.24422) 602 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: audit 2026-03-10T10:29:28.871214+0000 mgr.y (mgr.24422) 602 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: cluster 2026-03-10T10:29:29.727631+0000 mon.a (mon.0) 3474 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: cluster 2026-03-10T10:29:29.727631+0000 mon.a (mon.0) 3474 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:30.516 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: audit 2026-03-10T10:29:29.743128+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: audit 2026-03-10T10:29:29.743128+0000 mon.a (mon.0) 3475 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: cluster 2026-03-10T10:29:29.750713+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: cluster 2026-03-10T10:29:29.750713+0000 mon.a (mon.0) 3476 : cluster [DBG] osdmap e691: 8 total, 8 up, 8 in 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: audit 2026-03-10T10:29:29.752193+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:29:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:30 vm07 bash[23367]: audit 2026-03-10T10:29:29.752193+0000 mon.a (mon.0) 3477 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:29:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:31 vm07 bash[23367]: cluster 2026-03-10T10:29:30.562802+0000 mgr.y (mgr.24422) 603 : cluster [DBG] pgmap v1074: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 5 op/s 2026-03-10T10:29:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:31 vm07 bash[23367]: cluster 2026-03-10T10:29:30.562802+0000 mgr.y (mgr.24422) 603 : cluster [DBG] pgmap v1074: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 5 op/s 2026-03-10T10:29:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:31 vm07 bash[23367]: audit 2026-03-10T10:29:30.746606+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:29:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:31 vm07 bash[23367]: audit 2026-03-10T10:29:30.746606+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:29:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:31 vm07 bash[23367]: cluster 2026-03-10T10:29:30.750189+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T10:29:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:31 vm07 bash[23367]: cluster 2026-03-10T10:29:30.750189+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T10:29:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:31 vm07 bash[23367]: audit 2026-03-10T10:29:30.750864+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T10:29:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:31 vm07 bash[23367]: audit 2026-03-10T10:29:30.750864+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:31 vm04 bash[28289]: cluster 2026-03-10T10:29:30.562802+0000 mgr.y (mgr.24422) 603 : cluster [DBG] pgmap v1074: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 5 op/s 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:31 vm04 bash[28289]: cluster 2026-03-10T10:29:30.562802+0000 mgr.y (mgr.24422) 603 : cluster [DBG] pgmap v1074: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 5 op/s 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:31 vm04 bash[28289]: audit 2026-03-10T10:29:30.746606+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:31 vm04 bash[28289]: audit 2026-03-10T10:29:30.746606+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:31 vm04 bash[28289]: cluster 2026-03-10T10:29:30.750189+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:31 vm04 bash[28289]: cluster 2026-03-10T10:29:30.750189+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:31 vm04 bash[28289]: audit 2026-03-10T10:29:30.750864+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:31 vm04 bash[28289]: audit 2026-03-10T10:29:30.750864+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:31 vm04 bash[20742]: cluster 2026-03-10T10:29:30.562802+0000 mgr.y (mgr.24422) 603 : cluster [DBG] pgmap v1074: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 5 op/s 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:31 vm04 bash[20742]: cluster 2026-03-10T10:29:30.562802+0000 mgr.y (mgr.24422) 603 : cluster [DBG] pgmap v1074: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 4.7 KiB/s wr, 5 op/s 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:31 vm04 bash[20742]: audit 2026-03-10T10:29:30.746606+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:31 vm04 bash[20742]: audit 2026-03-10T10:29:30.746606+0000 mon.a (mon.0) 3478 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:31 vm04 bash[20742]: cluster 2026-03-10T10:29:30.750189+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:31 vm04 bash[20742]: cluster 2026-03-10T10:29:30.750189+0000 mon.a (mon.0) 3479 : cluster [DBG] osdmap e692: 8 total, 8 up, 8 in 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:31 vm04 bash[20742]: audit 2026-03-10T10:29:30.750864+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T10:29:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:31 vm04 bash[20742]: audit 2026-03-10T10:29:30.750864+0000 mon.a (mon.0) 3480 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]: dispatch 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:32 vm04 bash[28289]: audit 2026-03-10T10:29:31.749672+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:32 vm04 bash[28289]: audit 2026-03-10T10:29:31.749672+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:32 vm04 bash[28289]: cluster 2026-03-10T10:29:31.752535+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:32 vm04 bash[28289]: cluster 2026-03-10T10:29:31.752535+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:32 vm04 bash[28289]: audit 2026-03-10T10:29:31.754110+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:32 vm04 bash[28289]: audit 2026-03-10T10:29:31.754110+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:32 vm04 bash[28289]: audit 2026-03-10T10:29:32.753178+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:32 vm04 bash[28289]: audit 2026-03-10T10:29:32.753178+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:32 vm04 bash[28289]: cluster 2026-03-10T10:29:32.756286+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:32 vm04 bash[28289]: cluster 2026-03-10T10:29:32.756286+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:32 vm04 bash[20742]: audit 2026-03-10T10:29:31.749672+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:32 vm04 bash[20742]: audit 2026-03-10T10:29:31.749672+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:32 vm04 bash[20742]: cluster 2026-03-10T10:29:31.752535+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:32 vm04 bash[20742]: cluster 2026-03-10T10:29:31.752535+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:32 vm04 bash[20742]: audit 2026-03-10T10:29:31.754110+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:32 vm04 bash[20742]: audit 2026-03-10T10:29:31.754110+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:32 vm04 bash[20742]: audit 2026-03-10T10:29:32.753178+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:32 vm04 bash[20742]: audit 2026-03-10T10:29:32.753178+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:32 vm04 bash[20742]: cluster 2026-03-10T10:29:32.756286+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:32 vm04 bash[20742]: cluster 2026-03-10T10:29:32.756286+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T10:29:33.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:29:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:29:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:29:33.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:32 vm07 bash[23367]: audit 2026-03-10T10:29:31.749672+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T10:29:33.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:32 vm07 bash[23367]: audit 2026-03-10T10:29:31.749672+0000 mon.a (mon.0) 3481 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_grade_decay_rate","val": "20"}]': finished 2026-03-10T10:29:33.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:32 vm07 bash[23367]: cluster 2026-03-10T10:29:31.752535+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T10:29:33.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:32 vm07 bash[23367]: cluster 2026-03-10T10:29:31.752535+0000 mon.a (mon.0) 3482 : cluster [DBG] osdmap e693: 8 total, 8 up, 8 in 2026-03-10T10:29:33.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:32 vm07 bash[23367]: audit 2026-03-10T10:29:31.754110+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T10:29:33.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:32 vm07 bash[23367]: audit 2026-03-10T10:29:31.754110+0000 mon.a (mon.0) 3483 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]: dispatch 2026-03-10T10:29:33.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:32 vm07 bash[23367]: audit 2026-03-10T10:29:32.753178+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T10:29:33.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:32 vm07 bash[23367]: audit 2026-03-10T10:29:32.753178+0000 mon.a (mon.0) 3484 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-142","var": "hit_set_search_last_n","val": "1"}]': finished 2026-03-10T10:29:33.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:32 vm07 bash[23367]: cluster 2026-03-10T10:29:32.756286+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T10:29:33.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:32 vm07 bash[23367]: cluster 2026-03-10T10:29:32.756286+0000 mon.a (mon.0) 3485 : cluster [DBG] osdmap e694: 8 total, 8 up, 8 in 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:33 vm04 bash[28289]: cluster 2026-03-10T10:29:32.563153+0000 mgr.y (mgr.24422) 604 : cluster [DBG] pgmap v1077: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:33 vm04 bash[28289]: cluster 2026-03-10T10:29:32.563153+0000 mgr.y (mgr.24422) 604 : cluster [DBG] pgmap v1077: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:33 vm04 bash[28289]: audit 2026-03-10T10:29:32.800628+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:33 vm04 bash[28289]: audit 2026-03-10T10:29:32.800628+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:33 vm04 bash[28289]: audit 2026-03-10T10:29:33.673543+0000 mon.a (mon.0) 3487 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:33 vm04 bash[28289]: audit 2026-03-10T10:29:33.673543+0000 mon.a (mon.0) 3487 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:33 vm04 bash[20742]: cluster 2026-03-10T10:29:32.563153+0000 mgr.y (mgr.24422) 604 : cluster [DBG] pgmap v1077: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:33 vm04 bash[20742]: cluster 2026-03-10T10:29:32.563153+0000 mgr.y (mgr.24422) 604 : cluster [DBG] pgmap v1077: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:33 vm04 bash[20742]: audit 2026-03-10T10:29:32.800628+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:33 vm04 bash[20742]: audit 2026-03-10T10:29:32.800628+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:33 vm04 bash[20742]: audit 2026-03-10T10:29:33.673543+0000 mon.a (mon.0) 3487 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:29:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:33 vm04 bash[20742]: audit 2026-03-10T10:29:33.673543+0000 mon.a (mon.0) 3487 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:29:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:33 vm07 bash[23367]: cluster 2026-03-10T10:29:32.563153+0000 mgr.y (mgr.24422) 604 : cluster [DBG] pgmap v1077: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:29:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:33 vm07 bash[23367]: cluster 2026-03-10T10:29:32.563153+0000 mgr.y (mgr.24422) 604 : cluster [DBG] pgmap v1077: 268 pgs: 268 active+clean; 4.3 MiB data, 1021 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:29:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:33 vm07 bash[23367]: audit 2026-03-10T10:29:32.800628+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:33 vm07 bash[23367]: audit 2026-03-10T10:29:32.800628+0000 mon.a (mon.0) 3486 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:33 vm07 bash[23367]: audit 2026-03-10T10:29:33.673543+0000 mon.a (mon.0) 3487 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:29:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:33 vm07 bash[23367]: audit 2026-03-10T10:29:33.673543+0000 mon.a (mon.0) 3487 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:33.790744+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:33.790744+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: cluster 2026-03-10T10:29:33.803927+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: cluster 2026-03-10T10:29:33.803927+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:33.804759+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:33.804759+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.008096+0000 mon.a (mon.0) 3491 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.008096+0000 mon.a (mon.0) 3491 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.017846+0000 mon.a (mon.0) 3492 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.017846+0000 mon.a (mon.0) 3492 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.036521+0000 mon.a (mon.0) 3493 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.036521+0000 mon.a (mon.0) 3493 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.044760+0000 mon.a (mon.0) 3494 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.044760+0000 mon.a (mon.0) 3494 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.378928+0000 mon.a (mon.0) 3495 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.378928+0000 mon.a (mon.0) 
3495 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.379500+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.379500+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.408570+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:34 vm04 bash[28289]: audit 2026-03-10T10:29:34.408570+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:33.790744+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:33.790744+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:35.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: cluster 2026-03-10T10:29:33.803927+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: cluster 2026-03-10T10:29:33.803927+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:33.804759+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:33.804759+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.008096+0000 mon.a (mon.0) 3491 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.008096+0000 mon.a (mon.0) 3491 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.017846+0000 mon.a (mon.0) 3492 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.017846+0000 mon.a (mon.0) 3492 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.036521+0000 mon.a (mon.0) 3493 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.036521+0000 mon.a (mon.0) 3493 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.044760+0000 mon.a (mon.0) 3494 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.044760+0000 mon.a (mon.0) 3494 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.378928+0000 mon.a (mon.0) 3495 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.378928+0000 mon.a (mon.0) 3495 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.379500+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.379500+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.408570+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.204 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:34 vm04 bash[20742]: audit 2026-03-10T10:29:34.408570+0000 mon.a (mon.0) 3497 : 
audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:33.790744+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:33.790744+0000 mon.a (mon.0) 3488 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: cluster 2026-03-10T10:29:33.803927+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: cluster 2026-03-10T10:29:33.803927+0000 mon.a (mon.0) 3489 : cluster [DBG] osdmap e695: 8 total, 8 up, 8 in 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:33.804759+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:33.804759+0000 mon.a (mon.0) 3490 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.008096+0000 mon.a (mon.0) 3491 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.008096+0000 mon.a (mon.0) 3491 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.017846+0000 mon.a (mon.0) 3492 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.017846+0000 mon.a (mon.0) 3492 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.036521+0000 mon.a (mon.0) 3493 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.036521+0000 mon.a (mon.0) 3493 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.044760+0000 mon.a (mon.0) 3494 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 
vm07 bash[23367]: audit 2026-03-10T10:29:34.044760+0000 mon.a (mon.0) 3494 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.378928+0000 mon.a (mon.0) 3495 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.378928+0000 mon.a (mon.0) 3495 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.379500+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.379500+0000 mon.a (mon.0) 3496 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.408570+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:34 vm07 bash[23367]: audit 2026-03-10T10:29:34.408570+0000 mon.a (mon.0) 3497 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:35 vm04 bash[28289]: cluster 2026-03-10T10:29:34.563492+0000 mgr.y (mgr.24422) 605 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:35 vm04 bash[28289]: cluster 2026-03-10T10:29:34.563492+0000 mgr.y (mgr.24422) 605 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:35 vm04 bash[28289]: audit 2026-03-10T10:29:34.793528+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]': finished 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:35 vm04 bash[28289]: audit 2026-03-10T10:29:34.793528+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]': finished 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:35 vm04 bash[28289]: cluster 2026-03-10T10:29:34.797693+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:35 vm04 bash[28289]: cluster 2026-03-10T10:29:34.797693+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:35 vm04 bash[28289]: audit 2026-03-10T10:29:34.854704+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:35 vm04 bash[28289]: audit 2026-03-10T10:29:34.854704+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:35 vm04 bash[28289]: audit 2026-03-10T10:29:34.854929+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:35 vm04 bash[28289]: audit 2026-03-10T10:29:34.854929+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:35 vm04 bash[20742]: cluster 2026-03-10T10:29:34.563492+0000 mgr.y (mgr.24422) 605 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:35 vm04 bash[20742]: cluster 2026-03-10T10:29:34.563492+0000 mgr.y (mgr.24422) 605 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:35 vm04 bash[20742]: audit 2026-03-10T10:29:34.793528+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]': finished 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:35 vm04 bash[20742]: audit 2026-03-10T10:29:34.793528+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]': finished 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:35 vm04 bash[20742]: cluster 2026-03-10T10:29:34.797693+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:35 vm04 bash[20742]: cluster 2026-03-10T10:29:34.797693+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:35 vm04 bash[20742]: audit 2026-03-10T10:29:34.854704+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:35 vm04 bash[20742]: audit 2026-03-10T10:29:34.854704+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:35 vm04 bash[20742]: audit 2026-03-10T10:29:34.854929+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:35 vm04 bash[20742]: audit 2026-03-10T10:29:34.854929+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:35 vm07 bash[23367]: cluster 2026-03-10T10:29:34.563492+0000 mgr.y (mgr.24422) 605 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:35 vm07 bash[23367]: cluster 2026-03-10T10:29:34.563492+0000 mgr.y (mgr.24422) 605 : cluster [DBG] pgmap v1080: 268 pgs: 268 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:35 vm07 bash[23367]: audit 2026-03-10T10:29:34.793528+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]': finished 2026-03-10T10:29:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:35 vm07 bash[23367]: audit 2026-03-10T10:29:34.793528+0000 mon.a (mon.0) 3498 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]': finished 2026-03-10T10:29:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:35 vm07 bash[23367]: cluster 2026-03-10T10:29:34.797693+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-10T10:29:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:35 vm07 bash[23367]: cluster 2026-03-10T10:29:34.797693+0000 mon.a (mon.0) 3499 : cluster [DBG] osdmap e696: 8 total, 8 up, 8 in 2026-03-10T10:29:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:35 vm07 bash[23367]: audit 2026-03-10T10:29:34.854704+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:35 vm07 bash[23367]: audit 2026-03-10T10:29:34.854704+0000 mon.a (mon.0) 3500 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:35 vm07 bash[23367]: audit 2026-03-10T10:29:34.854929+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:35 vm07 bash[23367]: audit 2026-03-10T10:29:34.854929+0000 mon.a (mon.0) 3501 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-142"}]: dispatch 2026-03-10T10:29:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:37 vm04 bash[28289]: cluster 2026-03-10T10:29:35.850858+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-10T10:29:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:37 vm04 bash[28289]: cluster 2026-03-10T10:29:35.850858+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-10T10:29:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:37 vm04 bash[20742]: cluster 2026-03-10T10:29:35.850858+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-10T10:29:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:37 vm04 bash[20742]: cluster 2026-03-10T10:29:35.850858+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-10T10:29:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:37 vm07 bash[23367]: cluster 2026-03-10T10:29:35.850858+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-10T10:29:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:37 vm07 bash[23367]: cluster 2026-03-10T10:29:35.850858+0000 mon.a (mon.0) 3502 : cluster [DBG] osdmap e697: 8 total, 8 up, 8 in 2026-03-10T10:29:38.875 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:38 vm07 bash[23367]: cluster 2026-03-10T10:29:36.563868+0000 mgr.y (mgr.24422) 606 : cluster [DBG] pgmap v1083: 236 pgs: 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:38.875 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:38 vm07 bash[23367]: cluster 2026-03-10T10:29:36.563868+0000 mgr.y (mgr.24422) 606 : cluster [DBG] pgmap v1083: 236 pgs: 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:38.875 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:38 vm07 bash[23367]: cluster 2026-03-10T10:29:37.646359+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-10T10:29:38.875 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:38 vm07 bash[23367]: cluster 2026-03-10T10:29:37.646359+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-10T10:29:38.875 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:38 vm07 bash[23367]: audit 2026-03-10T10:29:37.647515+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:38.875 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:38 vm07 bash[23367]: audit 2026-03-10T10:29:37.647515+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:38 vm04 bash[28289]: cluster 2026-03-10T10:29:36.563868+0000 mgr.y (mgr.24422) 606 : cluster [DBG] pgmap v1083: 236 pgs: 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:38 vm04 bash[28289]: cluster 2026-03-10T10:29:36.563868+0000 mgr.y (mgr.24422) 606 : cluster [DBG] pgmap v1083: 236 pgs: 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:38 vm04 bash[28289]: cluster 2026-03-10T10:29:37.646359+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:38 vm04 bash[28289]: cluster 2026-03-10T10:29:37.646359+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:38 vm04 bash[28289]: audit 2026-03-10T10:29:37.647515+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:38 vm04 bash[28289]: audit 2026-03-10T10:29:37.647515+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:38 vm04 bash[20742]: cluster 2026-03-10T10:29:36.563868+0000 mgr.y (mgr.24422) 606 : cluster [DBG] pgmap v1083: 236 pgs: 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:38 vm04 bash[20742]: cluster 2026-03-10T10:29:36.563868+0000 mgr.y (mgr.24422) 606 : cluster [DBG] pgmap v1083: 236 pgs: 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:38 vm04 bash[20742]: cluster 2026-03-10T10:29:37.646359+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:38 vm04 bash[20742]: cluster 2026-03-10T10:29:37.646359+0000 mon.a (mon.0) 3503 : cluster [DBG] osdmap e698: 8 total, 8 up, 8 in 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:38 vm04 bash[20742]: audit 2026-03-10T10:29:37.647515+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:38.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:38 vm04 bash[20742]: audit 2026-03-10T10:29:37.647515+0000 mon.a (mon.0) 3504 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:39.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:29:38 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: cluster 2026-03-10T10:29:38.564355+0000 mgr.y (mgr.24422) 607 : cluster [DBG] pgmap v1085: 268 pgs: 17 creating+peering, 15 unknown, 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: cluster 2026-03-10T10:29:38.564355+0000 mgr.y (mgr.24422) 607 : cluster [DBG] pgmap v1085: 268 pgs: 17 creating+peering, 15 unknown, 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: cluster 2026-03-10T10:29:38.590482+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: cluster 2026-03-10T10:29:38.590482+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: audit 2026-03-10T10:29:38.618928+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: audit 2026-03-10T10:29:38.618928+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: cluster 2026-03-10T10:29:38.625200+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: cluster 2026-03-10T10:29:38.625200+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: audit 2026-03-10T10:29:38.659809+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: audit 2026-03-10T10:29:38.659809+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: audit 2026-03-10T10:29:38.875490+0000 mgr.y (mgr.24422) 608 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:39 vm04 bash[28289]: audit 2026-03-10T10:29:38.875490+0000 mgr.y (mgr.24422) 608 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: cluster 2026-03-10T10:29:38.564355+0000 mgr.y (mgr.24422) 607 : cluster [DBG] pgmap v1085: 268 pgs: 17 creating+peering, 15 unknown, 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: cluster 2026-03-10T10:29:38.564355+0000 mgr.y (mgr.24422) 607 : cluster [DBG] pgmap v1085: 268 pgs: 17 creating+peering, 15 unknown, 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: cluster 2026-03-10T10:29:38.590482+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: cluster 2026-03-10T10:29:38.590482+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: audit 2026-03-10T10:29:38.618928+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: audit 2026-03-10T10:29:38.618928+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: cluster 2026-03-10T10:29:38.625200+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: cluster 2026-03-10T10:29:38.625200+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: audit 2026-03-10T10:29:38.659809+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: audit 2026-03-10T10:29:38.659809+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: audit 2026-03-10T10:29:38.875490+0000 mgr.y (mgr.24422) 608 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:39 vm04 bash[20742]: audit 2026-03-10T10:29:38.875490+0000 mgr.y (mgr.24422) 608 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: cluster 2026-03-10T10:29:38.564355+0000 mgr.y (mgr.24422) 607 : cluster [DBG] pgmap v1085: 268 pgs: 17 creating+peering, 15 unknown, 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: cluster 2026-03-10T10:29:38.564355+0000 mgr.y (mgr.24422) 607 : cluster [DBG] pgmap v1085: 268 pgs: 17 creating+peering, 15 unknown, 236 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: cluster 2026-03-10T10:29:38.590482+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: cluster 2026-03-10T10:29:38.590482+0000 mon.a (mon.0) 3505 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: audit 2026-03-10T10:29:38.618928+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: audit 2026-03-10T10:29:38.618928+0000 mon.a (mon.0) 3506 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-144","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: cluster 2026-03-10T10:29:38.625200+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: cluster 2026-03-10T10:29:38.625200+0000 mon.a (mon.0) 3507 : cluster [DBG] osdmap e699: 8 total, 8 up, 8 in 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: audit 2026-03-10T10:29:38.659809+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: audit 2026-03-10T10:29:38.659809+0000 mon.a (mon.0) 3508 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: audit 2026-03-10T10:29:38.875490+0000 mgr.y (mgr.24422) 608 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:39 vm07 bash[23367]: audit 2026-03-10T10:29:38.875490+0000 mgr.y (mgr.24422) 608 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:40 vm04 bash[28289]: audit 2026-03-10T10:29:39.622028+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:40 vm04 bash[28289]: audit 2026-03-10T10:29:39.622028+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:40 vm04 bash[28289]: cluster 2026-03-10T10:29:39.625321+0000 mon.a (mon.0) 3510 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:40 vm04 bash[28289]: cluster 2026-03-10T10:29:39.625321+0000 mon.a (mon.0) 3510 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:40 vm04 bash[28289]: audit 2026-03-10T10:29:39.626561+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:40 vm04 bash[28289]: audit 2026-03-10T10:29:39.626561+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:40 vm04 bash[28289]: audit 2026-03-10T10:29:40.629355+0000 mon.a (mon.0) 3512 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:40 vm04 bash[28289]: audit 2026-03-10T10:29:40.629355+0000 mon.a (mon.0) 3512 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:40 vm04 bash[28289]: cluster 2026-03-10T10:29:40.641221+0000 mon.a (mon.0) 3513 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:40 vm04 bash[28289]: cluster 2026-03-10T10:29:40.641221+0000 mon.a (mon.0) 3513 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:40 vm04 bash[20742]: audit 2026-03-10T10:29:39.622028+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:40 vm04 bash[20742]: audit 2026-03-10T10:29:39.622028+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:40 vm04 bash[20742]: cluster 2026-03-10T10:29:39.625321+0000 mon.a (mon.0) 3510 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:40 vm04 bash[20742]: cluster 2026-03-10T10:29:39.625321+0000 mon.a (mon.0) 3510 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:40 vm04 bash[20742]: audit 2026-03-10T10:29:39.626561+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:40 vm04 bash[20742]: audit 2026-03-10T10:29:39.626561+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:40 vm04 bash[20742]: audit 2026-03-10T10:29:40.629355+0000 mon.a (mon.0) 3512 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:40 vm04 bash[20742]: audit 2026-03-10T10:29:40.629355+0000 mon.a (mon.0) 3512 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:40 vm04 bash[20742]: cluster 2026-03-10T10:29:40.641221+0000 mon.a (mon.0) 3513 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-10T10:29:40.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:40 vm04 bash[20742]: cluster 2026-03-10T10:29:40.641221+0000 mon.a (mon.0) 3513 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-10T10:29:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:40 vm07 bash[23367]: audit 2026-03-10T10:29:39.622028+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:40 vm07 bash[23367]: audit 2026-03-10T10:29:39.622028+0000 mon.a (mon.0) 3509 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:40 vm07 bash[23367]: cluster 2026-03-10T10:29:39.625321+0000 mon.a (mon.0) 3510 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-10T10:29:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:40 vm07 bash[23367]: cluster 2026-03-10T10:29:39.625321+0000 mon.a (mon.0) 3510 : cluster [DBG] osdmap e700: 8 total, 8 up, 8 in 2026-03-10T10:29:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:40 vm07 bash[23367]: audit 2026-03-10T10:29:39.626561+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:40 vm07 bash[23367]: audit 2026-03-10T10:29:39.626561+0000 mon.a (mon.0) 3511 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:40 vm07 bash[23367]: audit 2026-03-10T10:29:40.629355+0000 mon.a (mon.0) 3512 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:40 vm07 bash[23367]: audit 2026-03-10T10:29:40.629355+0000 mon.a (mon.0) 3512 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:40 vm07 bash[23367]: cluster 2026-03-10T10:29:40.641221+0000 mon.a (mon.0) 3513 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-10T10:29:41.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:40 vm07 bash[23367]: cluster 2026-03-10T10:29:40.641221+0000 mon.a (mon.0) 3513 : cluster [DBG] osdmap e701: 8 total, 8 up, 8 in 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:41 vm04 bash[28289]: cluster 2026-03-10T10:29:40.564703+0000 mgr.y (mgr.24422) 609 : cluster [DBG] pgmap v1088: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 216 B/s wr, 1 op/s 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:41 vm04 bash[28289]: cluster 2026-03-10T10:29:40.564703+0000 mgr.y (mgr.24422) 609 : cluster [DBG] pgmap v1088: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 216 B/s wr, 1 op/s 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:41 vm04 bash[28289]: audit 2026-03-10T10:29:40.641633+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]: dispatch 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:41 vm04 bash[28289]: audit 2026-03-10T10:29:40.641633+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]: dispatch 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:41 vm04 bash[28289]: cluster 2026-03-10T10:29:41.629400+0000 mon.a (mon.0) 3515 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:41 vm04 bash[28289]: cluster 2026-03-10T10:29:41.629400+0000 mon.a (mon.0) 3515 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:41 vm04 bash[28289]: audit 2026-03-10T10:29:41.632458+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]': finished 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:41 vm04 bash[28289]: audit 2026-03-10T10:29:41.632458+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]': finished 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:41 vm04 bash[28289]: cluster 2026-03-10T10:29:41.641113+0000 mon.a (mon.0) 3517 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:41 vm04 bash[28289]: cluster 2026-03-10T10:29:41.641113+0000 mon.a (mon.0) 3517 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:41 vm04 bash[20742]: cluster 2026-03-10T10:29:40.564703+0000 mgr.y (mgr.24422) 609 : cluster [DBG] pgmap v1088: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 216 B/s wr, 1 op/s 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:41 vm04 bash[20742]: cluster 2026-03-10T10:29:40.564703+0000 mgr.y (mgr.24422) 609 : cluster [DBG] pgmap v1088: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 216 B/s wr, 1 op/s 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:41 vm04 bash[20742]: audit 2026-03-10T10:29:40.641633+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]: dispatch 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:41 vm04 bash[20742]: audit 2026-03-10T10:29:40.641633+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]: dispatch 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:41 vm04 bash[20742]: cluster 2026-03-10T10:29:41.629400+0000 mon.a (mon.0) 3515 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:41 vm04 bash[20742]: cluster 2026-03-10T10:29:41.629400+0000 mon.a (mon.0) 3515 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:41 vm04 bash[20742]: audit 2026-03-10T10:29:41.632458+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]': finished 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:41 vm04 bash[20742]: audit 2026-03-10T10:29:41.632458+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]': finished 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:41 vm04 bash[20742]: cluster 2026-03-10T10:29:41.641113+0000 mon.a (mon.0) 3517 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-10T10:29:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:41 vm04 bash[20742]: cluster 2026-03-10T10:29:41.641113+0000 mon.a (mon.0) 3517 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-10T10:29:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:41 vm07 bash[23367]: cluster 2026-03-10T10:29:40.564703+0000 mgr.y (mgr.24422) 609 : cluster [DBG] pgmap v1088: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 216 B/s wr, 1 op/s 2026-03-10T10:29:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:41 vm07 bash[23367]: cluster 2026-03-10T10:29:40.564703+0000 mgr.y (mgr.24422) 609 : cluster [DBG] pgmap v1088: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 216 B/s wr, 1 op/s 2026-03-10T10:29:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:41 vm07 bash[23367]: audit 2026-03-10T10:29:40.641633+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]: dispatch 2026-03-10T10:29:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:41 vm07 bash[23367]: audit 2026-03-10T10:29:40.641633+0000 mon.a (mon.0) 3514 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]: dispatch 2026-03-10T10:29:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:41 vm07 bash[23367]: cluster 2026-03-10T10:29:41.629400+0000 mon.a (mon.0) 3515 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:41 vm07 bash[23367]: cluster 2026-03-10T10:29:41.629400+0000 mon.a (mon.0) 3515 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:41 vm07 bash[23367]: audit 2026-03-10T10:29:41.632458+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]': finished 2026-03-10T10:29:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:41 vm07 bash[23367]: audit 2026-03-10T10:29:41.632458+0000 mon.a (mon.0) 3516 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-144", "mode": "readproxy"}]': finished 2026-03-10T10:29:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:41 vm07 bash[23367]: cluster 2026-03-10T10:29:41.641113+0000 mon.a (mon.0) 3517 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-10T10:29:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:41 vm07 bash[23367]: cluster 2026-03-10T10:29:41.641113+0000 mon.a (mon.0) 3517 : cluster [DBG] osdmap e702: 8 total, 8 up, 8 in 2026-03-10T10:29:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:29:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:29:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:29:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:43 vm04 bash[28289]: cluster 2026-03-10T10:29:42.565050+0000 mgr.y (mgr.24422) 610 : cluster [DBG] pgmap v1091: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:43 vm04 bash[28289]: cluster 2026-03-10T10:29:42.565050+0000 mgr.y (mgr.24422) 610 : cluster [DBG] pgmap v1091: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:43 vm04 bash[28289]: audit 2026-03-10T10:29:43.186985+0000 mon.a (mon.0) 3518 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:43 vm04 bash[28289]: audit 2026-03-10T10:29:43.186985+0000 mon.a (mon.0) 3518 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:43 vm04 bash[20742]: cluster 2026-03-10T10:29:42.565050+0000 mgr.y (mgr.24422) 610 : cluster [DBG] pgmap v1091: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:43 vm04 bash[20742]: cluster 2026-03-10T10:29:42.565050+0000 mgr.y (mgr.24422) 610 : cluster [DBG] pgmap v1091: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:43 vm04 bash[20742]: audit 2026-03-10T10:29:43.186985+0000 mon.a (mon.0) 3518 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:43 vm04 bash[20742]: audit 2026-03-10T10:29:43.186985+0000 mon.a (mon.0) 3518 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:43 vm07 bash[23367]: cluster 2026-03-10T10:29:42.565050+0000 mgr.y (mgr.24422) 610 : cluster [DBG] pgmap v1091: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s 
wr, 1 op/s 2026-03-10T10:29:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:43 vm07 bash[23367]: cluster 2026-03-10T10:29:42.565050+0000 mgr.y (mgr.24422) 610 : cluster [DBG] pgmap v1091: 268 pgs: 17 creating+peering, 251 active+clean; 4.3 MiB data, 1022 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-10T10:29:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:43 vm07 bash[23367]: audit 2026-03-10T10:29:43.186985+0000 mon.a (mon.0) 3518 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:43 vm07 bash[23367]: audit 2026-03-10T10:29:43.186985+0000 mon.a (mon.0) 3518 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:45 vm04 bash[28289]: cluster 2026-03-10T10:29:44.565987+0000 mgr.y (mgr.24422) 611 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 172 B/s wr, 2 op/s 2026-03-10T10:29:46.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:45 vm04 bash[28289]: cluster 2026-03-10T10:29:44.565987+0000 mgr.y (mgr.24422) 611 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 172 B/s wr, 2 op/s 2026-03-10T10:29:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:45 vm04 bash[20742]: cluster 2026-03-10T10:29:44.565987+0000 mgr.y (mgr.24422) 611 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 172 B/s wr, 2 op/s 2026-03-10T10:29:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:45 vm04 bash[20742]: cluster 2026-03-10T10:29:44.565987+0000 mgr.y (mgr.24422) 611 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 172 B/s wr, 2 op/s 2026-03-10T10:29:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:45 vm07 bash[23367]: cluster 2026-03-10T10:29:44.565987+0000 mgr.y (mgr.24422) 611 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 172 B/s wr, 2 op/s 2026-03-10T10:29:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:45 vm07 bash[23367]: cluster 2026-03-10T10:29:44.565987+0000 mgr.y (mgr.24422) 611 : cluster [DBG] pgmap v1092: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.8 KiB/s rd, 172 B/s wr, 2 op/s 2026-03-10T10:29:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:47 vm04 bash[28289]: cluster 2026-03-10T10:29:46.566397+0000 mgr.y (mgr.24422) 612 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 885 B/s rd, 0 op/s 2026-03-10T10:29:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:47 vm04 bash[28289]: cluster 2026-03-10T10:29:46.566397+0000 mgr.y (mgr.24422) 612 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 885 B/s rd, 0 op/s 2026-03-10T10:29:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:47 vm04 bash[28289]: cluster 2026-03-10T10:29:47.604595+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled 
(POOL_APP_NOT_ENABLED) 2026-03-10T10:29:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:47 vm04 bash[28289]: cluster 2026-03-10T10:29:47.604595+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:47 vm04 bash[20742]: cluster 2026-03-10T10:29:46.566397+0000 mgr.y (mgr.24422) 612 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 885 B/s rd, 0 op/s 2026-03-10T10:29:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:47 vm04 bash[20742]: cluster 2026-03-10T10:29:46.566397+0000 mgr.y (mgr.24422) 612 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 885 B/s rd, 0 op/s 2026-03-10T10:29:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:47 vm04 bash[20742]: cluster 2026-03-10T10:29:47.604595+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:47 vm04 bash[20742]: cluster 2026-03-10T10:29:47.604595+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:47 vm07 bash[23367]: cluster 2026-03-10T10:29:46.566397+0000 mgr.y (mgr.24422) 612 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 885 B/s rd, 0 op/s 2026-03-10T10:29:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:47 vm07 bash[23367]: cluster 2026-03-10T10:29:46.566397+0000 mgr.y (mgr.24422) 612 : cluster [DBG] pgmap v1093: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 885 B/s rd, 0 op/s 2026-03-10T10:29:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:47 vm07 bash[23367]: cluster 2026-03-10T10:29:47.604595+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:47 vm07 bash[23367]: cluster 2026-03-10T10:29:47.604595+0000 mon.a (mon.0) 3519 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:29:49.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:29:48 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:29:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:49 vm04 bash[28289]: cluster 2026-03-10T10:29:48.567091+0000 mgr.y (mgr.24422) 613 : cluster [DBG] pgmap v1094: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:29:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:49 vm04 bash[28289]: cluster 2026-03-10T10:29:48.567091+0000 mgr.y (mgr.24422) 613 : cluster [DBG] pgmap v1094: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:29:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:49 vm04 bash[28289]: audit 2026-03-10T10:29:48.879299+0000 mgr.y (mgr.24422) 614 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:50.203 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:49 vm04 bash[28289]: audit 2026-03-10T10:29:48.879299+0000 mgr.y (mgr.24422) 614 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:49 vm04 bash[20742]: cluster 2026-03-10T10:29:48.567091+0000 mgr.y (mgr.24422) 613 : cluster [DBG] pgmap v1094: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:29:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:49 vm04 bash[20742]: cluster 2026-03-10T10:29:48.567091+0000 mgr.y (mgr.24422) 613 : cluster [DBG] pgmap v1094: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:29:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:49 vm04 bash[20742]: audit 2026-03-10T10:29:48.879299+0000 mgr.y (mgr.24422) 614 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:49 vm04 bash[20742]: audit 2026-03-10T10:29:48.879299+0000 mgr.y (mgr.24422) 614 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:49 vm07 bash[23367]: cluster 2026-03-10T10:29:48.567091+0000 mgr.y (mgr.24422) 613 : cluster [DBG] pgmap v1094: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:29:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:49 vm07 bash[23367]: cluster 2026-03-10T10:29:48.567091+0000 mgr.y (mgr.24422) 613 : cluster [DBG] pgmap v1094: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T10:29:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:49 vm07 bash[23367]: audit 2026-03-10T10:29:48.879299+0000 mgr.y (mgr.24422) 614 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:49 vm07 bash[23367]: audit 2026-03-10T10:29:48.879299+0000 mgr.y (mgr.24422) 614 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:29:52.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:52 vm04 bash[28289]: cluster 2026-03-10T10:29:50.567703+0000 mgr.y (mgr.24422) 615 : cluster [DBG] pgmap v1095: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:52.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:52 vm04 bash[28289]: cluster 2026-03-10T10:29:50.567703+0000 mgr.y (mgr.24422) 615 : cluster [DBG] pgmap v1095: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:52.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:52 vm04 bash[28289]: audit 2026-03-10T10:29:51.750610+0000 mon.a (mon.0) 3520 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:52.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:52 vm04 bash[28289]: audit 2026-03-10T10:29:51.750610+0000 mon.a (mon.0) 3520 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:52.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:52 vm04 bash[20742]: cluster 2026-03-10T10:29:50.567703+0000 mgr.y (mgr.24422) 615 : cluster [DBG] pgmap v1095: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:52.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:52 vm04 bash[20742]: cluster 2026-03-10T10:29:50.567703+0000 mgr.y (mgr.24422) 615 : cluster [DBG] pgmap v1095: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:52.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:52 vm04 bash[20742]: audit 2026-03-10T10:29:51.750610+0000 mon.a (mon.0) 3520 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:52.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:52 vm04 bash[20742]: audit 2026-03-10T10:29:51.750610+0000 mon.a (mon.0) 3520 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:52.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:52 vm07 bash[23367]: cluster 2026-03-10T10:29:50.567703+0000 mgr.y (mgr.24422) 615 : cluster [DBG] pgmap v1095: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:52.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:52 vm07 bash[23367]: cluster 2026-03-10T10:29:50.567703+0000 mgr.y (mgr.24422) 615 : cluster [DBG] pgmap v1095: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:52.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:52 vm07 bash[23367]: audit 2026-03-10T10:29:51.750610+0000 mon.a (mon.0) 3520 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:52.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:52 vm07 bash[23367]: audit 2026-03-10T10:29:51.750610+0000 mon.a (mon.0) 3520 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:53 vm04 bash[28289]: audit 2026-03-10T10:29:52.054059+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:53 vm04 bash[28289]: audit 2026-03-10T10:29:52.054059+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:53 vm04 bash[28289]: cluster 2026-03-10T10:29:52.058169+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:53 vm04 bash[28289]: cluster 2026-03-10T10:29:52.058169+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:53 vm04 bash[28289]: audit 2026-03-10T10:29:52.058327+0000 mon.a (mon.0) 3523 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:53 vm04 bash[28289]: audit 2026-03-10T10:29:52.058327+0000 mon.a (mon.0) 3523 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:53 vm04 bash[20742]: audit 2026-03-10T10:29:52.054059+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:53 vm04 bash[20742]: audit 2026-03-10T10:29:52.054059+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:53 vm04 bash[20742]: cluster 2026-03-10T10:29:52.058169+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:53 vm04 bash[20742]: cluster 2026-03-10T10:29:52.058169+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:53 vm04 bash[20742]: audit 2026-03-10T10:29:52.058327+0000 mon.a (mon.0) 3523 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:53 vm04 bash[20742]: audit 2026-03-10T10:29:52.058327+0000 mon.a (mon.0) 3523 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:29:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:29:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:29:53.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:53 vm07 bash[23367]: audit 2026-03-10T10:29:52.054059+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:53.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:53 vm07 bash[23367]: audit 2026-03-10T10:29:52.054059+0000 mon.a (mon.0) 3521 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:29:53.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:53 vm07 bash[23367]: cluster 2026-03-10T10:29:52.058169+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T10:29:53.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:53 vm07 bash[23367]: cluster 2026-03-10T10:29:52.058169+0000 mon.a (mon.0) 3522 : cluster [DBG] osdmap e703: 8 total, 8 up, 8 in 2026-03-10T10:29:53.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:53 vm07 bash[23367]: audit 2026-03-10T10:29:52.058327+0000 mon.a (mon.0) 3523 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:53.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:53 vm07 bash[23367]: audit 2026-03-10T10:29:52.058327+0000 mon.a (mon.0) 3523 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: cluster 2026-03-10T10:29:52.568119+0000 mgr.y (mgr.24422) 616 : cluster [DBG] pgmap v1097: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: cluster 2026-03-10T10:29:52.568119+0000 mgr.y (mgr.24422) 616 : cluster [DBG] pgmap v1097: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: cluster 2026-03-10T10:29:53.053945+0000 mon.a (mon.0) 3524 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: cluster 2026-03-10T10:29:53.053945+0000 mon.a (mon.0) 3524 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: audit 2026-03-10T10:29:53.056421+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: audit 2026-03-10T10:29:53.056421+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: cluster 2026-03-10T10:29:53.059614+0000 mon.a (mon.0) 3526 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: cluster 2026-03-10T10:29:53.059614+0000 mon.a (mon.0) 3526 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: audit 2026-03-10T10:29:53.095809+0000 mon.a (mon.0) 3527 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: audit 2026-03-10T10:29:53.095809+0000 mon.a (mon.0) 3527 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: audit 2026-03-10T10:29:53.096095+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:54 vm04 bash[28289]: audit 2026-03-10T10:29:53.096095+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: cluster 2026-03-10T10:29:52.568119+0000 mgr.y (mgr.24422) 616 : cluster [DBG] pgmap v1097: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: cluster 2026-03-10T10:29:52.568119+0000 mgr.y (mgr.24422) 616 : cluster [DBG] pgmap v1097: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: cluster 2026-03-10T10:29:53.053945+0000 mon.a (mon.0) 3524 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: cluster 2026-03-10T10:29:53.053945+0000 mon.a (mon.0) 3524 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: audit 2026-03-10T10:29:53.056421+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: audit 2026-03-10T10:29:53.056421+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: cluster 2026-03-10T10:29:53.059614+0000 mon.a (mon.0) 3526 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: cluster 2026-03-10T10:29:53.059614+0000 mon.a (mon.0) 3526 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: audit 2026-03-10T10:29:53.095809+0000 mon.a (mon.0) 3527 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: audit 2026-03-10T10:29:53.095809+0000 mon.a (mon.0) 3527 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: audit 2026-03-10T10:29:53.096095+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:54 vm04 bash[20742]: audit 2026-03-10T10:29:53.096095+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: cluster 2026-03-10T10:29:52.568119+0000 mgr.y (mgr.24422) 616 : cluster [DBG] pgmap v1097: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: cluster 2026-03-10T10:29:52.568119+0000 mgr.y (mgr.24422) 616 : cluster [DBG] pgmap v1097: 268 pgs: 268 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: cluster 2026-03-10T10:29:53.053945+0000 mon.a (mon.0) 3524 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: cluster 2026-03-10T10:29:53.053945+0000 mon.a (mon.0) 3524 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: audit 2026-03-10T10:29:53.056421+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: audit 2026-03-10T10:29:53.056421+0000 mon.a (mon.0) 3525 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]': finished 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: cluster 2026-03-10T10:29:53.059614+0000 mon.a (mon.0) 3526 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: cluster 2026-03-10T10:29:53.059614+0000 mon.a (mon.0) 3526 : cluster [DBG] osdmap e704: 8 total, 8 up, 8 in 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: audit 2026-03-10T10:29:53.095809+0000 mon.a (mon.0) 3527 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: audit 2026-03-10T10:29:53.095809+0000 mon.a (mon.0) 3527 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: audit 2026-03-10T10:29:53.096095+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:54.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:54 vm07 bash[23367]: audit 2026-03-10T10:29:53.096095+0000 mon.a (mon.0) 3528 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-144"}]: dispatch 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:55 vm04 bash[28289]: cluster 2026-03-10T10:29:54.071734+0000 mon.a (mon.0) 3529 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:55 vm04 bash[28289]: cluster 2026-03-10T10:29:54.071734+0000 mon.a (mon.0) 3529 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:55 vm04 bash[28289]: cluster 2026-03-10T10:29:55.074937+0000 mon.a (mon.0) 3530 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:55 vm04 bash[28289]: cluster 2026-03-10T10:29:55.074937+0000 mon.a (mon.0) 3530 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:55 vm04 bash[28289]: audit 2026-03-10T10:29:55.075885+0000 mon.a (mon.0) 3531 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:55 vm04 bash[28289]: audit 2026-03-10T10:29:55.075885+0000 mon.a (mon.0) 3531 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:55 vm04 bash[20742]: cluster 2026-03-10T10:29:54.071734+0000 mon.a (mon.0) 3529 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:55 vm04 bash[20742]: cluster 2026-03-10T10:29:54.071734+0000 mon.a (mon.0) 3529 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:55 vm04 bash[20742]: cluster 2026-03-10T10:29:55.074937+0000 mon.a (mon.0) 3530 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:55 vm04 bash[20742]: cluster 2026-03-10T10:29:55.074937+0000 mon.a (mon.0) 3530 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:55 vm04 bash[20742]: audit 2026-03-10T10:29:55.075885+0000 mon.a (mon.0) 3531 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:55.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:55 vm04 bash[20742]: audit 2026-03-10T10:29:55.075885+0000 mon.a (mon.0) 3531 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:55 vm07 bash[23367]: cluster 2026-03-10T10:29:54.071734+0000 mon.a (mon.0) 3529 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T10:29:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:55 vm07 bash[23367]: cluster 2026-03-10T10:29:54.071734+0000 mon.a (mon.0) 3529 : cluster [DBG] osdmap e705: 8 total, 8 up, 8 in 2026-03-10T10:29:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:55 vm07 bash[23367]: cluster 2026-03-10T10:29:55.074937+0000 mon.a (mon.0) 3530 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T10:29:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:55 vm07 bash[23367]: cluster 2026-03-10T10:29:55.074937+0000 mon.a (mon.0) 3530 : cluster [DBG] osdmap e706: 8 total, 8 up, 8 in 2026-03-10T10:29:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:55 vm07 bash[23367]: audit 2026-03-10T10:29:55.075885+0000 mon.a (mon.0) 3531 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:55.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:55 vm07 bash[23367]: audit 2026-03-10T10:29:55.075885+0000 mon.a (mon.0) 3531 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:56 vm04 bash[28289]: cluster 2026-03-10T10:29:54.568556+0000 mgr.y (mgr.24422) 617 : cluster [DBG] pgmap v1100: 236 pgs: 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:56 vm04 bash[28289]: cluster 2026-03-10T10:29:54.568556+0000 mgr.y (mgr.24422) 617 : cluster [DBG] pgmap v1100: 236 pgs: 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:56 vm04 bash[28289]: audit 2026-03-10T10:29:56.075537+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:56 vm04 bash[28289]: audit 2026-03-10T10:29:56.075537+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:56 vm04 bash[28289]: cluster 2026-03-10T10:29:56.079485+0000 mon.a (mon.0) 3533 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:56 vm04 bash[28289]: cluster 2026-03-10T10:29:56.079485+0000 mon.a (mon.0) 3533 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:56 vm04 bash[20742]: cluster 2026-03-10T10:29:54.568556+0000 mgr.y (mgr.24422) 617 : cluster [DBG] pgmap v1100: 236 pgs: 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:56 vm04 bash[20742]: cluster 2026-03-10T10:29:54.568556+0000 mgr.y (mgr.24422) 617 : cluster [DBG] pgmap v1100: 236 pgs: 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:56 vm04 bash[20742]: audit 2026-03-10T10:29:56.075537+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:56 vm04 bash[20742]: audit 2026-03-10T10:29:56.075537+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:56 vm04 bash[20742]: cluster 2026-03-10T10:29:56.079485+0000 mon.a (mon.0) 3533 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T10:29:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:56 vm04 bash[20742]: cluster 2026-03-10T10:29:56.079485+0000 mon.a (mon.0) 3533 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T10:29:56.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:56 vm07 bash[23367]: cluster 2026-03-10T10:29:54.568556+0000 mgr.y (mgr.24422) 617 : cluster [DBG] pgmap v1100: 236 pgs: 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:56.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:56 vm07 bash[23367]: cluster 2026-03-10T10:29:54.568556+0000 mgr.y (mgr.24422) 617 : cluster [DBG] pgmap v1100: 236 pgs: 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s 2026-03-10T10:29:56.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:56 vm07 bash[23367]: audit 2026-03-10T10:29:56.075537+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:56.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:56 vm07 bash[23367]: audit 2026-03-10T10:29:56.075537+0000 mon.a (mon.0) 3532 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-146","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:29:56.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:56 vm07 bash[23367]: cluster 2026-03-10T10:29:56.079485+0000 mon.a (mon.0) 3533 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T10:29:56.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:56 vm07 bash[23367]: cluster 2026-03-10T10:29:56.079485+0000 mon.a (mon.0) 3533 : cluster [DBG] osdmap e707: 8 total, 8 up, 8 in 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:57 vm04 bash[28289]: audit 2026-03-10T10:29:56.128362+0000 mon.a (mon.0) 3534 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:57 vm04 bash[28289]: audit 2026-03-10T10:29:56.128362+0000 mon.a (mon.0) 3534 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:57 vm04 bash[28289]: audit 2026-03-10T10:29:57.093839+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:57 vm04 bash[28289]: audit 2026-03-10T10:29:57.093839+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:57 vm04 bash[28289]: cluster 2026-03-10T10:29:57.096860+0000 mon.a (mon.0) 3536 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:57 vm04 bash[28289]: cluster 2026-03-10T10:29:57.096860+0000 mon.a (mon.0) 3536 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:57 vm04 bash[28289]: audit 2026-03-10T10:29:57.097323+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:57 vm04 bash[28289]: audit 2026-03-10T10:29:57.097323+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:57 vm04 bash[20742]: audit 2026-03-10T10:29:56.128362+0000 mon.a (mon.0) 3534 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:57 vm04 bash[20742]: audit 2026-03-10T10:29:56.128362+0000 mon.a (mon.0) 3534 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:57 vm04 bash[20742]: audit 2026-03-10T10:29:57.093839+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:57 vm04 bash[20742]: audit 2026-03-10T10:29:57.093839+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:57 vm04 bash[20742]: cluster 2026-03-10T10:29:57.096860+0000 mon.a (mon.0) 3536 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:57 vm04 bash[20742]: cluster 2026-03-10T10:29:57.096860+0000 mon.a (mon.0) 3536 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:57 vm04 bash[20742]: audit 2026-03-10T10:29:57.097323+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:29:57.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:57 vm04 bash[20742]: audit 2026-03-10T10:29:57.097323+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:29:57.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:57 vm07 bash[23367]: audit 2026-03-10T10:29:56.128362+0000 mon.a (mon.0) 3534 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:57.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:57 vm07 bash[23367]: audit 2026-03-10T10:29:56.128362+0000 mon.a (mon.0) 3534 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:29:57.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:57 vm07 bash[23367]: audit 2026-03-10T10:29:57.093839+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:57.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:57 vm07 bash[23367]: audit 2026-03-10T10:29:57.093839+0000 mon.a (mon.0) 3535 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:29:57.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:57 vm07 bash[23367]: cluster 2026-03-10T10:29:57.096860+0000 mon.a (mon.0) 3536 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T10:29:57.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:57 vm07 bash[23367]: cluster 2026-03-10T10:29:57.096860+0000 mon.a (mon.0) 3536 : cluster [DBG] osdmap e708: 8 total, 8 up, 8 in 2026-03-10T10:29:57.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:57 vm07 bash[23367]: audit 2026-03-10T10:29:57.097323+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:29:57.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:57 vm07 bash[23367]: audit 2026-03-10T10:29:57.097323+0000 mon.a (mon.0) 3537 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:29:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:58 vm04 bash[28289]: cluster 2026-03-10T10:29:56.568947+0000 mgr.y (mgr.24422) 618 : cluster [DBG] pgmap v1103: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:58 vm04 bash[28289]: cluster 2026-03-10T10:29:56.568947+0000 mgr.y (mgr.24422) 618 : cluster [DBG] pgmap v1103: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:58 vm04 bash[20742]: cluster 2026-03-10T10:29:56.568947+0000 mgr.y (mgr.24422) 618 : cluster [DBG] pgmap v1103: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:58 vm04 bash[20742]: cluster 2026-03-10T10:29:56.568947+0000 mgr.y (mgr.24422) 618 : cluster [DBG] pgmap v1103: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:58.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:58 vm07 bash[23367]: cluster 2026-03-10T10:29:56.568947+0000 mgr.y (mgr.24422) 618 : cluster [DBG] pgmap v1103: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:58.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:58 vm07 bash[23367]: cluster 2026-03-10T10:29:56.568947+0000 mgr.y (mgr.24422) 618 : cluster [DBG] pgmap v1103: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1023 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:29:59.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:29:58 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: audit 2026-03-10T10:29:58.121409+0000 mon.a (mon.0) 3538 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: audit 2026-03-10T10:29:58.121409+0000 mon.a (mon.0) 3538 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: cluster 2026-03-10T10:29:58.125098+0000 mon.a (mon.0) 3539 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: cluster 2026-03-10T10:29:58.125098+0000 mon.a (mon.0) 3539 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: audit 2026-03-10T10:29:58.126062+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]: dispatch 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: audit 2026-03-10T10:29:58.126062+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]: dispatch 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: audit 2026-03-10T10:29:58.198421+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: audit 2026-03-10T10:29:58.198421+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: audit 2026-03-10T10:29:58.199191+0000 mon.a (mon.0) 3542 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: audit 2026-03-10T10:29:58.199191+0000 mon.a (mon.0) 3542 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: cluster 2026-03-10T10:29:59.121599+0000 mon.a (mon.0) 3543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: cluster 2026-03-10T10:29:59.121599+0000 mon.a (mon.0) 3543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: audit 2026-03-10T10:29:59.124117+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]': finished 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: audit 2026-03-10T10:29:59.124117+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]': finished 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: cluster 2026-03-10T10:29:59.129382+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-10T10:29:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:29:59 vm07 bash[23367]: cluster 2026-03-10T10:29:59.129382+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: audit 2026-03-10T10:29:58.121409+0000 mon.a (mon.0) 3538 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: audit 2026-03-10T10:29:58.121409+0000 mon.a (mon.0) 3538 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: cluster 2026-03-10T10:29:58.125098+0000 mon.a (mon.0) 3539 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: cluster 2026-03-10T10:29:58.125098+0000 mon.a (mon.0) 3539 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: audit 2026-03-10T10:29:58.126062+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]: dispatch 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: audit 2026-03-10T10:29:58.126062+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]: dispatch 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: audit 2026-03-10T10:29:58.198421+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: audit 2026-03-10T10:29:58.198421+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: audit 2026-03-10T10:29:58.199191+0000 mon.a (mon.0) 3542 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: audit 2026-03-10T10:29:58.199191+0000 mon.a (mon.0) 3542 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: cluster 2026-03-10T10:29:59.121599+0000 mon.a (mon.0) 3543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: cluster 2026-03-10T10:29:59.121599+0000 mon.a (mon.0) 3543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: audit 2026-03-10T10:29:59.124117+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]': finished 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: audit 2026-03-10T10:29:59.124117+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]': finished 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: cluster 2026-03-10T10:29:59.129382+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:29:59 vm04 bash[28289]: cluster 2026-03-10T10:29:59.129382+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: audit 2026-03-10T10:29:58.121409+0000 mon.a (mon.0) 3538 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: audit 2026-03-10T10:29:58.121409+0000 mon.a (mon.0) 3538 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier set-overlay", "pool": "test-rados-api-vm04-59491-111", "overlaypool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: cluster 2026-03-10T10:29:58.125098+0000 mon.a (mon.0) 3539 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: cluster 2026-03-10T10:29:58.125098+0000 mon.a (mon.0) 3539 : cluster [DBG] osdmap e709: 8 total, 8 up, 8 in 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: audit 2026-03-10T10:29:58.126062+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]: dispatch 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: audit 2026-03-10T10:29:58.126062+0000 mon.a (mon.0) 3540 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]: dispatch 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: audit 2026-03-10T10:29:58.198421+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: audit 2026-03-10T10:29:58.198421+0000 mon.a (mon.0) 3541 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: audit 2026-03-10T10:29:58.199191+0000 mon.a (mon.0) 3542 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: audit 2026-03-10T10:29:58.199191+0000 mon.a (mon.0) 3542 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: cluster 2026-03-10T10:29:59.121599+0000 mon.a (mon.0) 3543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: cluster 2026-03-10T10:29:59.121599+0000 mon.a (mon.0) 3543 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET) 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: audit 2026-03-10T10:29:59.124117+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]': finished 2026-03-10T10:29:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: audit 2026-03-10T10:29:59.124117+0000 mon.a (mon.0) 3544 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "test-rados-api-vm04-59491-146", "mode": "writeback"}]': finished 2026-03-10T10:29:59.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: cluster 2026-03-10T10:29:59.129382+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-10T10:29:59.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:29:59 vm04 bash[20742]: cluster 2026-03-10T10:29:59.129382+0000 mon.a (mon.0) 3545 : cluster [DBG] osdmap e710: 8 total, 8 up, 8 in 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:29:58.569504+0000 mgr.y (mgr.24422) 619 : cluster [DBG] pgmap v1106: 268 pgs: 18 unknown, 250 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:29:58.569504+0000 mgr.y (mgr.24422) 619 : cluster [DBG] pgmap v1106: 268 pgs: 18 unknown, 250 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: audit 2026-03-10T10:29:58.889469+0000 mgr.y (mgr.24422) 620 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: audit 2026-03-10T10:29:58.889469+0000 mgr.y (mgr.24422) 620 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: audit 2026-03-10T10:29:59.167219+0000 mon.a (mon.0) 3546 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: audit 2026-03-10T10:29:59.167219+0000 mon.a (mon.0) 3546 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000131+0000 mon.a (mon.0) 3547 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000131+0000 mon.a (mon.0) 3547 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000152+0000 mon.a (mon.0) 3548 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000152+0000 mon.a (mon.0) 3548 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000160+0000 mon.a (mon.0) 3549 : cluster [WRN] pool 'test-rados-api-vm04-59491-146' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000160+0000 mon.a (mon.0) 3549 : cluster [WRN] pool 'test-rados-api-vm04-59491-146' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000166+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000166+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000172+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000172+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000177+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1' 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000177+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1' 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000184+0000 mon.a (mon.0) 3553 : cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1' 2026-03-10T10:30:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000184+0000 mon.a (mon.0) 3553 
: cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000189+0000 mon.a (mon.0) 3554 : cluster [WRN] application not enabled on pool 'test-rados-api-vm04-59491-111' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000189+0000 mon.a (mon.0) 3554 : cluster [WRN] application not enabled on pool 'test-rados-api-vm04-59491-111' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000194+0000 mon.a (mon.0) 3555 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:00 vm04 bash[28289]: cluster 2026-03-10T10:30:00.000194+0000 mon.a (mon.0) 3555 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:29:58.569504+0000 mgr.y (mgr.24422) 619 : cluster [DBG] pgmap v1106: 268 pgs: 18 unknown, 250 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:29:58.569504+0000 mgr.y (mgr.24422) 619 : cluster [DBG] pgmap v1106: 268 pgs: 18 unknown, 250 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: audit 2026-03-10T10:29:58.889469+0000 mgr.y (mgr.24422) 620 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: audit 2026-03-10T10:29:58.889469+0000 mgr.y (mgr.24422) 620 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: audit 2026-03-10T10:29:59.167219+0000 mon.a (mon.0) 3546 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: audit 2026-03-10T10:29:59.167219+0000 mon.a (mon.0) 3546 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000131+0000 mon.a (mon.0) 3547 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000131+0000 mon.a (mon.0) 3547 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000152+0000 mon.a (mon.0) 3548 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000152+0000 mon.a (mon.0) 3548 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000160+0000 mon.a (mon.0) 3549 : cluster [WRN] pool 'test-rados-api-vm04-59491-146' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000160+0000 mon.a (mon.0) 3549 : cluster [WRN] pool 'test-rados-api-vm04-59491-146' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000166+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000166+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000172+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000172+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000177+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000177+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000184+0000 mon.a (mon.0) 3553 : cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000184+0000 mon.a (mon.0) 3553 
: cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000189+0000 mon.a (mon.0) 3554 : cluster [WRN] application not enabled on pool 'test-rados-api-vm04-59491-111' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000189+0000 mon.a (mon.0) 3554 : cluster [WRN] application not enabled on pool 'test-rados-api-vm04-59491-111' 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000194+0000 mon.a (mon.0) 3555 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T10:30:00.454 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:00 vm04 bash[20742]: cluster 2026-03-10T10:30:00.000194+0000 mon.a (mon.0) 3555 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:29:58.569504+0000 mgr.y (mgr.24422) 619 : cluster [DBG] pgmap v1106: 268 pgs: 18 unknown, 250 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:29:58.569504+0000 mgr.y (mgr.24422) 619 : cluster [DBG] pgmap v1106: 268 pgs: 18 unknown, 250 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 255 B/s wr, 0 op/s 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: audit 2026-03-10T10:29:58.889469+0000 mgr.y (mgr.24422) 620 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: audit 2026-03-10T10:29:58.889469+0000 mgr.y (mgr.24422) 620 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: audit 2026-03-10T10:29:59.167219+0000 mon.a (mon.0) 3546 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: audit 2026-03-10T10:29:59.167219+0000 mon.a (mon.0) 3546 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]: dispatch 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000131+0000 mon.a (mon.0) 3547 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000131+0000 mon.a (mon.0) 3547 : cluster [WRN] Health detail: HEALTH_WARN 1 cache pools are missing hit_sets; 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000152+0000 mon.a (mon.0) 3548 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000152+0000 mon.a (mon.0) 3548 : cluster [WRN] [WRN] CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000160+0000 mon.a (mon.0) 3549 : cluster [WRN] pool 'test-rados-api-vm04-59491-146' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000160+0000 mon.a (mon.0) 3549 : cluster [WRN] pool 'test-rados-api-vm04-59491-146' with cache_mode writeback needs hit_set_type to be set but it is not 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000166+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000166+0000 mon.a (mon.0) 3550 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled 2026-03-10T10:30:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000172+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T10:30:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000172+0000 mon.a (mon.0) 3551 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio' 2026-03-10T10:30:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000177+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1' 2026-03-10T10:30:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000177+0000 mon.a (mon.0) 3552 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1' 2026-03-10T10:30:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000184+0000 mon.a (mon.0) 3553 : cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1' 2026-03-10T10:30:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000184+0000 mon.a (mon.0) 3553 
: cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1' 2026-03-10T10:30:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000189+0000 mon.a (mon.0) 3554 : cluster [WRN] application not enabled on pool 'test-rados-api-vm04-59491-111' 2026-03-10T10:30:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000189+0000 mon.a (mon.0) 3554 : cluster [WRN] application not enabled on pool 'test-rados-api-vm04-59491-111' 2026-03-10T10:30:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000194+0000 mon.a (mon.0) 3555 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T10:30:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:00 vm07 bash[23367]: cluster 2026-03-10T10:30:00.000194+0000 mon.a (mon.0) 3555 : cluster [WRN] use 'ceph osd pool application enable ', where is 'cephfs', 'rbd', 'rgw', or freeform for custom applications. 2026-03-10T10:30:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:01 vm07 bash[23367]: audit 2026-03-10T10:30:00.164797+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:30:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:01 vm07 bash[23367]: audit 2026-03-10T10:30:00.164797+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:30:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:01 vm07 bash[23367]: cluster 2026-03-10T10:30:00.167686+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-10T10:30:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:01 vm07 bash[23367]: cluster 2026-03-10T10:30:00.167686+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-10T10:30:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:01 vm07 bash[23367]: audit 2026-03-10T10:30:00.168145+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:30:01.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:01 vm07 bash[23367]: audit 2026-03-10T10:30:00.168145+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:01 vm04 bash[28289]: audit 2026-03-10T10:30:00.164797+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:01 vm04 bash[28289]: audit 2026-03-10T10:30:00.164797+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? 
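The health detail block (entries 3547-3555) rolls up the two active warnings: the cache pool still lacking a hit_set_type, and four freshly created test pools carrying no application tag. The application warning is benign for these short-lived RADOS test pools, but the fix entry 3555 points at would look like the following sketch, where the tag for a raw-RADOS pool is freeform ('rados' here is a hypothetical choice; the built-in tags are cephfs, rbd and rgw):

    # hypothetical: tag the base test pool with a freeform application name
    ceph osd pool application enable test-rados-api-vm04-59491-111 rados

Entries 3556-3558 then begin the hit-set configuration: hit_set_count is committed (osdmap e711) and hit_set_period is dispatched.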
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:01 vm04 bash[28289]: cluster 2026-03-10T10:30:00.167686+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:01 vm04 bash[28289]: cluster 2026-03-10T10:30:00.167686+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:01 vm04 bash[28289]: audit 2026-03-10T10:30:00.168145+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:01 vm04 bash[28289]: audit 2026-03-10T10:30:00.168145+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:01 vm04 bash[20742]: audit 2026-03-10T10:30:00.164797+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:01 vm04 bash[20742]: audit 2026-03-10T10:30:00.164797+0000 mon.a (mon.0) 3556 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_count","val": "2"}]': finished 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:01 vm04 bash[20742]: cluster 2026-03-10T10:30:00.167686+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:01 vm04 bash[20742]: cluster 2026-03-10T10:30:00.167686+0000 mon.a (mon.0) 3557 : cluster [DBG] osdmap e711: 8 total, 8 up, 8 in 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:01 vm04 bash[20742]: audit 2026-03-10T10:30:00.168145+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:30:01.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:01 vm04 bash[20742]: audit 2026-03-10T10:30:00.168145+0000 mon.a (mon.0) 3558 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]: dispatch 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: cluster 2026-03-10T10:30:00.569864+0000 mgr.y (mgr.24422) 621 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: cluster 2026-03-10T10:30:00.569864+0000 mgr.y (mgr.24422) 621 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: audit 2026-03-10T10:30:01.204738+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: audit 2026-03-10T10:30:01.204738+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: cluster 2026-03-10T10:30:01.207798+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: cluster 2026-03-10T10:30:01.207798+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: audit 2026-03-10T10:30:01.210662+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: audit 2026-03-10T10:30:01.210662+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: cluster 2026-03-10T10:30:02.204839+0000 mon.a (mon.0) 3562 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: cluster 2026-03-10T10:30:02.204839+0000 mon.a (mon.0) 3562 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: audit 2026-03-10T10:30:02.209173+0000 mon.a (mon.0) 3563 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: audit 2026-03-10T10:30:02.209173+0000 mon.a (mon.0) 3563 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: cluster 2026-03-10T10:30:02.212686+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-10T10:30:02.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:02 vm07 bash[23367]: cluster 2026-03-10T10:30:02.212686+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: cluster 2026-03-10T10:30:00.569864+0000 mgr.y (mgr.24422) 621 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: cluster 2026-03-10T10:30:00.569864+0000 mgr.y (mgr.24422) 621 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: audit 2026-03-10T10:30:01.204738+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: audit 2026-03-10T10:30:01.204738+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: cluster 2026-03-10T10:30:01.207798+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: cluster 2026-03-10T10:30:01.207798+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: audit 2026-03-10T10:30:01.210662+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: audit 2026-03-10T10:30:01.210662+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: cluster 2026-03-10T10:30:02.204839+0000 mon.a (mon.0) 3562 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: cluster 2026-03-10T10:30:02.204839+0000 mon.a (mon.0) 3562 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: audit 2026-03-10T10:30:02.209173+0000 mon.a (mon.0) 3563 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: audit 2026-03-10T10:30:02.209173+0000 mon.a (mon.0) 3563 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: cluster 2026-03-10T10:30:02.212686+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:02 vm04 bash[28289]: cluster 2026-03-10T10:30:02.212686+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: cluster 2026-03-10T10:30:00.569864+0000 mgr.y (mgr.24422) 621 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: cluster 2026-03-10T10:30:00.569864+0000 mgr.y (mgr.24422) 621 : cluster [DBG] pgmap v1109: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: audit 2026-03-10T10:30:01.204738+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:30:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: audit 2026-03-10T10:30:01.204738+0000 mon.a (mon.0) 3559 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_period","val": "600"}]': finished 2026-03-10T10:30:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: cluster 2026-03-10T10:30:01.207798+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-10T10:30:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: cluster 2026-03-10T10:30:01.207798+0000 mon.a (mon.0) 3560 : cluster [DBG] osdmap e712: 8 total, 8 up, 8 in 2026-03-10T10:30:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: audit 2026-03-10T10:30:01.210662+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:30:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: audit 2026-03-10T10:30:01.210662+0000 mon.a (mon.0) 3561 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]: dispatch 2026-03-10T10:30:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: cluster 2026-03-10T10:30:02.204839+0000 mon.a (mon.0) 3562 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:30:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: cluster 2026-03-10T10:30:02.204839+0000 mon.a (mon.0) 3562 : cluster [INF] Health check cleared: CACHE_POOL_NO_HIT_SET (was: 1 cache pools are missing hit_sets) 2026-03-10T10:30:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: audit 2026-03-10T10:30:02.209173+0000 mon.a (mon.0) 3563 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:30:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: audit 2026-03-10T10:30:02.209173+0000 mon.a (mon.0) 3563 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "hit_set_type","val": "bloom"}]': finished 2026-03-10T10:30:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: cluster 2026-03-10T10:30:02.212686+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-10T10:30:02.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:02 vm04 bash[20742]: cluster 2026-03-10T10:30:02.212686+0000 mon.a (mon.0) 3564 : cluster [DBG] osdmap e713: 8 total, 8 up, 8 in 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:03 vm04 bash[28289]: audit 2026-03-10T10:30:02.213883+0000 mon.a (mon.0) 3565 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:03 vm04 bash[28289]: audit 2026-03-10T10:30:02.213883+0000 mon.a (mon.0) 3565 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:03 vm04 bash[28289]: audit 2026-03-10T10:30:03.212258+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:03 vm04 bash[28289]: audit 2026-03-10T10:30:03.212258+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:03 vm04 bash[28289]: cluster 2026-03-10T10:30:03.215220+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:03 vm04 bash[28289]: cluster 2026-03-10T10:30:03.215220+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:03 vm04 bash[28289]: audit 2026-03-10T10:30:03.216379+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:03 vm04 bash[28289]: audit 2026-03-10T10:30:03.216379+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:03 vm04 bash[20742]: audit 2026-03-10T10:30:02.213883+0000 mon.a (mon.0) 3565 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:03 vm04 bash[20742]: audit 2026-03-10T10:30:02.213883+0000 mon.a (mon.0) 3565 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:03 vm04 bash[20742]: audit 2026-03-10T10:30:03.212258+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:03 vm04 bash[20742]: audit 2026-03-10T10:30:03.212258+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:03 vm04 bash[20742]: cluster 2026-03-10T10:30:03.215220+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:03 vm04 bash[20742]: cluster 2026-03-10T10:30:03.215220+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:03 vm04 bash[20742]: audit 2026-03-10T10:30:03.216379+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:03 vm04 bash[20742]: audit 2026-03-10T10:30:03.216379+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:30:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:30:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:30:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:30:03.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:03 vm07 bash[23367]: audit 2026-03-10T10:30:02.213883+0000 mon.a (mon.0) 3565 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:30:03.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:03 vm07 bash[23367]: audit 2026-03-10T10:30:02.213883+0000 mon.a (mon.0) 3565 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]: dispatch 2026-03-10T10:30:03.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:03 vm07 bash[23367]: audit 2026-03-10T10:30:03.212258+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:30:03.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:03 vm07 bash[23367]: audit 2026-03-10T10:30:03.212258+0000 mon.a (mon.0) 3566 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "min_read_recency_for_promote","val": "1"}]': finished 2026-03-10T10:30:03.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:03 vm07 bash[23367]: cluster 2026-03-10T10:30:03.215220+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-10T10:30:03.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:03 vm07 bash[23367]: cluster 2026-03-10T10:30:03.215220+0000 mon.a (mon.0) 3567 : cluster [DBG] osdmap e714: 8 total, 8 up, 8 in 2026-03-10T10:30:03.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:03 vm07 bash[23367]: audit 2026-03-10T10:30:03.216379+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:30:03.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:03 vm07 bash[23367]: audit 2026-03-10T10:30:03.216379+0000 mon.a (mon.0) 3568 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]: dispatch 2026-03-10T10:30:04.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:04 vm07 bash[23367]: cluster 2026-03-10T10:30:02.570219+0000 mgr.y (mgr.24422) 622 : cluster [DBG] pgmap v1112: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-10T10:30:04.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:04 vm07 bash[23367]: cluster 2026-03-10T10:30:02.570219+0000 mgr.y (mgr.24422) 622 : cluster [DBG] pgmap v1112: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-10T10:30:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:04 vm04 bash[28289]: cluster 2026-03-10T10:30:02.570219+0000 mgr.y (mgr.24422) 622 : cluster [DBG] pgmap v1112: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-10T10:30:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:04 vm04 bash[28289]: cluster 2026-03-10T10:30:02.570219+0000 mgr.y (mgr.24422) 622 : cluster [DBG] pgmap v1112: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-10T10:30:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:04 vm04 bash[20742]: cluster 2026-03-10T10:30:02.570219+0000 mgr.y (mgr.24422) 622 : cluster [DBG] pgmap v1112: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-10T10:30:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:04 vm04 bash[20742]: cluster 2026-03-10T10:30:02.570219+0000 mgr.y (mgr.24422) 622 : cluster [DBG] pgmap v1112: 268 pgs: 268 active+clean; 4.3 MiB data, 1007 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s 2026-03-10T10:30:05.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:05 vm07 bash[23367]: audit 2026-03-10T10:30:04.232285+0000 mon.a (mon.0) 3569 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:30:05.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:05 vm07 bash[23367]: audit 2026-03-10T10:30:04.232285+0000 mon.a (mon.0) 3569 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:30:05.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:05 vm07 bash[23367]: cluster 2026-03-10T10:30:04.235975+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-10T10:30:05.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:05 vm07 bash[23367]: cluster 2026-03-10T10:30:04.235975+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-10T10:30:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:05 vm04 bash[28289]: audit 2026-03-10T10:30:04.232285+0000 mon.a (mon.0) 3569 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:30:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:05 vm04 bash[28289]: audit 2026-03-10T10:30:04.232285+0000 mon.a (mon.0) 3569 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:30:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:05 vm04 bash[28289]: cluster 2026-03-10T10:30:04.235975+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-10T10:30:05.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:05 vm04 bash[28289]: cluster 2026-03-10T10:30:04.235975+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-10T10:30:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:05 vm04 bash[20742]: audit 2026-03-10T10:30:04.232285+0000 mon.a (mon.0) 3569 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:30:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:05 vm04 bash[20742]: audit 2026-03-10T10:30:04.232285+0000 mon.a (mon.0) 3569 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-146","var": "target_max_objects","val": "1"}]': finished 2026-03-10T10:30:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:05 vm04 bash[20742]: cluster 2026-03-10T10:30:04.235975+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-10T10:30:05.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:05 vm04 bash[20742]: cluster 2026-03-10T10:30:04.235975+0000 mon.a (mon.0) 3570 : cluster [DBG] osdmap e715: 8 total, 8 up, 8 in 2026-03-10T10:30:06.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:06 vm07 bash[23367]: cluster 2026-03-10T10:30:04.570547+0000 mgr.y (mgr.24422) 623 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T10:30:06.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:06 vm07 bash[23367]: cluster 2026-03-10T10:30:04.570547+0000 mgr.y (mgr.24422) 623 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T10:30:06.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:06 vm07 bash[23367]: cluster 2026-03-10T10:30:05.239652+0000 mon.a (mon.0) 3571 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:30:06.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:06 vm07 bash[23367]: cluster 2026-03-10T10:30:05.239652+0000 mon.a (mon.0) 3571 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:30:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:06 vm04 bash[28289]: cluster 2026-03-10T10:30:04.570547+0000 mgr.y (mgr.24422) 623 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T10:30:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:06 vm04 bash[28289]: cluster 2026-03-10T10:30:04.570547+0000 mgr.y (mgr.24422) 623 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T10:30:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:06 vm04 bash[28289]: cluster 2026-03-10T10:30:05.239652+0000 mon.a (mon.0) 3571 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:30:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:06 vm04 bash[28289]: cluster 2026-03-10T10:30:05.239652+0000 mon.a (mon.0) 3571 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:30:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:06 vm04 bash[20742]: cluster 2026-03-10T10:30:04.570547+0000 mgr.y (mgr.24422) 623 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T10:30:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:06 vm04 bash[20742]: cluster 2026-03-10T10:30:04.570547+0000 mgr.y (mgr.24422) 623 : cluster [DBG] pgmap v1115: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s 2026-03-10T10:30:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:06 vm04 bash[20742]: cluster 2026-03-10T10:30:05.239652+0000 mon.a (mon.0) 3571 : 
cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:30:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:06 vm04 bash[20742]: cluster 2026-03-10T10:30:05.239652+0000 mon.a (mon.0) 3571 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL) 2026-03-10T10:30:08.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:08 vm07 bash[23367]: cluster 2026-03-10T10:30:06.570862+0000 mgr.y (mgr.24422) 624 : cluster [DBG] pgmap v1116: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:08.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:08 vm07 bash[23367]: cluster 2026-03-10T10:30:06.570862+0000 mgr.y (mgr.24422) 624 : cluster [DBG] pgmap v1116: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:08.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:08 vm04 bash[28289]: cluster 2026-03-10T10:30:06.570862+0000 mgr.y (mgr.24422) 624 : cluster [DBG] pgmap v1116: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:08.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:08 vm04 bash[28289]: cluster 2026-03-10T10:30:06.570862+0000 mgr.y (mgr.24422) 624 : cluster [DBG] pgmap v1116: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:08.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:08 vm04 bash[20742]: cluster 2026-03-10T10:30:06.570862+0000 mgr.y (mgr.24422) 624 : cluster [DBG] pgmap v1116: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:08.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:08 vm04 bash[20742]: cluster 2026-03-10T10:30:06.570862+0000 mgr.y (mgr.24422) 624 : cluster [DBG] pgmap v1116: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:09.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:30:08 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:30:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:10 vm07 bash[23367]: cluster 2026-03-10T10:30:08.571457+0000 mgr.y (mgr.24422) 625 : cluster [DBG] pgmap v1117: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:10 vm07 bash[23367]: cluster 2026-03-10T10:30:08.571457+0000 mgr.y (mgr.24422) 625 : cluster [DBG] pgmap v1117: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:10 vm07 bash[23367]: audit 2026-03-10T10:30:08.900109+0000 mgr.y (mgr.24422) 626 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:10.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:10 vm07 bash[23367]: audit 2026-03-10T10:30:08.900109+0000 mgr.y (mgr.24422) 626 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:10 vm04 
bash[28289]: cluster 2026-03-10T10:30:08.571457+0000 mgr.y (mgr.24422) 625 : cluster [DBG] pgmap v1117: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:10 vm04 bash[28289]: cluster 2026-03-10T10:30:08.571457+0000 mgr.y (mgr.24422) 625 : cluster [DBG] pgmap v1117: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:10 vm04 bash[28289]: audit 2026-03-10T10:30:08.900109+0000 mgr.y (mgr.24422) 626 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:10 vm04 bash[28289]: audit 2026-03-10T10:30:08.900109+0000 mgr.y (mgr.24422) 626 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:10 vm04 bash[20742]: cluster 2026-03-10T10:30:08.571457+0000 mgr.y (mgr.24422) 625 : cluster [DBG] pgmap v1117: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:10 vm04 bash[20742]: cluster 2026-03-10T10:30:08.571457+0000 mgr.y (mgr.24422) 625 : cluster [DBG] pgmap v1117: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:10 vm04 bash[20742]: audit 2026-03-10T10:30:08.900109+0000 mgr.y (mgr.24422) 626 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:10 vm04 bash[20742]: audit 2026-03-10T10:30:08.900109+0000 mgr.y (mgr.24422) 626 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:12 vm04 bash[28289]: cluster 2026-03-10T10:30:10.572143+0000 mgr.y (mgr.24422) 627 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:12 vm04 bash[28289]: cluster 2026-03-10T10:30:10.572143+0000 mgr.y (mgr.24422) 627 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:12.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:12 vm04 bash[20742]: cluster 2026-03-10T10:30:10.572143+0000 mgr.y (mgr.24422) 627 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:12.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:12 vm04 bash[20742]: cluster 2026-03-10T10:30:10.572143+0000 mgr.y (mgr.24422) 627 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:12.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:12 vm07 
bash[23367]: cluster 2026-03-10T10:30:10.572143+0000 mgr.y (mgr.24422) 627 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:12.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:12 vm07 bash[23367]: cluster 2026-03-10T10:30:10.572143+0000 mgr.y (mgr.24422) 627 : cluster [DBG] pgmap v1118: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:30:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:30:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:30:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:14 vm07 bash[23367]: cluster 2026-03-10T10:30:12.572511+0000 mgr.y (mgr.24422) 628 : cluster [DBG] pgmap v1119: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:14 vm07 bash[23367]: cluster 2026-03-10T10:30:12.572511+0000 mgr.y (mgr.24422) 628 : cluster [DBG] pgmap v1119: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:14 vm07 bash[23367]: audit 2026-03-10T10:30:13.209421+0000 mon.a (mon.0) 3572 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:14 vm07 bash[23367]: audit 2026-03-10T10:30:13.209421+0000 mon.a (mon.0) 3572 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:14 vm07 bash[23367]: audit 2026-03-10T10:30:13.210175+0000 mon.a (mon.0) 3573 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:14.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:14 vm07 bash[23367]: audit 2026-03-10T10:30:13.210175+0000 mon.a (mon.0) 3573 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:14 vm04 bash[28289]: cluster 2026-03-10T10:30:12.572511+0000 mgr.y (mgr.24422) 628 : cluster [DBG] pgmap v1119: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:14 vm04 bash[28289]: cluster 2026-03-10T10:30:12.572511+0000 mgr.y (mgr.24422) 628 : cluster [DBG] pgmap v1119: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:14 vm04 bash[28289]: audit 2026-03-10T10:30:13.209421+0000 mon.a (mon.0) 3572 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:14 vm04 bash[28289]: audit 2026-03-10T10:30:13.209421+0000 mon.a (mon.0) 3572 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:14 vm04 bash[28289]: audit 2026-03-10T10:30:13.210175+0000 
mon.a (mon.0) 3573 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:14 vm04 bash[28289]: audit 2026-03-10T10:30:13.210175+0000 mon.a (mon.0) 3573 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:14 vm04 bash[20742]: cluster 2026-03-10T10:30:12.572511+0000 mgr.y (mgr.24422) 628 : cluster [DBG] pgmap v1119: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:14 vm04 bash[20742]: cluster 2026-03-10T10:30:12.572511+0000 mgr.y (mgr.24422) 628 : cluster [DBG] pgmap v1119: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:14 vm04 bash[20742]: audit 2026-03-10T10:30:13.209421+0000 mon.a (mon.0) 3572 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:14 vm04 bash[20742]: audit 2026-03-10T10:30:13.209421+0000 mon.a (mon.0) 3572 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:14 vm04 bash[20742]: audit 2026-03-10T10:30:13.210175+0000 mon.a (mon.0) 3573 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:14 vm04 bash[20742]: audit 2026-03-10T10:30:13.210175+0000 mon.a (mon.0) 3573 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:15.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:15 vm07 bash[23367]: audit 2026-03-10T10:30:14.239729+0000 mon.a (mon.0) 3574 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:15.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:15 vm07 bash[23367]: audit 2026-03-10T10:30:14.239729+0000 mon.a (mon.0) 3574 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:15 vm04 bash[28289]: audit 2026-03-10T10:30:14.239729+0000 mon.a (mon.0) 3574 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:15.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:15 vm04 bash[28289]: audit 2026-03-10T10:30:14.239729+0000 mon.a (mon.0) 3574 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:15.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:15 vm04 bash[20742]: audit 2026-03-10T10:30:14.239729+0000 mon.a (mon.0) 3574 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:15.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:15 vm04 bash[20742]: audit 2026-03-10T10:30:14.239729+0000 mon.a (mon.0) 3574 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:16 vm04 bash[28289]: cluster 2026-03-10T10:30:14.573244+0000 mgr.y (mgr.24422) 629 : cluster [DBG] pgmap v1120: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 990 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:16 vm04 bash[28289]: cluster 2026-03-10T10:30:14.573244+0000 mgr.y (mgr.24422) 629 : cluster [DBG] pgmap v1120: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 990 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:16 vm04 bash[28289]: audit 2026-03-10T10:30:15.255678+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:16 vm04 bash[28289]: audit 2026-03-10T10:30:15.255678+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:16 vm04 bash[28289]: cluster 2026-03-10T10:30:15.261105+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:16 vm04 bash[28289]: cluster 2026-03-10T10:30:15.261105+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:16 vm04 bash[28289]: audit 2026-03-10T10:30:15.261585+0000 mon.a (mon.0) 3577 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:16 vm04 bash[28289]: audit 2026-03-10T10:30:15.261585+0000 mon.a (mon.0) 3577 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:16 vm04 bash[20742]: cluster 2026-03-10T10:30:14.573244+0000 mgr.y (mgr.24422) 629 : cluster [DBG] pgmap v1120: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 990 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:16 vm04 bash[20742]: cluster 2026-03-10T10:30:14.573244+0000 mgr.y (mgr.24422) 629 : cluster [DBG] pgmap v1120: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 990 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:16 vm04 bash[20742]: audit 2026-03-10T10:30:15.255678+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:16 vm04 bash[20742]: audit 2026-03-10T10:30:15.255678+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:16 vm04 bash[20742]: cluster 2026-03-10T10:30:15.261105+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:16 vm04 bash[20742]: cluster 2026-03-10T10:30:15.261105+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:16 vm04 bash[20742]: audit 2026-03-10T10:30:15.261585+0000 mon.a (mon.0) 3577 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:16 vm04 bash[20742]: audit 2026-03-10T10:30:15.261585+0000 mon.a (mon.0) 3577 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:16 vm07 bash[23367]: cluster 2026-03-10T10:30:14.573244+0000 mgr.y (mgr.24422) 629 : cluster [DBG] pgmap v1120: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 990 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:30:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:16 vm07 bash[23367]: cluster 2026-03-10T10:30:14.573244+0000 mgr.y (mgr.24422) 629 : cluster [DBG] pgmap v1120: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 990 B/s rd, 0 B/s wr, 1 op/s 2026-03-10T10:30:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:16 vm07 bash[23367]: audit 2026-03-10T10:30:15.255678+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:30:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:16 vm07 bash[23367]: audit 2026-03-10T10:30:15.255678+0000 mon.a (mon.0) 3575 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]': finished 2026-03-10T10:30:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:16 vm07 bash[23367]: cluster 2026-03-10T10:30:15.261105+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T10:30:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:16 vm07 bash[23367]: cluster 2026-03-10T10:30:15.261105+0000 mon.a (mon.0) 3576 : cluster [DBG] osdmap e716: 8 total, 8 up, 8 in 2026-03-10T10:30:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:16 vm07 bash[23367]: audit 2026-03-10T10:30:15.261585+0000 mon.a (mon.0) 3577 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:16 vm07 bash[23367]: audit 2026-03-10T10:30:15.261585+0000 mon.a (mon.0) 3577 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:17 vm04 bash[28289]: audit 2026-03-10T10:30:16.288592+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:17 vm04 bash[28289]: audit 2026-03-10T10:30:16.288592+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:17 vm04 bash[28289]: cluster 2026-03-10T10:30:16.297204+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:17 vm04 bash[28289]: cluster 2026-03-10T10:30:16.297204+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:17 vm04 bash[28289]: audit 2026-03-10T10:30:16.454285+0000 mon.a (mon.0) 3580 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:17 vm04 bash[28289]: audit 2026-03-10T10:30:16.454285+0000 mon.a (mon.0) 3580 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:17 vm04 bash[28289]: audit 2026-03-10T10:30:16.454540+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:17 vm04 bash[28289]: audit 2026-03-10T10:30:16.454540+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:17 vm04 bash[20742]: audit 2026-03-10T10:30:16.288592+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:17 vm04 bash[20742]: audit 2026-03-10T10:30:16.288592+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:17 vm04 bash[20742]: cluster 2026-03-10T10:30:16.297204+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:17 vm04 bash[20742]: cluster 2026-03-10T10:30:16.297204+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:17 vm04 bash[20742]: audit 2026-03-10T10:30:16.454285+0000 mon.a (mon.0) 3580 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:17 vm04 bash[20742]: audit 2026-03-10T10:30:16.454285+0000 mon.a (mon.0) 3580 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:17 vm04 bash[20742]: audit 2026-03-10T10:30:16.454540+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:17.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:17 vm04 bash[20742]: audit 2026-03-10T10:30:16.454540+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:17.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:17 vm07 bash[23367]: audit 2026-03-10T10:30:16.288592+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:30:17.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:17 vm07 bash[23367]: audit 2026-03-10T10:30:16.288592+0000 mon.a (mon.0) 3578 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]': finished 2026-03-10T10:30:17.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:17 vm07 bash[23367]: cluster 2026-03-10T10:30:16.297204+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T10:30:17.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:17 vm07 bash[23367]: cluster 2026-03-10T10:30:16.297204+0000 mon.a (mon.0) 3579 : cluster [DBG] osdmap e717: 8 total, 8 up, 8 in 2026-03-10T10:30:17.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:17 vm07 bash[23367]: audit 2026-03-10T10:30:16.454285+0000 mon.a (mon.0) 3580 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:17.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:17 vm07 bash[23367]: audit 2026-03-10T10:30:16.454285+0000 mon.a (mon.0) 3580 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:17.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:17 vm07 bash[23367]: audit 2026-03-10T10:30:16.454540+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:17.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:17 vm07 bash[23367]: audit 2026-03-10T10:30:16.454540+0000 mon.a (mon.0) 3581 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-146"}]: dispatch 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:18 vm04 bash[28289]: cluster 2026-03-10T10:30:16.573620+0000 mgr.y (mgr.24422) 630 : cluster [DBG] pgmap v1123: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:18 vm04 bash[28289]: cluster 2026-03-10T10:30:16.573620+0000 mgr.y (mgr.24422) 630 : cluster [DBG] pgmap v1123: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:18 vm04 bash[28289]: cluster 2026-03-10T10:30:17.328226+0000 mon.a (mon.0) 3582 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:18 vm04 bash[28289]: cluster 2026-03-10T10:30:17.328226+0000 mon.a (mon.0) 3582 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:18 vm04 bash[28289]: cluster 2026-03-10T10:30:17.328246+0000 mon.a (mon.0) 3583 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:18 vm04 bash[28289]: cluster 2026-03-10T10:30:17.328246+0000 mon.a (mon.0) 3583 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:18 vm04 bash[28289]: cluster 2026-03-10T10:30:17.475487+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:18 vm04 bash[28289]: cluster 2026-03-10T10:30:17.475487+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:18 vm04 bash[20742]: cluster 2026-03-10T10:30:16.573620+0000 mgr.y (mgr.24422) 630 : cluster [DBG] pgmap v1123: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:18 vm04 bash[20742]: cluster 2026-03-10T10:30:16.573620+0000 mgr.y (mgr.24422) 630 : cluster [DBG] pgmap v1123: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:18 vm04 bash[20742]: cluster 2026-03-10T10:30:17.328226+0000 mon.a (mon.0) 3582 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:18 vm04 bash[20742]: cluster 2026-03-10T10:30:17.328226+0000 mon.a (mon.0) 3582 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:18 vm04 bash[20742]: cluster 2026-03-10T10:30:17.328246+0000 mon.a (mon.0) 3583 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at 
or near target size) 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:18 vm04 bash[20742]: cluster 2026-03-10T10:30:17.328246+0000 mon.a (mon.0) 3583 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:18 vm04 bash[20742]: cluster 2026-03-10T10:30:17.475487+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T10:30:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:18 vm04 bash[20742]: cluster 2026-03-10T10:30:17.475487+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T10:30:18.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:18 vm07 bash[23367]: cluster 2026-03-10T10:30:16.573620+0000 mgr.y (mgr.24422) 630 : cluster [DBG] pgmap v1123: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:18.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:18 vm07 bash[23367]: cluster 2026-03-10T10:30:16.573620+0000 mgr.y (mgr.24422) 630 : cluster [DBG] pgmap v1123: 268 pgs: 268 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:18.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:18 vm07 bash[23367]: cluster 2026-03-10T10:30:17.328226+0000 mon.a (mon.0) 3582 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:18.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:18 vm07 bash[23367]: cluster 2026-03-10T10:30:17.328226+0000 mon.a (mon.0) 3582 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:18.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:18 vm07 bash[23367]: cluster 2026-03-10T10:30:17.328246+0000 mon.a (mon.0) 3583 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T10:30:18.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:18 vm07 bash[23367]: cluster 2026-03-10T10:30:17.328246+0000 mon.a (mon.0) 3583 : cluster [INF] Health check cleared: CACHE_POOL_NEAR_FULL (was: 1 cache pools at or near target size) 2026-03-10T10:30:18.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:18 vm07 bash[23367]: cluster 2026-03-10T10:30:17.475487+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T10:30:18.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:18 vm07 bash[23367]: cluster 2026-03-10T10:30:17.475487+0000 mon.a (mon.0) 3584 : cluster [DBG] osdmap e718: 8 total, 8 up, 8 in 2026-03-10T10:30:19.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:30:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:30:19.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:19 vm04 bash[28289]: cluster 2026-03-10T10:30:18.432095+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:19 vm04 bash[28289]: cluster 2026-03-10T10:30:18.432095+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:19 vm04 bash[28289]: audit 2026-03-10T10:30:18.432964+0000 mon.a (mon.0) 3586 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:19 vm04 bash[28289]: audit 2026-03-10T10:30:18.432964+0000 mon.a (mon.0) 3586 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:19 vm04 bash[28289]: cluster 2026-03-10T10:30:18.574125+0000 mgr.y (mgr.24422) 631 : cluster [DBG] pgmap v1126: 268 pgs: 12 creating+peering, 20 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:19 vm04 bash[28289]: cluster 2026-03-10T10:30:18.574125+0000 mgr.y (mgr.24422) 631 : cluster [DBG] pgmap v1126: 268 pgs: 12 creating+peering, 20 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:19 vm04 bash[28289]: audit 2026-03-10T10:30:18.907054+0000 mgr.y (mgr.24422) 632 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:19 vm04 bash[28289]: audit 2026-03-10T10:30:18.907054+0000 mgr.y (mgr.24422) 632 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:19 vm04 bash[20742]: cluster 2026-03-10T10:30:18.432095+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:19 vm04 bash[20742]: cluster 2026-03-10T10:30:18.432095+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:19 vm04 bash[20742]: audit 2026-03-10T10:30:18.432964+0000 mon.a (mon.0) 3586 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:19 vm04 bash[20742]: audit 2026-03-10T10:30:18.432964+0000 mon.a (mon.0) 3586 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:19 vm04 bash[20742]: cluster 2026-03-10T10:30:18.574125+0000 mgr.y (mgr.24422) 631 : cluster [DBG] pgmap v1126: 268 pgs: 12 creating+peering, 20 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:19 vm04 bash[20742]: cluster 2026-03-10T10:30:18.574125+0000 mgr.y (mgr.24422) 631 : cluster [DBG] pgmap v1126: 268 pgs: 12 creating+peering, 20 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:19 vm04 bash[20742]: audit 2026-03-10T10:30:18.907054+0000 mgr.y (mgr.24422) 632 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:19.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:19 vm04 bash[20742]: audit 2026-03-10T10:30:18.907054+0000 mgr.y (mgr.24422) 632 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:19.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:19 vm07 bash[23367]: cluster 2026-03-10T10:30:18.432095+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T10:30:19.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:19 vm07 bash[23367]: cluster 2026-03-10T10:30:18.432095+0000 mon.a (mon.0) 3585 : cluster [DBG] osdmap e719: 8 total, 8 up, 8 in 2026-03-10T10:30:19.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:19 vm07 bash[23367]: audit 2026-03-10T10:30:18.432964+0000 mon.a (mon.0) 3586 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:19.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:19 vm07 bash[23367]: audit 2026-03-10T10:30:18.432964+0000 mon.a (mon.0) 3586 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:19.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:19 vm07 bash[23367]: cluster 2026-03-10T10:30:18.574125+0000 mgr.y (mgr.24422) 631 : cluster [DBG] pgmap v1126: 268 pgs: 12 creating+peering, 20 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:30:19.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:19 vm07 bash[23367]: cluster 2026-03-10T10:30:18.574125+0000 mgr.y (mgr.24422) 631 : cluster [DBG] pgmap v1126: 268 pgs: 12 creating+peering, 20 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail 2026-03-10T10:30:19.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:19 vm07 bash[23367]: audit 2026-03-10T10:30:18.907054+0000 mgr.y (mgr.24422) 632 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:19.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:19 vm07 bash[23367]: audit 2026-03-10T10:30:18.907054+0000 mgr.y (mgr.24422) 632 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:20 vm04 bash[28289]: audit 2026-03-10T10:30:19.450355+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:20 vm04 bash[28289]: audit 2026-03-10T10:30:19.450355+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:20 vm04 bash[28289]: cluster 2026-03-10T10:30:19.461339+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:20 vm04 bash[28289]: cluster 2026-03-10T10:30:19.461339+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:20 vm04 bash[28289]: audit 2026-03-10T10:30:19.498552+0000 mon.a (mon.0) 3589 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:20 vm04 bash[28289]: audit 2026-03-10T10:30:19.498552+0000 mon.a (mon.0) 3589 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:20 vm04 bash[20742]: audit 2026-03-10T10:30:19.450355+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:20 vm04 bash[20742]: audit 2026-03-10T10:30:19.450355+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:20 vm04 bash[20742]: cluster 2026-03-10T10:30:19.461339+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:20 vm04 bash[20742]: cluster 2026-03-10T10:30:19.461339+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:20 vm04 bash[20742]: audit 2026-03-10T10:30:19.498552+0000 mon.a (mon.0) 3589 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:30:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:20 vm04 bash[20742]: audit 2026-03-10T10:30:19.498552+0000 mon.a (mon.0) 3589 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:30:20.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:20 vm07 bash[23367]: audit 2026-03-10T10:30:19.450355+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:20.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:20 vm07 bash[23367]: audit 2026-03-10T10:30:19.450355+0000 mon.a (mon.0) 3587 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-148","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:20.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:20 vm07 bash[23367]: cluster 2026-03-10T10:30:19.461339+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T10:30:20.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:20 vm07 bash[23367]: cluster 2026-03-10T10:30:19.461339+0000 mon.a (mon.0) 3588 : cluster [DBG] osdmap e720: 8 total, 8 up, 8 in 2026-03-10T10:30:20.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:20 vm07 bash[23367]: audit 2026-03-10T10:30:19.498552+0000 mon.a (mon.0) 3589 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:30:20.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:20 vm07 bash[23367]: audit 2026-03-10T10:30:19.498552+0000 mon.a (mon.0) 3589 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]: dispatch 2026-03-10T10:30:21.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:21 vm07 bash[23367]: audit 2026-03-10T10:30:20.460617+0000 mon.a (mon.0) 3590 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:30:21.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:21 vm07 bash[23367]: audit 2026-03-10T10:30:20.460617+0000 mon.a (mon.0) 3590 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:30:21.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:21 vm07 bash[23367]: cluster 2026-03-10T10:30:20.463429+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-10T10:30:21.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:21 vm07 bash[23367]: cluster 2026-03-10T10:30:20.463429+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-10T10:30:21.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:21 vm07 bash[23367]: audit 2026-03-10T10:30:20.476874+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:21.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:21 vm07 bash[23367]: audit 2026-03-10T10:30:20.476874+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:21.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:21 vm07 bash[23367]: cluster 2026-03-10T10:30:20.574468+0000 mgr.y (mgr.24422) 633 : cluster [DBG] pgmap v1129: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T10:30:21.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:21 vm07 bash[23367]: cluster 2026-03-10T10:30:20.574468+0000 mgr.y (mgr.24422) 633 : cluster [DBG] pgmap v1129: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:21 vm04 bash[28289]: audit 2026-03-10T10:30:20.460617+0000 mon.a (mon.0) 3590 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:21 vm04 bash[28289]: audit 2026-03-10T10:30:20.460617+0000 mon.a (mon.0) 3590 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:21 vm04 bash[28289]: cluster 2026-03-10T10:30:20.463429+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:21 vm04 bash[28289]: cluster 2026-03-10T10:30:20.463429+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:21 vm04 bash[28289]: audit 2026-03-10T10:30:20.476874+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:21 vm04 bash[28289]: audit 2026-03-10T10:30:20.476874+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:21 vm04 bash[28289]: cluster 2026-03-10T10:30:20.574468+0000 mgr.y (mgr.24422) 633 : cluster [DBG] pgmap v1129: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:21 vm04 bash[28289]: cluster 2026-03-10T10:30:20.574468+0000 mgr.y (mgr.24422) 633 : cluster [DBG] pgmap v1129: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:21 vm04 bash[20742]: audit 2026-03-10T10:30:20.460617+0000 mon.a (mon.0) 3590 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:21 vm04 bash[20742]: audit 2026-03-10T10:30:20.460617+0000 mon.a (mon.0) 3590 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148", "force_nonempty": "--force-nonempty" }]': finished 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:21 vm04 bash[20742]: cluster 2026-03-10T10:30:20.463429+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:21 vm04 bash[20742]: cluster 2026-03-10T10:30:20.463429+0000 mon.a (mon.0) 3591 : cluster [DBG] osdmap e721: 8 total, 8 up, 8 in 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:21 vm04 bash[20742]: audit 2026-03-10T10:30:20.476874+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:21 vm04 bash[20742]: audit 2026-03-10T10:30:20.476874+0000 mon.a (mon.0) 3592 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:21 vm04 bash[20742]: cluster 2026-03-10T10:30:20.574468+0000 mgr.y (mgr.24422) 633 : cluster [DBG] pgmap v1129: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T10:30:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:21 vm04 bash[20742]: cluster 2026-03-10T10:30:20.574468+0000 mgr.y (mgr.24422) 633 : cluster [DBG] pgmap v1129: 268 pgs: 29 creating+peering, 3 unknown, 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T10:30:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:22 vm07 bash[23367]: audit 2026-03-10T10:30:21.468876+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]': finished 2026-03-10T10:30:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:22 vm07 bash[23367]: audit 2026-03-10T10:30:21.468876+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]': finished 2026-03-10T10:30:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:22 vm07 bash[23367]: cluster 2026-03-10T10:30:21.473947+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-10T10:30:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:22 vm07 bash[23367]: cluster 2026-03-10T10:30:21.473947+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-10T10:30:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:22 vm07 bash[23367]: audit 2026-03-10T10:30:21.504384+0000 mon.a (mon.0) 3595 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:22 vm07 bash[23367]: audit 2026-03-10T10:30:21.504384+0000 mon.a (mon.0) 3595 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:22 vm07 bash[23367]: audit 2026-03-10T10:30:21.504608+0000 mon.a (mon.0) 3596 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:22 vm07 bash[23367]: audit 2026-03-10T10:30:21.504608+0000 mon.a (mon.0) 3596 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:22 vm04 bash[28289]: audit 2026-03-10T10:30:21.468876+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]': finished 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:22 vm04 bash[28289]: audit 2026-03-10T10:30:21.468876+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]': finished 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:22 vm04 bash[28289]: cluster 2026-03-10T10:30:21.473947+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:22 vm04 bash[28289]: cluster 2026-03-10T10:30:21.473947+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:22 vm04 bash[28289]: audit 2026-03-10T10:30:21.504384+0000 mon.a (mon.0) 3595 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:22 vm04 bash[28289]: audit 2026-03-10T10:30:21.504384+0000 mon.a (mon.0) 3595 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:22 vm04 bash[28289]: audit 2026-03-10T10:30:21.504608+0000 mon.a (mon.0) 3596 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:22 vm04 bash[28289]: audit 2026-03-10T10:30:21.504608+0000 mon.a (mon.0) 3596 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:22 vm04 bash[20742]: audit 2026-03-10T10:30:21.468876+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]': finished 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:22 vm04 bash[20742]: audit 2026-03-10T10:30:21.468876+0000 mon.a (mon.0) 3593 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]': finished 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:22 vm04 bash[20742]: cluster 2026-03-10T10:30:21.473947+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:22 vm04 bash[20742]: cluster 2026-03-10T10:30:21.473947+0000 mon.a (mon.0) 3594 : cluster [DBG] osdmap e722: 8 total, 8 up, 8 in 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:22 vm04 bash[20742]: audit 2026-03-10T10:30:21.504384+0000 mon.a (mon.0) 3595 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:22 vm04 bash[20742]: audit 2026-03-10T10:30:21.504384+0000 mon.a (mon.0) 3595 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:22 vm04 bash[20742]: audit 2026-03-10T10:30:21.504608+0000 mon.a (mon.0) 3596 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:22.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:22 vm04 bash[20742]: audit 2026-03-10T10:30:21.504608+0000 mon.a (mon.0) 3596 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-148"}]: dispatch 2026-03-10T10:30:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:30:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:30:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:23 vm04 bash[28289]: cluster 2026-03-10T10:30:22.485243+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:23 vm04 bash[28289]: cluster 2026-03-10T10:30:22.485243+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:23 vm04 bash[28289]: cluster 2026-03-10T10:30:22.574787+0000 mgr.y (mgr.24422) 634 : cluster [DBG] pgmap v1132: 236 pgs: 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:23 vm04 bash[28289]: cluster 2026-03-10T10:30:22.574787+0000 mgr.y (mgr.24422) 634 : cluster [DBG] pgmap v1132: 236 pgs: 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:23 vm04 bash[28289]: cluster 2026-03-10T10:30:22.610050+0000 mon.a (mon.0) 3598 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:23 vm04 bash[28289]: cluster 2026-03-10T10:30:22.610050+0000 mon.a (mon.0) 3598 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:23 vm04 bash[20742]: cluster 2026-03-10T10:30:22.485243+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:23 vm04 bash[20742]: cluster 2026-03-10T10:30:22.485243+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:23 vm04 bash[20742]: cluster 2026-03-10T10:30:22.574787+0000 mgr.y (mgr.24422) 634 : cluster [DBG] pgmap v1132: 236 pgs: 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:23 vm04 bash[20742]: cluster 2026-03-10T10:30:22.574787+0000 mgr.y (mgr.24422) 634 : cluster [DBG] pgmap v1132: 236 pgs: 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:23 vm04 bash[20742]: cluster 2026-03-10T10:30:22.610050+0000 mon.a (mon.0) 3598 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:23 vm04 bash[20742]: cluster 2026-03-10T10:30:22.610050+0000 mon.a (mon.0) 3598 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:23 vm07 bash[23367]: cluster 2026-03-10T10:30:22.485243+0000 mon.a 
(mon.0) 3597 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-10T10:30:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:23 vm07 bash[23367]: cluster 2026-03-10T10:30:22.485243+0000 mon.a (mon.0) 3597 : cluster [DBG] osdmap e723: 8 total, 8 up, 8 in 2026-03-10T10:30:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:23 vm07 bash[23367]: cluster 2026-03-10T10:30:22.574787+0000 mgr.y (mgr.24422) 634 : cluster [DBG] pgmap v1132: 236 pgs: 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:23 vm07 bash[23367]: cluster 2026-03-10T10:30:22.574787+0000 mgr.y (mgr.24422) 634 : cluster [DBG] pgmap v1132: 236 pgs: 236 active+clean; 4.3 MiB data, 1008 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:23 vm07 bash[23367]: cluster 2026-03-10T10:30:22.610050+0000 mon.a (mon.0) 3598 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:23 vm07 bash[23367]: cluster 2026-03-10T10:30:22.610050+0000 mon.a (mon.0) 3598 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:24.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:24 vm04 bash[20742]: cluster 2026-03-10T10:30:23.513650+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-10T10:30:24.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:24 vm04 bash[20742]: cluster 2026-03-10T10:30:23.513650+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-10T10:30:24.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:24 vm04 bash[20742]: audit 2026-03-10T10:30:23.516775+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:24.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:24 vm04 bash[20742]: audit 2026-03-10T10:30:23.516775+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:24.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:24 vm04 bash[28289]: cluster 2026-03-10T10:30:23.513650+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-10T10:30:24.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:24 vm04 bash[28289]: cluster 2026-03-10T10:30:23.513650+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-10T10:30:24.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:24 vm04 bash[28289]: audit 2026-03-10T10:30:23.516775+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:24.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:24 vm04 bash[28289]: audit 2026-03-10T10:30:23.516775+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:25.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:24 vm07 bash[23367]: cluster 2026-03-10T10:30:23.513650+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-10T10:30:25.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:24 vm07 bash[23367]: cluster 2026-03-10T10:30:23.513650+0000 mon.a (mon.0) 3599 : cluster [DBG] osdmap e724: 8 total, 8 up, 8 in 2026-03-10T10:30:25.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:24 vm07 bash[23367]: audit 2026-03-10T10:30:23.516775+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:25.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:24 vm07 bash[23367]: audit 2026-03-10T10:30:23.516775+0000 mon.a (mon.0) 3600 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:25 vm04 bash[20742]: audit 2026-03-10T10:30:24.510656+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:25 vm04 bash[20742]: audit 2026-03-10T10:30:24.510656+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:25 vm04 bash[20742]: cluster 2026-03-10T10:30:24.516264+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:25 vm04 bash[20742]: cluster 2026-03-10T10:30:24.516264+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:25 vm04 bash[20742]: audit 2026-03-10T10:30:24.568513+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:25 vm04 bash[20742]: audit 2026-03-10T10:30:24.568513+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:25 vm04 bash[20742]: audit 2026-03-10T10:30:24.568693+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-150"}]: dispatch 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:25 vm04 bash[20742]: audit 2026-03-10T10:30:24.568693+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-150"}]: dispatch 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:25 vm04 bash[20742]: cluster 2026-03-10T10:30:24.575131+0000 mgr.y (mgr.24422) 635 : cluster [DBG] pgmap v1135: 268 pgs: 15 creating+peering, 17 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:25 vm04 bash[20742]: cluster 2026-03-10T10:30:24.575131+0000 mgr.y (mgr.24422) 635 : cluster [DBG] pgmap v1135: 268 pgs: 15 creating+peering, 17 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:25 vm04 bash[28289]: audit 2026-03-10T10:30:24.510656+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:25 vm04 bash[28289]: audit 2026-03-10T10:30:24.510656+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:25 vm04 bash[28289]: cluster 2026-03-10T10:30:24.516264+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-10T10:30:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:25 vm04 bash[28289]: cluster 2026-03-10T10:30:24.516264+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-10T10:30:25.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:25 vm04 bash[28289]: audit 2026-03-10T10:30:24.568513+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:25.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:25 vm04 bash[28289]: audit 2026-03-10T10:30:24.568513+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:25.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:25 vm04 bash[28289]: audit 2026-03-10T10:30:24.568693+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-150"}]: dispatch 2026-03-10T10:30:25.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:25 vm04 bash[28289]: audit 2026-03-10T10:30:24.568693+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-150"}]: dispatch 2026-03-10T10:30:25.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:25 vm04 bash[28289]: cluster 2026-03-10T10:30:24.575131+0000 mgr.y (mgr.24422) 635 : cluster [DBG] pgmap v1135: 268 pgs: 15 creating+peering, 17 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:25.954 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:25 vm04 bash[28289]: cluster 2026-03-10T10:30:24.575131+0000 mgr.y (mgr.24422) 635 : cluster [DBG] pgmap v1135: 268 pgs: 15 creating+peering, 17 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:25 vm07 bash[23367]: audit 2026-03-10T10:30:24.510656+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:25 vm07 bash[23367]: audit 2026-03-10T10:30:24.510656+0000 mon.a (mon.0) 3601 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-150","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:25 vm07 bash[23367]: cluster 2026-03-10T10:30:24.516264+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-10T10:30:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:25 vm07 bash[23367]: cluster 2026-03-10T10:30:24.516264+0000 mon.a (mon.0) 3602 : cluster [DBG] osdmap e725: 8 total, 8 up, 8 in 2026-03-10T10:30:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:25 vm07 bash[23367]: audit 2026-03-10T10:30:24.568513+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:25 vm07 bash[23367]: audit 2026-03-10T10:30:24.568513+0000 mon.a (mon.0) 3603 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:25 vm07 bash[23367]: audit 2026-03-10T10:30:24.568693+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-150"}]: dispatch 2026-03-10T10:30:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:25 vm07 bash[23367]: audit 2026-03-10T10:30:24.568693+0000 mon.a (mon.0) 3604 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-150"}]: dispatch 2026-03-10T10:30:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:25 vm07 bash[23367]: cluster 2026-03-10T10:30:24.575131+0000 mgr.y (mgr.24422) 635 : cluster [DBG] pgmap v1135: 268 pgs: 15 creating+peering, 17 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:25 vm07 bash[23367]: cluster 2026-03-10T10:30:24.575131+0000 mgr.y (mgr.24422) 635 : cluster [DBG] pgmap v1135: 268 pgs: 15 creating+peering, 17 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:26.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:26 vm04 bash[20742]: cluster 2026-03-10T10:30:25.531031+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-10T10:30:26.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:26 vm04 bash[20742]: cluster 2026-03-10T10:30:25.531031+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-10T10:30:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:26 vm04 bash[28289]: cluster 2026-03-10T10:30:25.531031+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-10T10:30:26.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:26 vm04 bash[28289]: cluster 2026-03-10T10:30:25.531031+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-10T10:30:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:26 vm07 bash[23367]: cluster 2026-03-10T10:30:25.531031+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-10T10:30:27.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:26 vm07 bash[23367]: cluster 2026-03-10T10:30:25.531031+0000 mon.a (mon.0) 3605 : cluster [DBG] osdmap e726: 8 total, 8 up, 8 in 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:27 vm04 bash[20742]: cluster 2026-03-10T10:30:26.541659+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:27 vm04 bash[20742]: cluster 2026-03-10T10:30:26.541659+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:27 vm04 bash[20742]: audit 2026-03-10T10:30:26.559378+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:27 vm04 bash[20742]: audit 2026-03-10T10:30:26.559378+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:27 vm04 bash[20742]: cluster 2026-03-10T10:30:26.575439+0000 mgr.y (mgr.24422) 636 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:27 vm04 bash[20742]: cluster 2026-03-10T10:30:26.575439+0000 mgr.y (mgr.24422) 636 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:27 vm04 bash[28289]: cluster 2026-03-10T10:30:26.541659+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:27 vm04 bash[28289]: cluster 2026-03-10T10:30:26.541659+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:27 vm04 bash[28289]: audit 2026-03-10T10:30:26.559378+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:27 vm04 bash[28289]: audit 2026-03-10T10:30:26.559378+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:27 vm04 bash[28289]: cluster 2026-03-10T10:30:26.575439+0000 mgr.y (mgr.24422) 636 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:27 vm04 bash[28289]: cluster 2026-03-10T10:30:26.575439+0000 mgr.y (mgr.24422) 636 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:27 vm07 bash[23367]: cluster 2026-03-10T10:30:26.541659+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T10:30:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:27 vm07 bash[23367]: cluster 2026-03-10T10:30:26.541659+0000 mon.a (mon.0) 3606 : cluster [DBG] osdmap e727: 8 total, 8 up, 8 in 2026-03-10T10:30:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:27 vm07 bash[23367]: audit 2026-03-10T10:30:26.559378+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:27 vm07 bash[23367]: audit 2026-03-10T10:30:26.559378+0000 mon.a (mon.0) 3607 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]: dispatch 2026-03-10T10:30:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:27 vm07 bash[23367]: cluster 2026-03-10T10:30:26.575439+0000 mgr.y (mgr.24422) 636 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:27 vm07 bash[23367]: cluster 2026-03-10T10:30:26.575439+0000 mgr.y (mgr.24422) 636 : cluster [DBG] pgmap v1138: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: audit 2026-03-10T10:30:27.569708+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: audit 2026-03-10T10:30:27.569708+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: cluster 2026-03-10T10:30:27.572370+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: cluster 2026-03-10T10:30:27.572370+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: cluster 2026-03-10T10:30:27.611358+0000 mon.a (mon.0) 3610 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: cluster 2026-03-10T10:30:27.611358+0000 mon.a (mon.0) 3610 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: audit 2026-03-10T10:30:27.637907+0000 mon.a (mon.0) 3611 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: audit 2026-03-10T10:30:27.637907+0000 mon.a (mon.0) 3611 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: audit 2026-03-10T10:30:27.638108+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-152"}]: dispatch 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: audit 2026-03-10T10:30:27.638108+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-152"}]: dispatch 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: audit 2026-03-10T10:30:28.220054+0000 mon.a (mon.0) 3613 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: audit 2026-03-10T10:30:28.220054+0000 mon.a (mon.0) 3613 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: audit 2026-03-10T10:30:28.220834+0000 mon.a (mon.0) 3614 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:28.913 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:28 vm07 bash[23367]: audit 2026-03-10T10:30:28.220834+0000 mon.a (mon.0) 3614 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: audit 2026-03-10T10:30:27.569708+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: audit 2026-03-10T10:30:27.569708+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: cluster 2026-03-10T10:30:27.572370+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: cluster 2026-03-10T10:30:27.572370+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: cluster 2026-03-10T10:30:27.611358+0000 mon.a (mon.0) 3610 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: cluster 2026-03-10T10:30:27.611358+0000 mon.a (mon.0) 3610 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: audit 2026-03-10T10:30:27.637907+0000 mon.a (mon.0) 3611 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: audit 2026-03-10T10:30:27.637907+0000 mon.a (mon.0) 3611 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: audit 2026-03-10T10:30:27.638108+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-152"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: audit 2026-03-10T10:30:27.638108+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-152"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: audit 2026-03-10T10:30:28.220054+0000 mon.a (mon.0) 3613 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: audit 2026-03-10T10:30:28.220054+0000 mon.a (mon.0) 3613 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: audit 2026-03-10T10:30:28.220834+0000 mon.a (mon.0) 3614 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:28 vm04 bash[20742]: audit 2026-03-10T10:30:28.220834+0000 mon.a (mon.0) 3614 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: audit 2026-03-10T10:30:27.569708+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: audit 2026-03-10T10:30:27.569708+0000 mon.a (mon.0) 3608 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-152","app": "rados","yes_i_really_mean_it": true}]': finished 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: cluster 2026-03-10T10:30:27.572370+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: cluster 2026-03-10T10:30:27.572370+0000 mon.a (mon.0) 3609 : cluster [DBG] osdmap e728: 8 total, 8 up, 8 in 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: cluster 2026-03-10T10:30:27.611358+0000 mon.a (mon.0) 3610 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: cluster 2026-03-10T10:30:27.611358+0000 mon.a (mon.0) 3610 : cluster [WRN] Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: audit 2026-03-10T10:30:27.637907+0000 mon.a (mon.0) 3611 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: audit 2026-03-10T10:30:27.637907+0000 mon.a (mon.0) 3611 : audit [INF] from='client.? 
192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: audit 2026-03-10T10:30:27.638108+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-152"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: audit 2026-03-10T10:30:27.638108+0000 mon.a (mon.0) 3612 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-152"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: audit 2026-03-10T10:30:28.220054+0000 mon.a (mon.0) 3613 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: audit 2026-03-10T10:30:28.220054+0000 mon.a (mon.0) 3613 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: audit 2026-03-10T10:30:28.220834+0000 mon.a (mon.0) 3614 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:28 vm04 bash[28289]: audit 2026-03-10T10:30:28.220834+0000 mon.a (mon.0) 3614 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:29.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:30:28 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:29 vm04 bash[20742]: cluster 2026-03-10T10:30:28.576127+0000 mgr.y (mgr.24422) 637 : cluster [DBG] pgmap v1140: 268 pgs: 14 unknown, 254 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 504 B/s rd, 504 B/s wr, 1 op/s 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:29 vm04 bash[20742]: cluster 2026-03-10T10:30:28.576127+0000 mgr.y (mgr.24422) 637 : cluster [DBG] pgmap v1140: 268 pgs: 14 unknown, 254 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 504 B/s rd, 504 B/s wr, 1 op/s 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:29 vm04 bash[20742]: cluster 2026-03-10T10:30:28.605204+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:29 vm04 bash[20742]: cluster 2026-03-10T10:30:28.605204+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:29 vm04 bash[20742]: audit 2026-03-10T10:30:28.913642+0000 mgr.y (mgr.24422) 638 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:29 vm04 bash[20742]: audit 2026-03-10T10:30:28.913642+0000 mgr.y (mgr.24422) 638 : 
audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:29 vm04 bash[28289]: cluster 2026-03-10T10:30:28.576127+0000 mgr.y (mgr.24422) 637 : cluster [DBG] pgmap v1140: 268 pgs: 14 unknown, 254 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 504 B/s rd, 504 B/s wr, 1 op/s 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:29 vm04 bash[28289]: cluster 2026-03-10T10:30:28.576127+0000 mgr.y (mgr.24422) 637 : cluster [DBG] pgmap v1140: 268 pgs: 14 unknown, 254 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 504 B/s rd, 504 B/s wr, 1 op/s 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:29 vm04 bash[28289]: cluster 2026-03-10T10:30:28.605204+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:29 vm04 bash[28289]: cluster 2026-03-10T10:30:28.605204+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:29 vm04 bash[28289]: audit 2026-03-10T10:30:28.913642+0000 mgr.y (mgr.24422) 638 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:29 vm04 bash[28289]: audit 2026-03-10T10:30:28.913642+0000 mgr.y (mgr.24422) 638 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:29 vm07 bash[23367]: cluster 2026-03-10T10:30:28.576127+0000 mgr.y (mgr.24422) 637 : cluster [DBG] pgmap v1140: 268 pgs: 14 unknown, 254 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 504 B/s rd, 504 B/s wr, 1 op/s 2026-03-10T10:30:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:29 vm07 bash[23367]: cluster 2026-03-10T10:30:28.576127+0000 mgr.y (mgr.24422) 637 : cluster [DBG] pgmap v1140: 268 pgs: 14 unknown, 254 active+clean; 4.3 MiB data, 1009 MiB used, 159 GiB / 160 GiB avail; 504 B/s rd, 504 B/s wr, 1 op/s 2026-03-10T10:30:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:29 vm07 bash[23367]: cluster 2026-03-10T10:30:28.605204+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-10T10:30:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:29 vm07 bash[23367]: cluster 2026-03-10T10:30:28.605204+0000 mon.a (mon.0) 3615 : cluster [DBG] osdmap e729: 8 total, 8 up, 8 in 2026-03-10T10:30:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:29 vm07 bash[23367]: audit 2026-03-10T10:30:28.913642+0000 mgr.y (mgr.24422) 638 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:29 vm07 bash[23367]: audit 2026-03-10T10:30:28.913642+0000 mgr.y (mgr.24422) 638 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:30 vm04 bash[20742]: cluster 2026-03-10T10:30:29.614169+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in 
2026-03-10T10:30:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:30 vm04 bash[20742]: audit 2026-03-10T10:30:29.619315+0000 mon.a (mon.0) 3617 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-154","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:30:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:30 vm04 bash[28289]: cluster 2026-03-10T10:30:29.614169+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in
2026-03-10T10:30:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:30 vm04 bash[28289]: audit 2026-03-10T10:30:29.619315+0000 mon.a (mon.0) 3617 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-154","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:30:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:30 vm07 bash[23367]: cluster 2026-03-10T10:30:29.614169+0000 mon.a (mon.0) 3616 : cluster [DBG] osdmap e730: 8 total, 8 up, 8 in
2026-03-10T10:30:31.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:30 vm07 bash[23367]: audit 2026-03-10T10:30:29.619315+0000 mon.a (mon.0) 3617 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-154","app": "rados","yes_i_really_mean_it": true}]: dispatch
2026-03-10T10:30:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:31 vm04 bash[28289]: cluster 2026-03-10T10:30:30.576554+0000 mgr.y (mgr.24422) 639 : cluster [DBG] pgmap v1143: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 759 B/s wr, 4 op/s
2026-03-10T10:30:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:31 vm04 bash[28289]: audit 2026-03-10T10:30:30.615388+0000 mon.a (mon.0) 3618 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-154","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:30:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:31 vm04 bash[28289]: cluster 2026-03-10T10:30:30.623255+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in
2026-03-10T10:30:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:31 vm04 bash[28289]: audit 2026-03-10T10:30:30.643086+0000 mon.a (mon.0) 3620 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-111","var": "dedup_tier","val": "test-rados-api-vm04-59491-154"}]: dispatch
2026-03-10T10:30:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:31 vm04 bash[28289]: audit 2026-03-10T10:30:30.670183+0000 mon.a (mon.0) 3621 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:30:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:31 vm04 bash[28289]: audit 2026-03-10T10:30:30.670391+0000 mon.a (mon.0) 3622 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-154"}]: dispatch
2026-03-10T10:30:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:31 vm04 bash[28289]: cluster 2026-03-10T10:30:31.627611+0000 mon.a (mon.0) 3623 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in
2026-03-10T10:30:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:31 vm04 bash[20742]: cluster 2026-03-10T10:30:30.576554+0000 mgr.y (mgr.24422) 639 : cluster [DBG] pgmap v1143: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 759 B/s wr, 4 op/s
2026-03-10T10:30:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:31 vm04 bash[20742]: audit 2026-03-10T10:30:30.615388+0000 mon.a (mon.0) 3618 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-154","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:30:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:31 vm04 bash[20742]: cluster 2026-03-10T10:30:30.623255+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in
2026-03-10T10:30:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:31 vm04 bash[20742]: audit 2026-03-10T10:30:30.643086+0000 mon.a (mon.0) 3620 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-111","var": "dedup_tier","val": "test-rados-api-vm04-59491-154"}]: dispatch
2026-03-10T10:30:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:31 vm04 bash[20742]: audit 2026-03-10T10:30:30.670183+0000 mon.a (mon.0) 3621 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:30:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:31 vm04 bash[20742]: audit 2026-03-10T10:30:30.670391+0000 mon.a (mon.0) 3622 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-154"}]: dispatch
2026-03-10T10:30:31.954 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:31 vm04 bash[20742]: cluster 2026-03-10T10:30:31.627611+0000 mon.a (mon.0) 3623 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in
2026-03-10T10:30:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:31 vm07 bash[23367]: cluster 2026-03-10T10:30:30.576554+0000 mgr.y (mgr.24422) 639 : cluster [DBG] pgmap v1143: 268 pgs: 32 unknown, 236 active+clean; 4.3 MiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 2.2 KiB/s rd, 759 B/s wr, 4 op/s
2026-03-10T10:30:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:31 vm07 bash[23367]: audit 2026-03-10T10:30:30.615388+0000 mon.a (mon.0) 3618 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test-rados-api-vm04-59491-154","app": "rados","yes_i_really_mean_it": true}]': finished
2026-03-10T10:30:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:31 vm07 bash[23367]: cluster 2026-03-10T10:30:30.623255+0000 mon.a (mon.0) 3619 : cluster [DBG] osdmap e731: 8 total, 8 up, 8 in
2026-03-10T10:30:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:31 vm07 bash[23367]: audit 2026-03-10T10:30:30.643086+0000 mon.a (mon.0) 3620 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd pool set","pool":"test-rados-api-vm04-59491-111","var": "dedup_tier","val": "test-rados-api-vm04-59491-154"}]: dispatch
2026-03-10T10:30:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:31 vm07 bash[23367]: audit 2026-03-10T10:30:30.670183+0000 mon.a (mon.0) 3621 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove-overlay", "pool": "test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:30:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:31 vm07 bash[23367]: audit 2026-03-10T10:30:30.670391+0000 mon.a (mon.0) 3622 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "test-rados-api-vm04-59491-111", "tierpool": "test-rados-api-vm04-59491-154"}]: dispatch
2026-03-10T10:30:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:31 vm07 bash[23367]: cluster 2026-03-10T10:30:31.627611+0000 mon.a (mon.0) 3623 : cluster [DBG] osdmap e732: 8 total, 8 up, 8 in
2026-03-10T10:30:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:30:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:30:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:30:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:33 vm04 bash[28289]: cluster 2026-03-10T10:30:32.576919+0000 mgr.y (mgr.24422) 640 : cluster [DBG] pgmap v1146: 236 pgs: 236 active+clean; 4.3 MiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T10:30:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:33 vm04 bash[28289]: cluster 2026-03-10T10:30:32.624703+0000 mon.a (mon.0) 3624 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in
2026-03-10T10:30:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:33 vm04 bash[28289]: audit 2026-03-10T10:30:32.625491+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:30:33.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:33 vm04 bash[28289]: cluster 2026-03-10T10:30:32.647046+0000 mon.a (mon.0) 3626 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:30:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:33 vm04 bash[20742]: cluster 2026-03-10T10:30:32.576919+0000 mgr.y (mgr.24422) 640 : cluster [DBG] pgmap v1146: 236 pgs: 236 active+clean; 4.3 MiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T10:30:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:33 vm04 bash[20742]: cluster 2026-03-10T10:30:32.624703+0000 mon.a (mon.0) 3624 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in
2026-03-10T10:30:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:33 vm04 bash[20742]: audit 2026-03-10T10:30:32.625491+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:30:33.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:33 vm04 bash[20742]: cluster 2026-03-10T10:30:32.647046+0000 mon.a (mon.0) 3626 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:30:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:33 vm07 bash[23367]: cluster 2026-03-10T10:30:32.576919+0000 mgr.y (mgr.24422) 640 : cluster [DBG] pgmap v1146: 236 pgs: 236 active+clean; 4.3 MiB data, 1010 MiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
2026-03-10T10:30:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:33 vm07 bash[23367]: cluster 2026-03-10T10:30:32.624703+0000 mon.a (mon.0) 3624 : cluster [DBG] osdmap e733: 8 total, 8 up, 8 in
2026-03-10T10:30:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:33 vm07 bash[23367]: audit 2026-03-10T10:30:32.625491+0000 mon.a (mon.0) 3625 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:30:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:33 vm07 bash[23367]: cluster 2026-03-10T10:30:32.647046+0000 mon.a (mon.0) 3626 : cluster [WRN] Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:30:34.646 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlush (7397 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FailedFlush
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FailedFlush (12319 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.Flush
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.Flush (8197 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushSnap
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushSnap (13333 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.FlushTryFlushRaces
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.FlushTryFlushRaces (8381 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TryFlushReadRace
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TryFlushReadRace (7786 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetRead
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: hmm, no HitSet yet
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: ok, hit_set contains 329:602f83fe:::foo:head
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetRead (8489 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.HitSetTrim
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: first is 1773138549
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,1773138551,1773138552,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,1773138551,1773138552,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,1773138551,1773138552,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,1773138551,1773138552,1773138554,1773138555,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,1773138551,1773138552,1773138554,1773138555,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,1773138551,1773138552,1773138554,1773138555,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,1773138551,1773138552,1773138554,1773138555,1773138557,1773138558,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,1773138551,1773138552,1773138554,1773138555,1773138557,1773138558,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138549,1773138551,1773138552,1773138554,1773138555,1773138557,1773138558,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: got ls 1773138552,1773138554,1773138555,1773138557,1773138558,1773138560,1773138561,0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: first now 1773138552, trimmed
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.HitSetTrim (20483 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.PromoteOn2ndRead
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: foo0
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: verifying foo0 is eventually promoted
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.PromoteOn2ndRead (14169 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ProxyRead
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ProxyRead (18230 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.CachePin
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.CachePin (23394 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetRedirectRead
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetRedirectRead (5009 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.SetChunkRead
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.SetChunkRead (3047 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.ManifestPromoteRead
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.ManifestPromoteRead (3074 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ RUN ] LibRadosTwoPoolsECPP.TrySetDedupTier
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ OK ] LibRadosTwoPoolsECPP.TrySetDedupTier (3023 ms)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [----------] 22 tests from LibRadosTwoPoolsECPP (232140 ms total)
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp:
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [----------] Global test environment tear-down
2026-03-10T10:30:34.647 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [==========] 77 tests from 4 test suites ran. (854339 ms total)
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stdout: api_tier_pp: [ PASSED ] 77 tests.
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59329
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59329
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59514
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59514
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59945
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59945
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59712
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59712
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59596
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59596
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59421
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59421
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59658
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59658
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60094
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60094
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60116
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60116
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59459
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59459
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60012
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60012
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59265
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59265
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59362
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59362
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.648 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59903
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59903
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59257
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59257
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59277
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59277
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60194
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60194
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59538
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59538
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60239
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60239
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59613
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59613
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60063
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60063
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}"
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60255
2026-03-10T10:30:34.649 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60255
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:34 vm04 bash[20742]: audit 2026-03-10T10:30:33.624508+0000 mon.a (mon.0) 3627 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:34 vm04 bash[20742]: cluster 2026-03-10T10:30:33.626228+0000 mon.a (mon.0) 3628 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:34 vm04 bash[20742]: audit 2026-03-10T10:30:33.627140+0000 mon.a (mon.0) 3629 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:34 vm04 bash[20742]: audit 2026-03-10T10:30:34.448479+0000 mon.a (mon.0) 3630 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:34 vm04 bash[20742]: audit 2026-03-10T10:30:34.628546+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:34 vm04 bash[20742]: cluster 2026-03-10T10:30:34.635757+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:34 vm04 bash[28289]: audit 2026-03-10T10:30:33.624508+0000 mon.a (mon.0) 3627 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:34 vm04 bash[28289]: cluster 2026-03-10T10:30:33.626228+0000 mon.a (mon.0) 3628 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:34 vm04 bash[28289]: audit 2026-03-10T10:30:33.627140+0000 mon.a (mon.0) 3629 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:34 vm04 bash[28289]: audit 2026-03-10T10:30:34.448479+0000 mon.a (mon.0) 3630 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:34 vm04 bash[28289]: audit 2026-03-10T10:30:34.628546+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:30:34.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:34 vm04 bash[28289]: cluster 2026-03-10T10:30:34.635757+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in
2026-03-10T10:30:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:34 vm07 bash[23367]: audit 2026-03-10T10:30:33.624508+0000 mon.a (mon.0) 3627 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile rm", "name": "testprofile-test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:30:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:34 vm07 bash[23367]: cluster 2026-03-10T10:30:33.626228+0000 mon.a (mon.0) 3628 : cluster [DBG] osdmap e734: 8 total, 8 up, 8 in
2026-03-10T10:30:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:34 vm07 bash[23367]: audit 2026-03-10T10:30:33.627140+0000 mon.a (mon.0) 3629 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd=[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]: dispatch
2026-03-10T10:30:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:34 vm07 bash[23367]: audit 2026-03-10T10:30:34.448479+0000 mon.a (mon.0) 3630 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:30:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:34 vm07 bash[23367]: audit 2026-03-10T10:30:34.628546+0000 mon.a (mon.0) 3631 : audit [INF] from='client.? 192.168.123.104:0/4160398265' entity='client.admin' cmd='[{"prefix": "osd crush rule rm", "name":"test-rados-api-vm04-59491-111"}]': finished
2026-03-10T10:30:35.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:34 vm07 bash[23367]: cluster 2026-03-10T10:30:34.635757+0000 mon.a (mon.0) 3632 : cluster [DBG] osdmap e735: 8 total, 8 up, 8 in
2026-03-10T10:30:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:35 vm04 bash[20742]: cluster 2026-03-10T10:30:34.577460+0000 mgr.y (mgr.24422) 641 : cluster [DBG] pgmap v1149: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:30:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:35 vm04 bash[20742]: audit 2026-03-10T10:30:34.789477+0000 mon.a (mon.0) 3633 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:30:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:35 vm04 bash[20742]: audit 2026-03-10T10:30:34.790022+0000 mon.a (mon.0) 3634 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:30:35.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:35 vm04 bash[20742]: audit 2026-03-10T10:30:34.796152+0000 mon.a (mon.0) 3635 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:30:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:35 vm04 bash[28289]: cluster 2026-03-10T10:30:34.577460+0000 mgr.y (mgr.24422) 641 : cluster [DBG] pgmap v1149: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:30:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:35 vm04 bash[28289]: audit 2026-03-10T10:30:34.789477+0000 mon.a (mon.0) 3633 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:30:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:35 vm04 bash[28289]: audit 2026-03-10T10:30:34.790022+0000 mon.a (mon.0) 3634 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:30:35.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:35 vm04 bash[28289]: audit 2026-03-10T10:30:34.796152+0000 mon.a (mon.0) 3635 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:30:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:35 vm07 bash[23367]: cluster 2026-03-10T10:30:34.577460+0000 mgr.y (mgr.24422) 641 : cluster [DBG] pgmap v1149: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:30:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:35 vm07 bash[23367]: audit 2026-03-10T10:30:34.789477+0000 mon.a (mon.0) 3633 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:30:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:35 vm07 bash[23367]: audit 2026-03-10T10:30:34.790022+0000 mon.a (mon.0) 3634 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:30:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:35 vm07 bash[23367]: audit 2026-03-10T10:30:34.796152+0000 mon.a (mon.0) 3635 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:30:37.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:37 vm04 bash[20742]: cluster 2026-03-10T10:30:36.577786+0000 mgr.y (mgr.24422) 642 : cluster [DBG] pgmap v1151: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:30:37.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:37 vm04 bash[28289]: cluster 2026-03-10T10:30:36.577786+0000 mgr.y (mgr.24422) 642 : cluster [DBG] pgmap v1151: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:30:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:37 vm07 bash[23367]: cluster 2026-03-10T10:30:36.577786+0000 mgr.y (mgr.24422) 642 : cluster [DBG] pgmap v1151: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:30:39.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:30:38 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:30:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:39 vm04 bash[28289]: cluster 2026-03-10T10:30:38.578236+0000 mgr.y (mgr.24422) 643 : cluster [DBG] pgmap v1152: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:30:39.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:39 vm04 bash[28289]: audit 2026-03-10T10:30:38.924270+0000 mgr.y (mgr.24422) 644 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:30:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:39 vm04 bash[20742]: cluster 2026-03-10T10:30:38.578236+0000 mgr.y (mgr.24422) 643 : cluster [DBG] pgmap v1152: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:30:39.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:39 vm04 bash[20742]: audit 2026-03-10T10:30:38.924270+0000 mgr.y (mgr.24422) 644 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:30:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:39 vm07 bash[23367]: cluster 2026-03-10T10:30:38.578236+0000 mgr.y (mgr.24422) 643 : cluster [DBG] pgmap v1152: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:30:40.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:39 vm07 bash[23367]: audit 2026-03-10T10:30:38.924270+0000 mgr.y (mgr.24422) 644 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:30:41.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:41 vm04 bash[28289]: cluster 2026-03-10T10:30:40.578821+0000 mgr.y (mgr.24422) 645 : cluster [DBG] pgmap v1153: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T10:30:41.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:41 vm04 bash[20742]: cluster 2026-03-10T10:30:40.578821+0000 mgr.y (mgr.24422) 645 : cluster [DBG] pgmap v1153: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T10:30:42.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:41 vm07 bash[23367]: cluster 2026-03-10T10:30:40.578821+0000 mgr.y (mgr.24422) 645 : cluster [DBG] pgmap v1153: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T10:30:42.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:42 vm04 bash[28289]: cluster 2026-03-10T10:30:42.613241+0000 mon.a (mon.0) 3636 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:30:42.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:42 vm04 bash[20742]: cluster 2026-03-10T10:30:42.613241+0000 mon.a (mon.0) 3636 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:30:43.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:42 vm07 bash[23367]: cluster 2026-03-10T10:30:42.613241+0000 mon.a (mon.0) 3636 : cluster [WRN] Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:30:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:30:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:30:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:30:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:43 vm04 bash[28289]: cluster 2026-03-10T10:30:42.579208+0000 mgr.y (mgr.24422) 646 : cluster [DBG] pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T10:30:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:43 vm04 bash[28289]: audit 2026-03-10T10:30:43.226552+0000 mon.a (mon.0) 3637 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:43 vm04 bash[28289]: audit 2026-03-10T10:30:43.226552+0000 mon.a (mon.0) 3637 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:43 vm04 bash[20742]: cluster 2026-03-10T10:30:42.579208+0000 mgr.y (mgr.24422) 646 : cluster [DBG] pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:30:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:43 vm04 bash[20742]: cluster 2026-03-10T10:30:42.579208+0000 mgr.y (mgr.24422) 646 : cluster [DBG] pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:30:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:43 vm04 bash[20742]: audit 2026-03-10T10:30:43.226552+0000 mon.a (mon.0) 3637 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:43 vm04 bash[20742]: audit 2026-03-10T10:30:43.226552+0000 mon.a (mon.0) 3637 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:43 vm07 bash[23367]: cluster 2026-03-10T10:30:42.579208+0000 mgr.y (mgr.24422) 646 : cluster [DBG] pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:30:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:43 vm07 bash[23367]: cluster 2026-03-10T10:30:42.579208+0000 mgr.y (mgr.24422) 646 : cluster [DBG] pgmap v1154: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T10:30:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:43 vm07 bash[23367]: audit 2026-03-10T10:30:43.226552+0000 mon.a (mon.0) 3637 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:44.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:43 vm07 bash[23367]: audit 2026-03-10T10:30:43.226552+0000 mon.a (mon.0) 3637 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:45 vm04 bash[28289]: cluster 2026-03-10T10:30:44.580019+0000 mgr.y (mgr.24422) 647 : cluster [DBG] pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:30:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:45 vm04 bash[28289]: cluster 2026-03-10T10:30:44.580019+0000 mgr.y (mgr.24422) 647 : cluster [DBG] pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:30:45.953 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:45 vm04 bash[20742]: cluster 2026-03-10T10:30:44.580019+0000 mgr.y (mgr.24422) 647 : cluster [DBG] pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:30:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:45 vm04 bash[20742]: cluster 2026-03-10T10:30:44.580019+0000 mgr.y (mgr.24422) 647 : cluster [DBG] pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:30:46.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:45 vm07 bash[23367]: cluster 2026-03-10T10:30:44.580019+0000 mgr.y (mgr.24422) 647 : cluster [DBG] pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:30:46.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:45 vm07 bash[23367]: cluster 2026-03-10T10:30:44.580019+0000 mgr.y (mgr.24422) 647 : cluster [DBG] pgmap v1155: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:30:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:47 vm04 bash[28289]: cluster 2026-03-10T10:30:46.580390+0000 mgr.y (mgr.24422) 648 : cluster [DBG] pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 857 B/s rd, 0 op/s 2026-03-10T10:30:47.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:47 vm04 bash[28289]: cluster 2026-03-10T10:30:46.580390+0000 mgr.y (mgr.24422) 648 : cluster [DBG] pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 857 B/s rd, 0 op/s 2026-03-10T10:30:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:47 vm04 bash[20742]: cluster 2026-03-10T10:30:46.580390+0000 mgr.y (mgr.24422) 648 : cluster [DBG] pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 857 B/s rd, 0 op/s 2026-03-10T10:30:47.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:47 vm04 bash[20742]: cluster 2026-03-10T10:30:46.580390+0000 mgr.y (mgr.24422) 648 : cluster [DBG] pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 857 B/s rd, 0 op/s 2026-03-10T10:30:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:47 vm07 bash[23367]: cluster 2026-03-10T10:30:46.580390+0000 mgr.y (mgr.24422) 648 : cluster [DBG] pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 857 B/s rd, 0 op/s 2026-03-10T10:30:48.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:47 vm07 bash[23367]: cluster 2026-03-10T10:30:46.580390+0000 mgr.y (mgr.24422) 648 : cluster [DBG] pgmap v1156: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 857 B/s rd, 0 op/s 2026-03-10T10:30:49.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:30:48 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:30:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:49 vm07 bash[23367]: cluster 2026-03-10T10:30:48.580920+0000 mgr.y (mgr.24422) 649 : cluster [DBG] pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:49 vm07 bash[23367]: cluster 2026-03-10T10:30:48.580920+0000 mgr.y (mgr.24422) 649 : cluster [DBG] pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-10T10:30:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:49 vm07 bash[23367]: audit 2026-03-10T10:30:48.935095+0000 mgr.y (mgr.24422) 650 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:50.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:49 vm07 bash[23367]: audit 2026-03-10T10:30:48.935095+0000 mgr.y (mgr.24422) 650 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:49 vm04 bash[28289]: cluster 2026-03-10T10:30:48.580920+0000 mgr.y (mgr.24422) 649 : cluster [DBG] pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:49 vm04 bash[28289]: cluster 2026-03-10T10:30:48.580920+0000 mgr.y (mgr.24422) 649 : cluster [DBG] pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:49 vm04 bash[28289]: audit 2026-03-10T10:30:48.935095+0000 mgr.y (mgr.24422) 650 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:49 vm04 bash[28289]: audit 2026-03-10T10:30:48.935095+0000 mgr.y (mgr.24422) 650 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:49 vm04 bash[20742]: cluster 2026-03-10T10:30:48.580920+0000 mgr.y (mgr.24422) 649 : cluster [DBG] pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:49 vm04 bash[20742]: cluster 2026-03-10T10:30:48.580920+0000 mgr.y (mgr.24422) 649 : cluster [DBG] pgmap v1157: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:49 vm04 bash[20742]: audit 2026-03-10T10:30:48.935095+0000 mgr.y (mgr.24422) 650 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:49 vm04 bash[20742]: audit 2026-03-10T10:30:48.935095+0000 mgr.y (mgr.24422) 650 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:30:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:51 vm07 bash[23367]: cluster 2026-03-10T10:30:50.581448+0000 mgr.y (mgr.24422) 651 : cluster [DBG] pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:52.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:51 vm07 bash[23367]: cluster 2026-03-10T10:30:50.581448+0000 mgr.y (mgr.24422) 651 : cluster [DBG] pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:51 vm04 
bash[28289]: cluster 2026-03-10T10:30:50.581448+0000 mgr.y (mgr.24422) 651 : cluster [DBG] pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:51 vm04 bash[28289]: cluster 2026-03-10T10:30:50.581448+0000 mgr.y (mgr.24422) 651 : cluster [DBG] pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:51 vm04 bash[20742]: cluster 2026-03-10T10:30:50.581448+0000 mgr.y (mgr.24422) 651 : cluster [DBG] pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:51 vm04 bash[20742]: cluster 2026-03-10T10:30:50.581448+0000 mgr.y (mgr.24422) 651 : cluster [DBG] pgmap v1158: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:30:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:30:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:30:54.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:53 vm07 bash[23367]: cluster 2026-03-10T10:30:52.581796+0000 mgr.y (mgr.24422) 652 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:54.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:53 vm07 bash[23367]: cluster 2026-03-10T10:30:52.581796+0000 mgr.y (mgr.24422) 652 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:53 vm04 bash[28289]: cluster 2026-03-10T10:30:52.581796+0000 mgr.y (mgr.24422) 652 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:53 vm04 bash[28289]: cluster 2026-03-10T10:30:52.581796+0000 mgr.y (mgr.24422) 652 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:53 vm04 bash[20742]: cluster 2026-03-10T10:30:52.581796+0000 mgr.y (mgr.24422) 652 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:53 vm04 bash[20742]: cluster 2026-03-10T10:30:52.581796+0000 mgr.y (mgr.24422) 652 : cluster [DBG] pgmap v1159: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:55 vm04 bash[28289]: cluster 2026-03-10T10:30:54.582443+0000 mgr.y (mgr.24422) 653 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:55 vm04 bash[28289]: cluster 2026-03-10T10:30:54.582443+0000 mgr.y (mgr.24422) 653 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:55 vm04 bash[20742]: cluster 2026-03-10T10:30:54.582443+0000 mgr.y (mgr.24422) 653 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:55 vm04 bash[20742]: cluster 2026-03-10T10:30:54.582443+0000 mgr.y (mgr.24422) 653 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:55 vm07 bash[23367]: cluster 2026-03-10T10:30:54.582443+0000 mgr.y (mgr.24422) 653 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:55 vm07 bash[23367]: cluster 2026-03-10T10:30:54.582443+0000 mgr.y (mgr.24422) 653 : cluster [DBG] pgmap v1160: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:30:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:57 vm04 bash[28289]: cluster 2026-03-10T10:30:56.582818+0000 mgr.y (mgr.24422) 654 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:57 vm04 bash[28289]: cluster 2026-03-10T10:30:56.582818+0000 mgr.y (mgr.24422) 654 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:57 vm04 bash[20742]: cluster 2026-03-10T10:30:56.582818+0000 mgr.y (mgr.24422) 654 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:57 vm04 bash[20742]: cluster 2026-03-10T10:30:56.582818+0000 mgr.y (mgr.24422) 654 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:58.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:57 vm07 bash[23367]: cluster 2026-03-10T10:30:56.582818+0000 mgr.y (mgr.24422) 654 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:58.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:57 vm07 bash[23367]: cluster 2026-03-10T10:30:56.582818+0000 mgr.y (mgr.24422) 654 : cluster [DBG] pgmap v1161: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:30:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:58 vm04 bash[28289]: audit 2026-03-10T10:30:58.232743+0000 mon.a (mon.0) 3638 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:58 vm04 bash[28289]: audit 2026-03-10T10:30:58.232743+0000 mon.a (mon.0) 3638 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
10:30:58 vm04 bash[20742]: audit 2026-03-10T10:30:58.232743+0000 mon.a (mon.0) 3638 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:58 vm04 bash[20742]: audit 2026-03-10T10:30:58.232743+0000 mon.a (mon.0) 3638 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:59.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:30:58 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:30:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:58 vm07 bash[23367]: audit 2026-03-10T10:30:58.232743+0000 mon.a (mon.0) 3638 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:30:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:58 vm07 bash[23367]: audit 2026-03-10T10:30:58.232743+0000 mon.a (mon.0) 3638 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:59 vm04 bash[28289]: cluster 2026-03-10T10:30:58.583354+0000 mgr.y (mgr.24422) 655 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:59 vm04 bash[28289]: cluster 2026-03-10T10:30:58.583354+0000 mgr.y (mgr.24422) 655 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:59 vm04 bash[28289]: audit 2026-03-10T10:30:58.945837+0000 mgr.y (mgr.24422) 656 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:30:59 vm04 bash[28289]: audit 2026-03-10T10:30:58.945837+0000 mgr.y (mgr.24422) 656 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:59 vm04 bash[20742]: cluster 2026-03-10T10:30:58.583354+0000 mgr.y (mgr.24422) 655 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:59 vm04 bash[20742]: cluster 2026-03-10T10:30:58.583354+0000 mgr.y (mgr.24422) 655 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:59 vm04 bash[20742]: audit 2026-03-10T10:30:58.945837+0000 mgr.y (mgr.24422) 656 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:30:59 vm04 bash[20742]: audit 2026-03-10T10:30:58.945837+0000 mgr.y (mgr.24422) 656 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-10T10:31:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:59 vm07 bash[23367]: cluster 2026-03-10T10:30:58.583354+0000 mgr.y (mgr.24422) 655 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:59 vm07 bash[23367]: cluster 2026-03-10T10:30:58.583354+0000 mgr.y (mgr.24422) 655 : cluster [DBG] pgmap v1162: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:59 vm07 bash[23367]: audit 2026-03-10T10:30:58.945837+0000 mgr.y (mgr.24422) 656 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:30:59 vm07 bash[23367]: audit 2026-03-10T10:30:58.945837+0000 mgr.y (mgr.24422) 656 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:01 vm04 bash[28289]: cluster 2026-03-10T10:31:00.583818+0000 mgr.y (mgr.24422) 657 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:01 vm04 bash[28289]: cluster 2026-03-10T10:31:00.583818+0000 mgr.y (mgr.24422) 657 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:01 vm04 bash[20742]: cluster 2026-03-10T10:31:00.583818+0000 mgr.y (mgr.24422) 657 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:01 vm04 bash[20742]: cluster 2026-03-10T10:31:00.583818+0000 mgr.y (mgr.24422) 657 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:01 vm07 bash[23367]: cluster 2026-03-10T10:31:00.583818+0000 mgr.y (mgr.24422) 657 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:01 vm07 bash[23367]: cluster 2026-03-10T10:31:00.583818+0000 mgr.y (mgr.24422) 657 : cluster [DBG] pgmap v1163: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:31:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:31:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:31:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:03 vm04 bash[28289]: cluster 2026-03-10T10:31:02.584186+0000 mgr.y (mgr.24422) 658 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:03 vm04 bash[28289]: cluster 2026-03-10T10:31:02.584186+0000 mgr.y (mgr.24422) 658 : cluster [DBG] pgmap 
v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:03 vm04 bash[20742]: cluster 2026-03-10T10:31:02.584186+0000 mgr.y (mgr.24422) 658 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:03 vm04 bash[20742]: cluster 2026-03-10T10:31:02.584186+0000 mgr.y (mgr.24422) 658 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:03 vm07 bash[23367]: cluster 2026-03-10T10:31:02.584186+0000 mgr.y (mgr.24422) 658 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:03 vm07 bash[23367]: cluster 2026-03-10T10:31:02.584186+0000 mgr.y (mgr.24422) 658 : cluster [DBG] pgmap v1164: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:05 vm04 bash[28289]: cluster 2026-03-10T10:31:04.584836+0000 mgr.y (mgr.24422) 659 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:05 vm04 bash[28289]: cluster 2026-03-10T10:31:04.584836+0000 mgr.y (mgr.24422) 659 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:05 vm04 bash[20742]: cluster 2026-03-10T10:31:04.584836+0000 mgr.y (mgr.24422) 659 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:05 vm04 bash[20742]: cluster 2026-03-10T10:31:04.584836+0000 mgr.y (mgr.24422) 659 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:06.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:05 vm07 bash[23367]: cluster 2026-03-10T10:31:04.584836+0000 mgr.y (mgr.24422) 659 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:06.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:05 vm07 bash[23367]: cluster 2026-03-10T10:31:04.584836+0000 mgr.y (mgr.24422) 659 : cluster [DBG] pgmap v1165: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:07 vm04 bash[28289]: cluster 2026-03-10T10:31:06.585120+0000 mgr.y (mgr.24422) 660 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:07 vm04 bash[28289]: cluster 2026-03-10T10:31:06.585120+0000 mgr.y (mgr.24422) 660 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 
0 op/s 2026-03-10T10:31:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:07 vm04 bash[20742]: cluster 2026-03-10T10:31:06.585120+0000 mgr.y (mgr.24422) 660 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:07 vm04 bash[20742]: cluster 2026-03-10T10:31:06.585120+0000 mgr.y (mgr.24422) 660 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:07 vm07 bash[23367]: cluster 2026-03-10T10:31:06.585120+0000 mgr.y (mgr.24422) 660 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:07 vm07 bash[23367]: cluster 2026-03-10T10:31:06.585120+0000 mgr.y (mgr.24422) 660 : cluster [DBG] pgmap v1166: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:09.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:31:08 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:31:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:09 vm04 bash[28289]: cluster 2026-03-10T10:31:08.585628+0000 mgr.y (mgr.24422) 661 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:09 vm04 bash[28289]: cluster 2026-03-10T10:31:08.585628+0000 mgr.y (mgr.24422) 661 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:09 vm04 bash[28289]: audit 2026-03-10T10:31:08.956480+0000 mgr.y (mgr.24422) 662 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:09 vm04 bash[28289]: audit 2026-03-10T10:31:08.956480+0000 mgr.y (mgr.24422) 662 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:09 vm04 bash[20742]: cluster 2026-03-10T10:31:08.585628+0000 mgr.y (mgr.24422) 661 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:09 vm04 bash[20742]: cluster 2026-03-10T10:31:08.585628+0000 mgr.y (mgr.24422) 661 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:09 vm04 bash[20742]: audit 2026-03-10T10:31:08.956480+0000 mgr.y (mgr.24422) 662 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:09 vm04 bash[20742]: audit 2026-03-10T10:31:08.956480+0000 mgr.y (mgr.24422) 662 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-10T10:31:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:09 vm07 bash[23367]: cluster 2026-03-10T10:31:08.585628+0000 mgr.y (mgr.24422) 661 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:09 vm07 bash[23367]: cluster 2026-03-10T10:31:08.585628+0000 mgr.y (mgr.24422) 661 : cluster [DBG] pgmap v1167: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:09 vm07 bash[23367]: audit 2026-03-10T10:31:08.956480+0000 mgr.y (mgr.24422) 662 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:09 vm07 bash[23367]: audit 2026-03-10T10:31:08.956480+0000 mgr.y (mgr.24422) 662 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:11 vm04 bash[28289]: cluster 2026-03-10T10:31:10.586123+0000 mgr.y (mgr.24422) 663 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:11 vm04 bash[28289]: cluster 2026-03-10T10:31:10.586123+0000 mgr.y (mgr.24422) 663 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:11 vm04 bash[20742]: cluster 2026-03-10T10:31:10.586123+0000 mgr.y (mgr.24422) 663 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:11 vm04 bash[20742]: cluster 2026-03-10T10:31:10.586123+0000 mgr.y (mgr.24422) 663 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:11 vm07 bash[23367]: cluster 2026-03-10T10:31:10.586123+0000 mgr.y (mgr.24422) 663 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:11 vm07 bash[23367]: cluster 2026-03-10T10:31:10.586123+0000 mgr.y (mgr.24422) 663 : cluster [DBG] pgmap v1168: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:31:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:31:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:31:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:13 vm04 bash[28289]: cluster 2026-03-10T10:31:12.586457+0000 mgr.y (mgr.24422) 664 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:13 vm04 bash[28289]: cluster 2026-03-10T10:31:12.586457+0000 
mgr.y (mgr.24422) 664 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:13 vm04 bash[28289]: audit 2026-03-10T10:31:13.238442+0000 mon.a (mon.0) 3639 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:13 vm04 bash[28289]: audit 2026-03-10T10:31:13.238442+0000 mon.a (mon.0) 3639 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:13 vm04 bash[20742]: cluster 2026-03-10T10:31:12.586457+0000 mgr.y (mgr.24422) 664 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:13 vm04 bash[20742]: cluster 2026-03-10T10:31:12.586457+0000 mgr.y (mgr.24422) 664 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:13 vm04 bash[20742]: audit 2026-03-10T10:31:13.238442+0000 mon.a (mon.0) 3639 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:13 vm04 bash[20742]: audit 2026-03-10T10:31:13.238442+0000 mon.a (mon.0) 3639 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:13 vm07 bash[23367]: cluster 2026-03-10T10:31:12.586457+0000 mgr.y (mgr.24422) 664 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:13 vm07 bash[23367]: cluster 2026-03-10T10:31:12.586457+0000 mgr.y (mgr.24422) 664 : cluster [DBG] pgmap v1169: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:13 vm07 bash[23367]: audit 2026-03-10T10:31:13.238442+0000 mon.a (mon.0) 3639 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:13 vm07 bash[23367]: audit 2026-03-10T10:31:13.238442+0000 mon.a (mon.0) 3639 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:31:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:15 vm04 bash[28289]: cluster 2026-03-10T10:31:14.587090+0000 mgr.y (mgr.24422) 665 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:15 vm04 bash[28289]: cluster 2026-03-10T10:31:14.587090+0000 mgr.y (mgr.24422) 665 : cluster [DBG] pgmap v1170: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:15 vm04 bash[20742]: cluster 2026-03-10T10:31:14.587090+0000 mgr.y (mgr.24422) 665 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:15 vm04 bash[20742]: cluster 2026-03-10T10:31:14.587090+0000 mgr.y (mgr.24422) 665 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:15 vm07 bash[23367]: cluster 2026-03-10T10:31:14.587090+0000 mgr.y (mgr.24422) 665 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:16.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:15 vm07 bash[23367]: cluster 2026-03-10T10:31:14.587090+0000 mgr.y (mgr.24422) 665 : cluster [DBG] pgmap v1170: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:17 vm04 bash[28289]: cluster 2026-03-10T10:31:16.587371+0000 mgr.y (mgr.24422) 666 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:17 vm04 bash[28289]: cluster 2026-03-10T10:31:16.587371+0000 mgr.y (mgr.24422) 666 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:17 vm04 bash[20742]: cluster 2026-03-10T10:31:16.587371+0000 mgr.y (mgr.24422) 666 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:17 vm04 bash[20742]: cluster 2026-03-10T10:31:16.587371+0000 mgr.y (mgr.24422) 666 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:18.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:17 vm07 bash[23367]: cluster 2026-03-10T10:31:16.587371+0000 mgr.y (mgr.24422) 666 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:18.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:17 vm07 bash[23367]: cluster 2026-03-10T10:31:16.587371+0000 mgr.y (mgr.24422) 666 : cluster [DBG] pgmap v1171: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:19.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:31:18 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:31:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:19 vm04 bash[28289]: cluster 2026-03-10T10:31:18.587831+0000 mgr.y (mgr.24422) 667 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:19 vm04 bash[28289]: cluster 2026-03-10T10:31:18.587831+0000 mgr.y 
(mgr.24422) 667 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:19 vm04 bash[28289]: audit 2026-03-10T10:31:18.967054+0000 mgr.y (mgr.24422) 668 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:19 vm04 bash[28289]: audit 2026-03-10T10:31:18.967054+0000 mgr.y (mgr.24422) 668 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:19 vm04 bash[20742]: cluster 2026-03-10T10:31:18.587831+0000 mgr.y (mgr.24422) 667 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:19 vm04 bash[20742]: cluster 2026-03-10T10:31:18.587831+0000 mgr.y (mgr.24422) 667 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:19 vm04 bash[20742]: audit 2026-03-10T10:31:18.967054+0000 mgr.y (mgr.24422) 668 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:19 vm04 bash[20742]: audit 2026-03-10T10:31:18.967054+0000 mgr.y (mgr.24422) 668 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:19 vm07 bash[23367]: cluster 2026-03-10T10:31:18.587831+0000 mgr.y (mgr.24422) 667 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:19 vm07 bash[23367]: cluster 2026-03-10T10:31:18.587831+0000 mgr.y (mgr.24422) 667 : cluster [DBG] pgmap v1172: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:19 vm07 bash[23367]: audit 2026-03-10T10:31:18.967054+0000 mgr.y (mgr.24422) 668 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:20.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:19 vm07 bash[23367]: audit 2026-03-10T10:31:18.967054+0000 mgr.y (mgr.24422) 668 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:31:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:21 vm04 bash[28289]: cluster 2026-03-10T10:31:20.588394+0000 mgr.y (mgr.24422) 669 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:21 vm04 bash[28289]: cluster 2026-03-10T10:31:20.588394+0000 mgr.y (mgr.24422) 669 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:21 vm04 bash[20742]: cluster 2026-03-10T10:31:20.588394+0000 mgr.y (mgr.24422) 669 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:21 vm04 bash[20742]: cluster 2026-03-10T10:31:20.588394+0000 mgr.y (mgr.24422) 669 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:22.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:21 vm07 bash[23367]: cluster 2026-03-10T10:31:20.588394+0000 mgr.y (mgr.24422) 669 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:22.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:21 vm07 bash[23367]: cluster 2026-03-10T10:31:20.588394+0000 mgr.y (mgr.24422) 669 : cluster [DBG] pgmap v1173: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:31:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:31:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:31:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:23 vm04 bash[28289]: cluster 2026-03-10T10:31:22.588690+0000 mgr.y (mgr.24422) 670 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:23 vm04 bash[28289]: cluster 2026-03-10T10:31:22.588690+0000 mgr.y (mgr.24422) 670 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:23 vm04 bash[20742]: cluster 2026-03-10T10:31:22.588690+0000 mgr.y (mgr.24422) 670 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:23 vm04 bash[20742]: cluster 2026-03-10T10:31:22.588690+0000 mgr.y (mgr.24422) 670 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:24.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:23 vm07 bash[23367]: cluster 2026-03-10T10:31:22.588690+0000 mgr.y (mgr.24422) 670 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:24.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:23 vm07 bash[23367]: cluster 2026-03-10T10:31:22.588690+0000 mgr.y (mgr.24422) 670 : cluster [DBG] pgmap v1174: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:31:26.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:25 vm07 bash[23367]: cluster 2026-03-10T10:31:24.589443+0000 mgr.y (mgr.24422) 671 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:31:26.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:25 vm07 bash[23367]: cluster 2026-03-10T10:31:24.589443+0000 mgr.y 
(mgr.24422) 671 : cluster [DBG] pgmap v1175: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:31:28.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:27 vm07 bash[23367]: cluster 2026-03-10T10:31:26.589764+0000 mgr.y (mgr.24422) 672 : cluster [DBG] pgmap v1176: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:31:29.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:31:28 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:31:29.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:28 vm07 bash[23367]: audit 2026-03-10T10:31:28.243933+0000 mon.a (mon.0) 3640 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:31:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:29 vm07 bash[23367]: cluster 2026-03-10T10:31:28.590364+0000 mgr.y (mgr.24422) 673 : cluster [DBG] pgmap v1177: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:31:30.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:29 vm07 bash[23367]: audit 2026-03-10T10:31:28.977655+0000 mgr.y (mgr.24422) 674 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:31:32.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:31 vm07 bash[23367]: cluster 2026-03-10T10:31:30.590907+0000 mgr.y (mgr.24422) 675 : cluster [DBG] pgmap v1178: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:31:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:31:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:31:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:31:34.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:33 vm07 bash[23367]: cluster 2026-03-10T10:31:32.591278+0000 mgr.y (mgr.24422) 676 : cluster [DBG] pgmap v1179: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:31:35.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:35 vm07 bash[23367]: audit 2026-03-10T10:31:34.836298+0000 mon.a (mon.0) 3641 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:31:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:36 vm07 bash[23367]: cluster 2026-03-10T10:31:34.592005+0000 mgr.y (mgr.24422) 677 : cluster [DBG] pgmap v1180: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:31:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:36 vm07 bash[23367]: audit 2026-03-10T10:31:35.148922+0000 mon.a (mon.0) 3642 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:31:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:36 vm07 bash[23367]: audit 2026-03-10T10:31:35.149450+0000 mon.a (mon.0) 3643 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:31:36.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:36 vm07 bash[23367]: audit 2026-03-10T10:31:35.154973+0000 mon.a (mon.0) 3644 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:31:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:38 vm04 bash[28289]: cluster 2026-03-10T10:31:36.592373+0000 mgr.y (mgr.24422) 678 : cluster [DBG] pgmap v1181: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:31:39.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:31:38 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:31:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:40 vm04 bash[28289]: cluster 2026-03-10T10:31:38.592903+0000 mgr.y (mgr.24422) 679 : cluster [DBG] pgmap v1182: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:31:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:40 vm04 bash[28289]: audit 2026-03-10T10:31:38.983276+0000 mgr.y (mgr.24422) 680 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:31:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:42 vm04 bash[28289]: cluster 2026-03-10T10:31:40.593469+0000 mgr.y (mgr.24422) 681 : cluster [DBG] pgmap v1183: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:31:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:31:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:31:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:31:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:44 vm04 bash[28289]: cluster 2026-03-10T10:31:42.593785+0000 mgr.y (mgr.24422) 682 : cluster [DBG] pgmap v1184: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:31:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:44 vm04 bash[28289]: audit 2026-03-10T10:31:43.249535+0000 mon.a (mon.0) 3645 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:31:46.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:46 vm04 bash[28289]: cluster 2026-03-10T10:31:44.594422+0000 mgr.y (mgr.24422) 683 : cluster [DBG] pgmap v1185: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:31:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:48 vm04 bash[28289]: cluster 2026-03-10T10:31:46.594806+0000 mgr.y (mgr.24422) 684 : cluster [DBG] pgmap v1186: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:31:49.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:31:48 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:31:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:50 vm04 bash[28289]: cluster 2026-03-10T10:31:48.595343+0000 mgr.y (mgr.24422) 685 : cluster [DBG] pgmap v1187: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:31:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:50 vm04 bash[28289]: audit 2026-03-10T10:31:48.991090+0000 mgr.y (mgr.24422) 686 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:31:52.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:52 vm04 bash[28289]: cluster 2026-03-10T10:31:50.595864+0000 mgr.y (mgr.24422) 687 : cluster [DBG] pgmap v1188: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:31:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:31:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:31:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:31:54.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:31:54 vm04 bash[20742]: cluster 2026-03-10T10:31:52.596330+0000 mgr.y (mgr.24422) 688 : cluster [DBG] pgmap v1189: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:31:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:56 vm04 bash[28289]: cluster 2026-03-10T10:31:54.596999+0000 mgr.y (mgr.24422) 689 : cluster [DBG] pgmap v1190: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:31:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:31:58 vm04 bash[28289]: cluster 2026-03-10T10:31:56.597338+0000 mgr.y (mgr.24422) 690 : cluster [DBG] pgmap v1191: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:31:59.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:31:58 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:31:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:31:59 vm07 bash[23367]: audit 2026-03-10T10:31:58.255244+0000 mon.a (mon.0) 3646 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:32:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:00 vm04 bash[28289]: cluster 2026-03-10T10:31:58.598123+0000 mgr.y (mgr.24422) 691 : cluster [DBG] pgmap v1192: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:00 vm04 bash[28289]: audit 2026-03-10T10:31:58.999148+0000 mgr.y (mgr.24422) 692 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:32:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:02 vm04 bash[28289]: cluster 2026-03-10T10:32:00.598753+0000 mgr.y (mgr.24422) 693 : cluster [DBG] pgmap v1193: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:32:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:32:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:32:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:04 vm04 bash[28289]: cluster 2026-03-10T10:32:02.599127+0000 mgr.y (mgr.24422) 694 : cluster [DBG] pgmap v1194: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:06 vm04 bash[28289]: cluster 2026-03-10T10:32:04.599797+0000 mgr.y (mgr.24422) 695 : cluster [DBG] pgmap v1195: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:08 vm04 bash[28289]: cluster 2026-03-10T10:32:06.600123+0000 mgr.y (mgr.24422) 696 : cluster [DBG] pgmap v1196: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:09.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:32:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:32:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:10 vm04 bash[28289]: cluster 2026-03-10T10:32:08.600677+0000 mgr.y (mgr.24422) 697 : cluster [DBG] pgmap v1197: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:10 vm04 bash[28289]: audit 2026-03-10T10:32:09.009800+0000 mgr.y (mgr.24422) 698 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:32:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:12 vm04 bash[28289]: cluster 2026-03-10T10:32:10.601181+0000 mgr.y (mgr.24422) 699 : cluster [DBG] pgmap v1198: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:32:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:32:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:32:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:14 vm04 bash[28289]: cluster 2026-03-10T10:32:12.601492+0000 mgr.y (mgr.24422) 700 : cluster [DBG] pgmap v1199: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:14 vm04 bash[28289]: audit 2026-03-10T10:32:13.260781+0000 mon.a (mon.0) 3647 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:32:16.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:16 vm04 bash[28289]: cluster 2026-03-10T10:32:14.602107+0000 mgr.y (mgr.24422) 701 : cluster [DBG] pgmap v1200: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:18.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:18 vm04 bash[28289]: cluster 2026-03-10T10:32:16.602437+0000 mgr.y (mgr.24422) 702 : cluster [DBG] pgmap v1201: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:19.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:32:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:32:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:20 vm04 bash[28289]: cluster 2026-03-10T10:32:18.602987+0000 mgr.y (mgr.24422) 703 : cluster [DBG] pgmap v1202: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:20 vm04 bash[28289]: audit 2026-03-10T10:32:19.020382+0000 mgr.y (mgr.24422) 704 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:32:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:22 vm04 bash[28289]: cluster 2026-03-10T10:32:20.603610+0000 mgr.y (mgr.24422) 705 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:22 vm04 bash[20742]: cluster 2026-03-10T10:32:20.603610+0000
mgr.y (mgr.24422) 705 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:32:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:22 vm04 bash[20742]: cluster 2026-03-10T10:32:20.603610+0000 mgr.y (mgr.24422) 705 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:32:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:22 vm07 bash[23367]: cluster 2026-03-10T10:32:20.603610+0000 mgr.y (mgr.24422) 705 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:32:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:22 vm07 bash[23367]: cluster 2026-03-10T10:32:20.603610+0000 mgr.y (mgr.24422) 705 : cluster [DBG] pgmap v1203: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:32:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:32:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:32:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:32:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:24 vm04 bash[28289]: cluster 2026-03-10T10:32:22.603913+0000 mgr.y (mgr.24422) 706 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:24 vm04 bash[28289]: cluster 2026-03-10T10:32:22.603913+0000 mgr.y (mgr.24422) 706 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:24 vm04 bash[20742]: cluster 2026-03-10T10:32:22.603913+0000 mgr.y (mgr.24422) 706 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:24 vm04 bash[20742]: cluster 2026-03-10T10:32:22.603913+0000 mgr.y (mgr.24422) 706 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:24 vm07 bash[23367]: cluster 2026-03-10T10:32:22.603913+0000 mgr.y (mgr.24422) 706 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:24 vm07 bash[23367]: cluster 2026-03-10T10:32:22.603913+0000 mgr.y (mgr.24422) 706 : cluster [DBG] pgmap v1204: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:26 vm04 bash[28289]: cluster 2026-03-10T10:32:24.604570+0000 mgr.y (mgr.24422) 707 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:32:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:26 vm04 bash[28289]: cluster 2026-03-10T10:32:24.604570+0000 mgr.y (mgr.24422) 707 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:32:26.453 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:26 vm04 bash[20742]: cluster 2026-03-10T10:32:24.604570+0000 mgr.y (mgr.24422) 707 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:32:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:26 vm04 bash[20742]: cluster 2026-03-10T10:32:24.604570+0000 mgr.y (mgr.24422) 707 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:32:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:26 vm07 bash[23367]: cluster 2026-03-10T10:32:24.604570+0000 mgr.y (mgr.24422) 707 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:32:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:26 vm07 bash[23367]: cluster 2026-03-10T10:32:24.604570+0000 mgr.y (mgr.24422) 707 : cluster [DBG] pgmap v1205: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:32:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:28 vm04 bash[28289]: cluster 2026-03-10T10:32:26.604875+0000 mgr.y (mgr.24422) 708 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:28.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:28 vm04 bash[28289]: cluster 2026-03-10T10:32:26.604875+0000 mgr.y (mgr.24422) 708 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:28 vm04 bash[20742]: cluster 2026-03-10T10:32:26.604875+0000 mgr.y (mgr.24422) 708 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:28 vm04 bash[20742]: cluster 2026-03-10T10:32:26.604875+0000 mgr.y (mgr.24422) 708 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:28 vm07 bash[23367]: cluster 2026-03-10T10:32:26.604875+0000 mgr.y (mgr.24422) 708 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:28 vm07 bash[23367]: cluster 2026-03-10T10:32:26.604875+0000 mgr.y (mgr.24422) 708 : cluster [DBG] pgmap v1206: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:29.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:32:29 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:32:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:29 vm07 bash[23367]: audit 2026-03-10T10:32:28.273691+0000 mon.a (mon.0) 3648 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:32:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:29 vm07 bash[23367]: audit 2026-03-10T10:32:28.273691+0000 mon.a (mon.0) 3648 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:32:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:29 vm04 bash[28289]: audit 2026-03-10T10:32:28.273691+0000 mon.a (mon.0) 3648 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:32:29.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:29 vm04 bash[28289]: audit 2026-03-10T10:32:28.273691+0000 mon.a (mon.0) 3648 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:32:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:29 vm04 bash[20742]: audit 2026-03-10T10:32:28.273691+0000 mon.a (mon.0) 3648 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:32:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:29 vm04 bash[20742]: audit 2026-03-10T10:32:28.273691+0000 mon.a (mon.0) 3648 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:32:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:30 vm07 bash[23367]: cluster 2026-03-10T10:32:28.605379+0000 mgr.y (mgr.24422) 709 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:30 vm07 bash[23367]: cluster 2026-03-10T10:32:28.605379+0000 mgr.y (mgr.24422) 709 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:30 vm07 bash[23367]: audit 2026-03-10T10:32:29.031032+0000 mgr.y (mgr.24422) 710 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:32:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:30 vm07 bash[23367]: audit 2026-03-10T10:32:29.031032+0000 mgr.y (mgr.24422) 710 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:32:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:30 vm04 bash[20742]: cluster 2026-03-10T10:32:28.605379+0000 mgr.y (mgr.24422) 709 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:30 vm04 bash[20742]: cluster 2026-03-10T10:32:28.605379+0000 mgr.y (mgr.24422) 709 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:32:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:30 vm04 bash[20742]: audit 2026-03-10T10:32:29.031032+0000 mgr.y (mgr.24422) 710 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:32:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:30 vm04 bash[20742]: audit 2026-03-10T10:32:29.031032+0000 mgr.y (mgr.24422) 710 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:32:30.703 
2026-03-10T10:32:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:30 vm04 bash[28289]: cluster 2026-03-10T10:32:28.605379+0000 mgr.y (mgr.24422) 709 : cluster [DBG] pgmap v1207: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:30 vm04 bash[28289]: audit 2026-03-10T10:32:29.031032+0000 mgr.y (mgr.24422) 710 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:32:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:32 vm07 bash[23367]: cluster 2026-03-10T10:32:30.605909+0000 mgr.y (mgr.24422) 711 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:32 vm04 bash[20742]: cluster 2026-03-10T10:32:30.605909+0000 mgr.y (mgr.24422) 711 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:32 vm04 bash[28289]: cluster 2026-03-10T10:32:30.605909+0000 mgr.y (mgr.24422) 711 : cluster [DBG] pgmap v1208: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:32:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:32:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:32:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:34 vm07 bash[23367]: cluster 2026-03-10T10:32:32.606244+0000 mgr.y (mgr.24422) 712 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:34 vm04 bash[28289]: cluster 2026-03-10T10:32:32.606244+0000 mgr.y (mgr.24422) 712 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:34 vm04 bash[20742]: cluster 2026-03-10T10:32:32.606244+0000 mgr.y (mgr.24422) 712 : cluster [DBG] pgmap v1209: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:35.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:35 vm07 bash[23367]: audit 2026-03-10T10:32:35.193325+0000 mon.a (mon.0) 3649 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:32:35.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:35 vm04 bash[28289]: audit 2026-03-10T10:32:35.193325+0000 mon.a (mon.0) 3649 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:32:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:35 vm04 bash[20742]: audit 2026-03-10T10:32:35.193325+0000 mon.a (mon.0) 3649 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:32:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:36 vm07 bash[23367]: cluster 2026-03-10T10:32:34.606835+0000 mgr.y (mgr.24422) 713 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:36 vm07 bash[23367]: audit 2026-03-10T10:32:35.511949+0000 mon.a (mon.0) 3650 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:32:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:36 vm07 bash[23367]: audit 2026-03-10T10:32:35.512481+0000 mon.a (mon.0) 3651 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:32:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:36 vm07 bash[23367]: audit 2026-03-10T10:32:35.518984+0000 mon.a (mon.0) 3652 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:32:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:36 vm04 bash[28289]: cluster 2026-03-10T10:32:34.606835+0000 mgr.y (mgr.24422) 713 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:36 vm04 bash[28289]: audit 2026-03-10T10:32:35.511949+0000 mon.a (mon.0) 3650 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:32:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:36 vm04 bash[28289]: audit 2026-03-10T10:32:35.512481+0000 mon.a (mon.0) 3651 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:32:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:36 vm04 bash[28289]: audit 2026-03-10T10:32:35.518984+0000 mon.a (mon.0) 3652 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:32:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:36 vm04 bash[20742]: cluster 2026-03-10T10:32:34.606835+0000 mgr.y (mgr.24422) 713 : cluster [DBG] pgmap v1210: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:36 vm04 bash[20742]: audit 2026-03-10T10:32:35.511949+0000 mon.a (mon.0) 3650 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:32:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:36 vm04 bash[20742]: audit 2026-03-10T10:32:35.512481+0000 mon.a (mon.0) 3651 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:32:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:36 vm04 bash[20742]: audit 2026-03-10T10:32:35.518984+0000 mon.a (mon.0) 3652 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:32:38.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:38 vm07 bash[23367]: cluster 2026-03-10T10:32:36.607142+0000 mgr.y (mgr.24422) 714 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:38.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:38 vm04 bash[28289]: cluster 2026-03-10T10:32:36.607142+0000 mgr.y (mgr.24422) 714 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:38.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:38 vm04 bash[20742]: cluster 2026-03-10T10:32:36.607142+0000 mgr.y (mgr.24422) 714 : cluster [DBG] pgmap v1211: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:39.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:32:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:32:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:40 vm07 bash[23367]: cluster 2026-03-10T10:32:38.607638+0000 mgr.y (mgr.24422) 715 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:40 vm07 bash[23367]: audit 2026-03-10T10:32:39.041558+0000 mgr.y (mgr.24422) 716 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:32:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:40 vm04 bash[28289]: cluster 2026-03-10T10:32:38.607638+0000 mgr.y (mgr.24422) 715 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:40 vm04 bash[28289]: audit 2026-03-10T10:32:39.041558+0000 mgr.y (mgr.24422) 716 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:32:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:40 vm04 bash[20742]: cluster 2026-03-10T10:32:38.607638+0000 mgr.y (mgr.24422) 715 : cluster [DBG] pgmap v1212: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:40 vm04 bash[20742]: audit 2026-03-10T10:32:39.041558+0000 mgr.y (mgr.24422) 716 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:32:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:42 vm07 bash[23367]: cluster 2026-03-10T10:32:40.608210+0000 mgr.y (mgr.24422) 717 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:42 vm04 bash[28289]: cluster 2026-03-10T10:32:40.608210+0000 mgr.y (mgr.24422) 717 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:42 vm04 bash[20742]: cluster 2026-03-10T10:32:40.608210+0000 mgr.y (mgr.24422) 717 : cluster [DBG] pgmap v1213: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:32:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:32:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:32:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:44 vm07 bash[23367]: cluster 2026-03-10T10:32:42.608568+0000 mgr.y (mgr.24422) 718 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:44 vm07 bash[23367]: audit 2026-03-10T10:32:43.279957+0000 mon.a (mon.0) 3653 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:32:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:44 vm04 bash[28289]: cluster 2026-03-10T10:32:42.608568+0000 mgr.y (mgr.24422) 718 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:44 vm04 bash[28289]: audit 2026-03-10T10:32:43.279957+0000 mon.a (mon.0) 3653 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:32:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:44 vm04 bash[20742]: cluster 2026-03-10T10:32:42.608568+0000 mgr.y (mgr.24422) 718 : cluster [DBG] pgmap v1214: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:44 vm04 bash[20742]: audit 2026-03-10T10:32:43.279957+0000 mon.a (mon.0) 3653 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:32:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:46 vm04 bash[28289]: cluster 2026-03-10T10:32:44.609193+0000 mgr.y (mgr.24422) 719 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:46 vm04 bash[20742]: cluster 2026-03-10T10:32:44.609193+0000 mgr.y (mgr.24422) 719 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:46 vm07 bash[23367]: cluster 2026-03-10T10:32:44.609193+0000 mgr.y (mgr.24422) 719 : cluster [DBG] pgmap v1215: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:48 vm04 bash[28289]: cluster 2026-03-10T10:32:46.609488+0000 mgr.y (mgr.24422) 720 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:48 vm04 bash[20742]: cluster 2026-03-10T10:32:46.609488+0000 mgr.y (mgr.24422) 720 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:48.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:48 vm07 bash[23367]: cluster 2026-03-10T10:32:46.609488+0000 mgr.y (mgr.24422) 720 : cluster [DBG] pgmap v1216: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:49.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:32:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:32:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:50 vm04 bash[28289]: cluster 2026-03-10T10:32:48.609987+0000 mgr.y (mgr.24422) 721 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:50 vm04 bash[28289]: audit 2026-03-10T10:32:49.046553+0000 mgr.y (mgr.24422) 722 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:32:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:50 vm04 bash[20742]: cluster 2026-03-10T10:32:48.609987+0000 mgr.y (mgr.24422) 721 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:50 vm04 bash[20742]: audit 2026-03-10T10:32:49.046553+0000 mgr.y (mgr.24422) 722 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:32:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:50 vm07 bash[23367]: cluster 2026-03-10T10:32:48.609987+0000 mgr.y (mgr.24422) 721 : cluster [DBG] pgmap v1217: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:50 vm07 bash[23367]: audit 2026-03-10T10:32:49.046553+0000 mgr.y (mgr.24422) 722 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:32:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:52 vm04 bash[28289]: cluster 2026-03-10T10:32:50.610490+0000 mgr.y (mgr.24422) 723 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:52 vm04 bash[20742]: cluster 2026-03-10T10:32:50.610490+0000 mgr.y (mgr.24422) 723 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:52.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:52 vm07 bash[23367]: cluster 2026-03-10T10:32:50.610490+0000 mgr.y (mgr.24422) 723 : cluster [DBG] pgmap v1218: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:32:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:32:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:32:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:54 vm04 bash[28289]: cluster 2026-03-10T10:32:52.610794+0000 mgr.y (mgr.24422) 724 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:54 vm04 bash[20742]: cluster 2026-03-10T10:32:52.610794+0000 mgr.y (mgr.24422) 724 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:54 vm07 bash[23367]: cluster 2026-03-10T10:32:52.610794+0000 mgr.y (mgr.24422) 724 : cluster [DBG] pgmap v1219: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:56 vm04 bash[28289]: cluster 2026-03-10T10:32:54.611403+0000 mgr.y (mgr.24422) 725 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:56.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:56 vm04 bash[20742]: cluster 2026-03-10T10:32:54.611403+0000 mgr.y (mgr.24422) 725 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:56.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:56 vm07 bash[23367]: cluster 2026-03-10T10:32:54.611403+0000 mgr.y (mgr.24422) 725 : cluster [DBG] pgmap v1220: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:32:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:58 vm04 bash[28289]: cluster 2026-03-10T10:32:56.611706+0000 mgr.y (mgr.24422) 726 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:32:58 vm04 bash[28289]: audit 2026-03-10T10:32:58.285762+0000 mon.a (mon.0) 3654 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:32:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:58 vm04 bash[20742]: cluster 2026-03-10T10:32:56.611706+0000 mgr.y (mgr.24422) 726 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:32:58 vm04 bash[20742]: audit 2026-03-10T10:32:58.285762+0000 mon.a (mon.0) 3654 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:32:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:58 vm07 bash[23367]: cluster 2026-03-10T10:32:56.611706+0000 mgr.y (mgr.24422) 726 : cluster [DBG] pgmap v1221: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:32:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:32:58 vm07 bash[23367]: audit 2026-03-10T10:32:58.285762+0000 mon.a (mon.0) 3654 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:32:59.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:32:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:33:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:00 vm04 bash[28289]: cluster 2026-03-10T10:32:58.612284+0000 mgr.y (mgr.24422) 727 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:00 vm04 bash[28289]: audit 2026-03-10T10:32:59.051600+0000 mgr.y (mgr.24422) 728 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:33:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:00 vm04 bash[20742]: cluster 2026-03-10T10:32:58.612284+0000 mgr.y (mgr.24422) 727 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:00 vm04 bash[20742]: audit 2026-03-10T10:32:59.051600+0000 mgr.y (mgr.24422) 728 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:33:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:00 vm07 bash[23367]: cluster 2026-03-10T10:32:58.612284+0000 mgr.y (mgr.24422) 727 : cluster [DBG] pgmap v1222: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:00 vm07 bash[23367]: audit 2026-03-10T10:32:59.051600+0000 mgr.y (mgr.24422) 728 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:33:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:02 vm04 bash[28289]: cluster 2026-03-10T10:33:00.612865+0000 mgr.y (mgr.24422) 729 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:02 vm04 bash[20742]: cluster 2026-03-10T10:33:00.612865+0000 mgr.y (mgr.24422) 729 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:02 vm07 bash[23367]: cluster 2026-03-10T10:33:00.612865+0000 mgr.y (mgr.24422) 729 : cluster [DBG] pgmap v1223: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:33:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:33:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:33:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:04 vm04 bash[28289]: cluster 2026-03-10T10:33:02.613126+0000 mgr.y (mgr.24422) 730 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:04 vm04 bash[20742]: cluster 2026-03-10T10:33:02.613126+0000 mgr.y (mgr.24422) 730 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:04 vm07 bash[23367]: cluster 2026-03-10T10:33:02.613126+0000 mgr.y (mgr.24422) 730 : cluster [DBG] pgmap v1224: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:06.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:06 vm04 bash[28289]: cluster 2026-03-10T10:33:04.613701+0000 mgr.y (mgr.24422) 731 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:06 vm04 bash[20742]: cluster 2026-03-10T10:33:04.613701+0000 mgr.y (mgr.24422) 731 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:06 vm07 bash[23367]: cluster 2026-03-10T10:33:04.613701+0000 mgr.y (mgr.24422) 731 : cluster [DBG] pgmap v1225: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:08.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:08 vm04 bash[28289]: cluster 2026-03-10T10:33:06.613994+0000 mgr.y (mgr.24422) 732 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:08.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:08 vm04 bash[20742]: cluster 2026-03-10T10:33:06.613994+0000 mgr.y (mgr.24422) 732 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:08.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:08 vm04 bash[20742]:
cluster 2026-03-10T10:33:06.613994+0000 mgr.y (mgr.24422) 732 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:08.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:08 vm07 bash[23367]: cluster 2026-03-10T10:33:06.613994+0000 mgr.y (mgr.24422) 732 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:08.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:08 vm07 bash[23367]: cluster 2026-03-10T10:33:06.613994+0000 mgr.y (mgr.24422) 732 : cluster [DBG] pgmap v1226: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:09.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:33:09 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:33:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:10 vm04 bash[28289]: cluster 2026-03-10T10:33:08.614495+0000 mgr.y (mgr.24422) 733 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:10 vm04 bash[28289]: cluster 2026-03-10T10:33:08.614495+0000 mgr.y (mgr.24422) 733 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:10 vm04 bash[28289]: audit 2026-03-10T10:33:09.058802+0000 mgr.y (mgr.24422) 734 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:10 vm04 bash[28289]: audit 2026-03-10T10:33:09.058802+0000 mgr.y (mgr.24422) 734 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:10 vm04 bash[20742]: cluster 2026-03-10T10:33:08.614495+0000 mgr.y (mgr.24422) 733 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:10 vm04 bash[20742]: cluster 2026-03-10T10:33:08.614495+0000 mgr.y (mgr.24422) 733 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:10 vm04 bash[20742]: audit 2026-03-10T10:33:09.058802+0000 mgr.y (mgr.24422) 734 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:10 vm04 bash[20742]: audit 2026-03-10T10:33:09.058802+0000 mgr.y (mgr.24422) 734 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:10.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:10 vm07 bash[23367]: cluster 2026-03-10T10:33:08.614495+0000 mgr.y (mgr.24422) 733 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:10.766 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:10 vm07 bash[23367]: cluster 2026-03-10T10:33:08.614495+0000 mgr.y (mgr.24422) 733 : cluster [DBG] pgmap v1227: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:10.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:10 vm07 bash[23367]: audit 2026-03-10T10:33:09.058802+0000 mgr.y (mgr.24422) 734 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:10.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:10 vm07 bash[23367]: audit 2026-03-10T10:33:09.058802+0000 mgr.y (mgr.24422) 734 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:12 vm04 bash[28289]: cluster 2026-03-10T10:33:10.615147+0000 mgr.y (mgr.24422) 735 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:12 vm04 bash[28289]: cluster 2026-03-10T10:33:10.615147+0000 mgr.y (mgr.24422) 735 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:12.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:12 vm04 bash[20742]: cluster 2026-03-10T10:33:10.615147+0000 mgr.y (mgr.24422) 735 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:12.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:12 vm04 bash[20742]: cluster 2026-03-10T10:33:10.615147+0000 mgr.y (mgr.24422) 735 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:12.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:12 vm07 bash[23367]: cluster 2026-03-10T10:33:10.615147+0000 mgr.y (mgr.24422) 735 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:12.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:12 vm07 bash[23367]: cluster 2026-03-10T10:33:10.615147+0000 mgr.y (mgr.24422) 735 : cluster [DBG] pgmap v1228: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:13.407 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:33:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:33:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:33:13.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:13 vm04 bash[28289]: audit 2026-03-10T10:33:13.291315+0000 mon.a (mon.0) 3655 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:13 vm04 bash[28289]: audit 2026-03-10T10:33:13.291315+0000 mon.a (mon.0) 3655 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:13.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:13 vm04 bash[20742]: audit 2026-03-10T10:33:13.291315+0000 mon.a (mon.0) 3655 : audit [DBG] from='mgr.24422 
192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:13.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:13 vm04 bash[20742]: audit 2026-03-10T10:33:13.291315+0000 mon.a (mon.0) 3655 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:13.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:13 vm07 bash[23367]: audit 2026-03-10T10:33:13.291315+0000 mon.a (mon.0) 3655 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:13.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:13 vm07 bash[23367]: audit 2026-03-10T10:33:13.291315+0000 mon.a (mon.0) 3655 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:14.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:14 vm04 bash[20742]: cluster 2026-03-10T10:33:12.615494+0000 mgr.y (mgr.24422) 736 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:14 vm04 bash[20742]: cluster 2026-03-10T10:33:12.615494+0000 mgr.y (mgr.24422) 736 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:14 vm04 bash[28289]: cluster 2026-03-10T10:33:12.615494+0000 mgr.y (mgr.24422) 736 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:14 vm04 bash[28289]: cluster 2026-03-10T10:33:12.615494+0000 mgr.y (mgr.24422) 736 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:14.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:14 vm07 bash[23367]: cluster 2026-03-10T10:33:12.615494+0000 mgr.y (mgr.24422) 736 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:14.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:14 vm07 bash[23367]: cluster 2026-03-10T10:33:12.615494+0000 mgr.y (mgr.24422) 736 : cluster [DBG] pgmap v1229: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:16 vm04 bash[28289]: cluster 2026-03-10T10:33:14.616183+0000 mgr.y (mgr.24422) 737 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:16 vm04 bash[28289]: cluster 2026-03-10T10:33:14.616183+0000 mgr.y (mgr.24422) 737 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:16 vm04 bash[20742]: cluster 2026-03-10T10:33:14.616183+0000 mgr.y (mgr.24422) 737 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:16 vm04 bash[20742]: cluster 2026-03-10T10:33:14.616183+0000 mgr.y (mgr.24422) 737 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:16 vm07 bash[23367]: cluster 2026-03-10T10:33:14.616183+0000 mgr.y (mgr.24422) 737 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:16 vm07 bash[23367]: cluster 2026-03-10T10:33:14.616183+0000 mgr.y (mgr.24422) 737 : cluster [DBG] pgmap v1230: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:17.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:17 vm07 bash[23367]: cluster 2026-03-10T10:33:16.616530+0000 mgr.y (mgr.24422) 738 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:17.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:17 vm07 bash[23367]: cluster 2026-03-10T10:33:16.616530+0000 mgr.y (mgr.24422) 738 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:17 vm04 bash[28289]: cluster 2026-03-10T10:33:16.616530+0000 mgr.y (mgr.24422) 738 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:17.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:17 vm04 bash[28289]: cluster 2026-03-10T10:33:16.616530+0000 mgr.y (mgr.24422) 738 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:17 vm04 bash[20742]: cluster 2026-03-10T10:33:16.616530+0000 mgr.y (mgr.24422) 738 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:17 vm04 bash[20742]: cluster 2026-03-10T10:33:16.616530+0000 mgr.y (mgr.24422) 738 : cluster [DBG] pgmap v1231: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:19.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:33:19 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:33:19.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:19 vm04 bash[28289]: cluster 2026-03-10T10:33:18.617110+0000 mgr.y (mgr.24422) 739 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:19 vm04 bash[28289]: cluster 2026-03-10T10:33:18.617110+0000 mgr.y (mgr.24422) 739 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:19 vm04 bash[28289]: audit 2026-03-10T10:33:19.063566+0000 mgr.y (mgr.24422) 740 : audit [DBG] from='client.24406 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:19.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:19 vm04 bash[28289]: audit 2026-03-10T10:33:19.063566+0000 mgr.y (mgr.24422) 740 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:19 vm04 bash[20742]: cluster 2026-03-10T10:33:18.617110+0000 mgr.y (mgr.24422) 739 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:19 vm04 bash[20742]: cluster 2026-03-10T10:33:18.617110+0000 mgr.y (mgr.24422) 739 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:19 vm04 bash[20742]: audit 2026-03-10T10:33:19.063566+0000 mgr.y (mgr.24422) 740 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:19 vm04 bash[20742]: audit 2026-03-10T10:33:19.063566+0000 mgr.y (mgr.24422) 740 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:19 vm07 bash[23367]: cluster 2026-03-10T10:33:18.617110+0000 mgr.y (mgr.24422) 739 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:19 vm07 bash[23367]: cluster 2026-03-10T10:33:18.617110+0000 mgr.y (mgr.24422) 739 : cluster [DBG] pgmap v1232: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:19 vm07 bash[23367]: audit 2026-03-10T10:33:19.063566+0000 mgr.y (mgr.24422) 740 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:19 vm07 bash[23367]: audit 2026-03-10T10:33:19.063566+0000 mgr.y (mgr.24422) 740 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:21.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:21 vm04 bash[28289]: cluster 2026-03-10T10:33:20.617590+0000 mgr.y (mgr.24422) 741 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:21.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:21 vm04 bash[28289]: cluster 2026-03-10T10:33:20.617590+0000 mgr.y (mgr.24422) 741 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:21 vm04 bash[20742]: cluster 2026-03-10T10:33:20.617590+0000 mgr.y (mgr.24422) 741 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T10:33:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:21 vm04 bash[20742]: cluster 2026-03-10T10:33:20.617590+0000 mgr.y (mgr.24422) 741 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:21 vm07 bash[23367]: cluster 2026-03-10T10:33:20.617590+0000 mgr.y (mgr.24422) 741 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:21 vm07 bash[23367]: cluster 2026-03-10T10:33:20.617590+0000 mgr.y (mgr.24422) 741 : cluster [DBG] pgmap v1233: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:23.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:33:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:33:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:33:23.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:23 vm04 bash[28289]: cluster 2026-03-10T10:33:22.617843+0000 mgr.y (mgr.24422) 742 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:23.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:23 vm04 bash[28289]: cluster 2026-03-10T10:33:22.617843+0000 mgr.y (mgr.24422) 742 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:23 vm04 bash[20742]: cluster 2026-03-10T10:33:22.617843+0000 mgr.y (mgr.24422) 742 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:23 vm04 bash[20742]: cluster 2026-03-10T10:33:22.617843+0000 mgr.y (mgr.24422) 742 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:23 vm07 bash[23367]: cluster 2026-03-10T10:33:22.617843+0000 mgr.y (mgr.24422) 742 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:23 vm07 bash[23367]: cluster 2026-03-10T10:33:22.617843+0000 mgr.y (mgr.24422) 742 : cluster [DBG] pgmap v1234: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:25 vm04 bash[28289]: cluster 2026-03-10T10:33:24.618362+0000 mgr.y (mgr.24422) 743 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:25 vm04 bash[28289]: cluster 2026-03-10T10:33:24.618362+0000 mgr.y (mgr.24422) 743 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:25 vm04 bash[20742]: cluster 2026-03-10T10:33:24.618362+0000 mgr.y (mgr.24422) 743 : cluster [DBG] pgmap 
v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:25 vm04 bash[20742]: cluster 2026-03-10T10:33:24.618362+0000 mgr.y (mgr.24422) 743 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:25 vm07 bash[23367]: cluster 2026-03-10T10:33:24.618362+0000 mgr.y (mgr.24422) 743 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:25 vm07 bash[23367]: cluster 2026-03-10T10:33:24.618362+0000 mgr.y (mgr.24422) 743 : cluster [DBG] pgmap v1235: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:27 vm04 bash[28289]: cluster 2026-03-10T10:33:26.618665+0000 mgr.y (mgr.24422) 744 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:27.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:27 vm04 bash[28289]: cluster 2026-03-10T10:33:26.618665+0000 mgr.y (mgr.24422) 744 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:27 vm04 bash[20742]: cluster 2026-03-10T10:33:26.618665+0000 mgr.y (mgr.24422) 744 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:27 vm04 bash[20742]: cluster 2026-03-10T10:33:26.618665+0000 mgr.y (mgr.24422) 744 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:27 vm07 bash[23367]: cluster 2026-03-10T10:33:26.618665+0000 mgr.y (mgr.24422) 744 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:27 vm07 bash[23367]: cluster 2026-03-10T10:33:26.618665+0000 mgr.y (mgr.24422) 744 : cluster [DBG] pgmap v1236: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:28 vm04 bash[28289]: audit 2026-03-10T10:33:28.297179+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:28.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:28 vm04 bash[28289]: audit 2026-03-10T10:33:28.297179+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:28 vm04 bash[20742]: audit 2026-03-10T10:33:28.297179+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-10T10:33:28.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:28 vm04 bash[20742]: audit 2026-03-10T10:33:28.297179+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:28 vm07 bash[23367]: audit 2026-03-10T10:33:28.297179+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:28 vm07 bash[23367]: audit 2026-03-10T10:33:28.297179+0000 mon.a (mon.0) 3656 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:33:29.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:33:29 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:33:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:29 vm04 bash[20742]: cluster 2026-03-10T10:33:28.619200+0000 mgr.y (mgr.24422) 745 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:29 vm04 bash[20742]: cluster 2026-03-10T10:33:28.619200+0000 mgr.y (mgr.24422) 745 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:29 vm04 bash[20742]: audit 2026-03-10T10:33:29.072649+0000 mgr.y (mgr.24422) 746 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:29 vm04 bash[20742]: audit 2026-03-10T10:33:29.072649+0000 mgr.y (mgr.24422) 746 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:29 vm04 bash[28289]: cluster 2026-03-10T10:33:28.619200+0000 mgr.y (mgr.24422) 745 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:29 vm04 bash[28289]: cluster 2026-03-10T10:33:28.619200+0000 mgr.y (mgr.24422) 745 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:29 vm04 bash[28289]: audit 2026-03-10T10:33:29.072649+0000 mgr.y (mgr.24422) 746 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:29 vm04 bash[28289]: audit 2026-03-10T10:33:29.072649+0000 mgr.y (mgr.24422) 746 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:29 vm07 bash[23367]: cluster 2026-03-10T10:33:28.619200+0000 mgr.y (mgr.24422) 745 : cluster [DBG] pgmap v1237: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:29 vm07 bash[23367]: cluster 2026-03-10T10:33:28.619200+0000 mgr.y (mgr.24422) 745 : cluster [DBG] pgmap v1237: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:29 vm07 bash[23367]: audit 2026-03-10T10:33:29.072649+0000 mgr.y (mgr.24422) 746 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:29 vm07 bash[23367]: audit 2026-03-10T10:33:29.072649+0000 mgr.y (mgr.24422) 746 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:31 vm07 bash[23367]: cluster 2026-03-10T10:33:30.619746+0000 mgr.y (mgr.24422) 747 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:31 vm07 bash[23367]: cluster 2026-03-10T10:33:30.619746+0000 mgr.y (mgr.24422) 747 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:31 vm04 bash[28289]: cluster 2026-03-10T10:33:30.619746+0000 mgr.y (mgr.24422) 747 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:31 vm04 bash[28289]: cluster 2026-03-10T10:33:30.619746+0000 mgr.y (mgr.24422) 747 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:31 vm04 bash[20742]: cluster 2026-03-10T10:33:30.619746+0000 mgr.y (mgr.24422) 747 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:31 vm04 bash[20742]: cluster 2026-03-10T10:33:30.619746+0000 mgr.y (mgr.24422) 747 : cluster [DBG] pgmap v1238: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:33:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:33:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:33:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:33 vm07 bash[23367]: cluster 2026-03-10T10:33:32.620064+0000 mgr.y (mgr.24422) 748 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:33 vm07 bash[23367]: cluster 2026-03-10T10:33:32.620064+0000 mgr.y (mgr.24422) 748 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:33 vm04 bash[28289]: 
cluster 2026-03-10T10:33:32.620064+0000 mgr.y (mgr.24422) 748 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:33 vm04 bash[28289]: cluster 2026-03-10T10:33:32.620064+0000 mgr.y (mgr.24422) 748 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:33 vm04 bash[20742]: cluster 2026-03-10T10:33:32.620064+0000 mgr.y (mgr.24422) 748 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:33 vm04 bash[20742]: cluster 2026-03-10T10:33:32.620064+0000 mgr.y (mgr.24422) 748 : cluster [DBG] pgmap v1239: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:35 vm07 bash[23367]: cluster 2026-03-10T10:33:34.620828+0000 mgr.y (mgr.24422) 749 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:35 vm07 bash[23367]: cluster 2026-03-10T10:33:34.620828+0000 mgr.y (mgr.24422) 749 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:35 vm07 bash[23367]: audit 2026-03-10T10:33:35.558072+0000 mon.a (mon.0) 3657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:33:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:35 vm07 bash[23367]: audit 2026-03-10T10:33:35.558072+0000 mon.a (mon.0) 3657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:33:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:35 vm04 bash[28289]: cluster 2026-03-10T10:33:34.620828+0000 mgr.y (mgr.24422) 749 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:35 vm04 bash[28289]: cluster 2026-03-10T10:33:34.620828+0000 mgr.y (mgr.24422) 749 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:35 vm04 bash[28289]: audit 2026-03-10T10:33:35.558072+0000 mon.a (mon.0) 3657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:33:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:35 vm04 bash[28289]: audit 2026-03-10T10:33:35.558072+0000 mon.a (mon.0) 3657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:33:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:35 vm04 bash[20742]: cluster 2026-03-10T10:33:34.620828+0000 mgr.y (mgr.24422) 749 : cluster [DBG] pgmap v1240: 228 
pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:35 vm04 bash[20742]: cluster 2026-03-10T10:33:34.620828+0000 mgr.y (mgr.24422) 749 : cluster [DBG] pgmap v1240: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:33:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:35 vm04 bash[20742]: audit 2026-03-10T10:33:35.558072+0000 mon.a (mon.0) 3657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:33:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:35 vm04 bash[20742]: audit 2026-03-10T10:33:35.558072+0000 mon.a (mon.0) 3657 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:33:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:36 vm07 bash[23367]: audit 2026-03-10T10:33:35.869792+0000 mon.a (mon.0) 3658 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:33:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:36 vm07 bash[23367]: audit 2026-03-10T10:33:35.869792+0000 mon.a (mon.0) 3658 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:33:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:36 vm07 bash[23367]: audit 2026-03-10T10:33:35.870257+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:33:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:36 vm07 bash[23367]: audit 2026-03-10T10:33:35.870257+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:33:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:36 vm07 bash[23367]: audit 2026-03-10T10:33:35.874861+0000 mon.a (mon.0) 3660 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:33:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:36 vm07 bash[23367]: audit 2026-03-10T10:33:35.874861+0000 mon.a (mon.0) 3660 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:36 vm04 bash[28289]: audit 2026-03-10T10:33:35.869792+0000 mon.a (mon.0) 3658 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:36 vm04 bash[28289]: audit 2026-03-10T10:33:35.869792+0000 mon.a (mon.0) 3658 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:36 vm04 bash[28289]: audit 2026-03-10T10:33:35.870257+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:36 vm04 bash[28289]: audit 
2026-03-10T10:33:35.870257+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:36 vm04 bash[28289]: audit 2026-03-10T10:33:35.874861+0000 mon.a (mon.0) 3660 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:36 vm04 bash[28289]: audit 2026-03-10T10:33:35.874861+0000 mon.a (mon.0) 3660 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:36 vm04 bash[20742]: audit 2026-03-10T10:33:35.869792+0000 mon.a (mon.0) 3658 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:36 vm04 bash[20742]: audit 2026-03-10T10:33:35.869792+0000 mon.a (mon.0) 3658 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:36 vm04 bash[20742]: audit 2026-03-10T10:33:35.870257+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:36 vm04 bash[20742]: audit 2026-03-10T10:33:35.870257+0000 mon.a (mon.0) 3659 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:36 vm04 bash[20742]: audit 2026-03-10T10:33:35.874861+0000 mon.a (mon.0) 3660 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:33:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:36 vm04 bash[20742]: audit 2026-03-10T10:33:35.874861+0000 mon.a (mon.0) 3660 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:33:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:37 vm04 bash[28289]: cluster 2026-03-10T10:33:36.621207+0000 mgr.y (mgr.24422) 750 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:37 vm04 bash[28289]: cluster 2026-03-10T10:33:36.621207+0000 mgr.y (mgr.24422) 750 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:38.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:37 vm04 bash[20742]: cluster 2026-03-10T10:33:36.621207+0000 mgr.y (mgr.24422) 750 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:38.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:37 vm04 bash[20742]: cluster 2026-03-10T10:33:36.621207+0000 mgr.y (mgr.24422) 750 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:37 vm07 bash[23367]: cluster 2026-03-10T10:33:36.621207+0000 mgr.y 
(mgr.24422) 750 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:38.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:37 vm07 bash[23367]: cluster 2026-03-10T10:33:36.621207+0000 mgr.y (mgr.24422) 750 : cluster [DBG] pgmap v1241: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:39.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:33:39 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:33:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:39 vm04 bash[28289]: cluster 2026-03-10T10:33:38.621706+0000 mgr.y (mgr.24422) 751 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:39 vm04 bash[28289]: cluster 2026-03-10T10:33:38.621706+0000 mgr.y (mgr.24422) 751 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:39 vm04 bash[28289]: audit 2026-03-10T10:33:39.080523+0000 mgr.y (mgr.24422) 752 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:39 vm04 bash[28289]: audit 2026-03-10T10:33:39.080523+0000 mgr.y (mgr.24422) 752 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:39 vm04 bash[20742]: cluster 2026-03-10T10:33:38.621706+0000 mgr.y (mgr.24422) 751 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:39 vm04 bash[20742]: cluster 2026-03-10T10:33:38.621706+0000 mgr.y (mgr.24422) 751 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:39 vm04 bash[20742]: audit 2026-03-10T10:33:39.080523+0000 mgr.y (mgr.24422) 752 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:39 vm04 bash[20742]: audit 2026-03-10T10:33:39.080523+0000 mgr.y (mgr.24422) 752 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:33:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:39 vm07 bash[23367]: cluster 2026-03-10T10:33:38.621706+0000 mgr.y (mgr.24422) 751 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:39 vm07 bash[23367]: cluster 2026-03-10T10:33:38.621706+0000 mgr.y (mgr.24422) 751 : cluster [DBG] pgmap v1242: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:33:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:39 vm07 bash[23367]: audit 
2026-03-10T10:33:39.080523+0000 mgr.y (mgr.24422) 752 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:33:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:41 vm04 bash[28289]: cluster 2026-03-10T10:33:40.622203+0000 mgr.y (mgr.24422) 753 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:41 vm04 bash[20742]: cluster 2026-03-10T10:33:40.622203+0000 mgr.y (mgr.24422) 753 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:41 vm07 bash[23367]: cluster 2026-03-10T10:33:40.622203+0000 mgr.y (mgr.24422) 753 : cluster [DBG] pgmap v1243: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:33:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:33:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:33:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:43 vm04 bash[28289]: cluster 2026-03-10T10:33:42.622455+0000 mgr.y (mgr.24422) 754 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:43 vm04 bash[28289]: audit 2026-03-10T10:33:43.303220+0000 mon.a (mon.0) 3661 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:33:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:43 vm04 bash[20742]: cluster 2026-03-10T10:33:42.622455+0000 mgr.y (mgr.24422) 754 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:43 vm04 bash[20742]: audit 2026-03-10T10:33:43.303220+0000 mon.a (mon.0) 3661 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:33:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:43 vm07 bash[23367]: cluster 2026-03-10T10:33:42.622455+0000 mgr.y (mgr.24422) 754 : cluster [DBG] pgmap v1244: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:43 vm07 bash[23367]: audit 2026-03-10T10:33:43.303220+0000 mon.a (mon.0) 3661 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:33:46.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:45 vm04 bash[28289]: cluster 2026-03-10T10:33:44.623095+0000 mgr.y (mgr.24422) 755 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:45 vm04 bash[20742]: cluster 2026-03-10T10:33:44.623095+0000 mgr.y (mgr.24422) 755 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:45 vm07 bash[23367]: cluster 2026-03-10T10:33:44.623095+0000 mgr.y (mgr.24422) 755 : cluster [DBG] pgmap v1245: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:47 vm04 bash[28289]: cluster 2026-03-10T10:33:46.623395+0000 mgr.y (mgr.24422) 756 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:47 vm04 bash[20742]: cluster 2026-03-10T10:33:46.623395+0000 mgr.y (mgr.24422) 756 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:47 vm07 bash[23367]: cluster 2026-03-10T10:33:46.623395+0000 mgr.y (mgr.24422) 756 : cluster [DBG] pgmap v1246: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:49.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:33:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:33:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:49 vm04 bash[28289]: cluster 2026-03-10T10:33:48.623802+0000 mgr.y (mgr.24422) 757 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:50.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:49 vm04 bash[28289]: audit 2026-03-10T10:33:49.088635+0000 mgr.y (mgr.24422) 758 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:33:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:49 vm04 bash[20742]: cluster 2026-03-10T10:33:48.623802+0000 mgr.y (mgr.24422) 757 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:49 vm04 bash[20742]: audit 2026-03-10T10:33:49.088635+0000 mgr.y (mgr.24422) 758 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:33:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:49 vm07 bash[23367]: cluster 2026-03-10T10:33:48.623802+0000 mgr.y (mgr.24422) 757 : cluster [DBG] pgmap v1247: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:49 vm07 bash[23367]: audit 2026-03-10T10:33:49.088635+0000 mgr.y (mgr.24422) 758 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:33:52.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:51 vm04 bash[28289]: cluster 2026-03-10T10:33:50.624281+0000 mgr.y (mgr.24422) 759 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:51 vm04 bash[20742]: cluster 2026-03-10T10:33:50.624281+0000 mgr.y (mgr.24422) 759 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:51 vm07 bash[23367]: cluster 2026-03-10T10:33:50.624281+0000 mgr.y (mgr.24422) 759 : cluster [DBG] pgmap v1248: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:33:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:33:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:33:54.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:53 vm04 bash[28289]: cluster 2026-03-10T10:33:52.624544+0000 mgr.y (mgr.24422) 760 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:53 vm04 bash[20742]: cluster 2026-03-10T10:33:52.624544+0000 mgr.y (mgr.24422) 760 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:53 vm07 bash[23367]: cluster 2026-03-10T10:33:52.624544+0000 mgr.y (mgr.24422) 760 : cluster [DBG] pgmap v1249: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:56.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:55 vm04 bash[28289]: cluster 2026-03-10T10:33:54.625119+0000 mgr.y (mgr.24422) 761 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:55 vm04 bash[20742]: cluster 2026-03-10T10:33:54.625119+0000 mgr.y (mgr.24422) 761 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:55 vm07 bash[23367]: cluster 2026-03-10T10:33:54.625119+0000 mgr.y (mgr.24422) 761 : cluster [DBG] pgmap v1250: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:33:58.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:57 vm04 bash[28289]: cluster 2026-03-10T10:33:56.625423+0000 mgr.y (mgr.24422) 762 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:57 vm04 bash[20742]: cluster 2026-03-10T10:33:56.625423+0000 mgr.y (mgr.24422) 762 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:58.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:57 vm07 bash[23367]: cluster 2026-03-10T10:33:56.625423+0000 mgr.y (mgr.24422) 762 : cluster [DBG] pgmap v1251: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:33:59.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:58 vm04 bash[28289]: audit 2026-03-10T10:33:58.309112+0000 mon.a (mon.0) 3662 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:33:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:58 vm04 bash[20742]: audit 2026-03-10T10:33:58.309112+0000 mon.a (mon.0) 3662 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:33:59.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:58 vm07 bash[23367]: audit 2026-03-10T10:33:58.309112+0000 mon.a (mon.0) 3662 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:33:59.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:33:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:34:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:59 vm04 bash[20742]: cluster 2026-03-10T10:33:58.625798+0000 mgr.y (mgr.24422) 763 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:33:59 vm04 bash[20742]: audit 2026-03-10T10:33:59.098164+0000 mgr.y (mgr.24422) 764 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:59 vm04 bash[28289]: cluster 2026-03-10T10:33:58.625798+0000 mgr.y (mgr.24422) 763 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:33:59 vm04 bash[28289]: audit 2026-03-10T10:33:59.098164+0000 mgr.y (mgr.24422) 764 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:59 vm07 bash[23367]: cluster 2026-03-10T10:33:58.625798+0000 mgr.y (mgr.24422) 763 : cluster [DBG] pgmap v1252: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:33:59 vm07 bash[23367]: audit 2026-03-10T10:33:59.098164+0000 mgr.y (mgr.24422) 764 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:02.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:01 vm04 bash[28289]: cluster 2026-03-10T10:34:00.626326+0000 mgr.y (mgr.24422) 765 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:01 vm04 bash[20742]: cluster 2026-03-10T10:34:00.626326+0000 mgr.y (mgr.24422) 765 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:01 vm07 bash[23367]: cluster 2026-03-10T10:34:00.626326+0000 mgr.y (mgr.24422) 765 : cluster [DBG] pgmap v1253: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:34:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:34:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:34:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:03 vm07 bash[23367]: cluster 2026-03-10T10:34:02.626581+0000 mgr.y (mgr.24422) 766 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:03 vm04 bash[28289]: cluster 2026-03-10T10:34:02.626581+0000 mgr.y (mgr.24422) 766 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:03 vm04 bash[20742]: cluster 2026-03-10T10:34:02.626581+0000 mgr.y (mgr.24422) 766 : cluster [DBG] pgmap v1254: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:06.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:05 vm07 bash[23367]: cluster 2026-03-10T10:34:04.627305+0000 mgr.y (mgr.24422) 767 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:06.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:05 vm04 bash[28289]: cluster 2026-03-10T10:34:04.627305+0000 mgr.y (mgr.24422) 767 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:05 vm04 bash[20742]: cluster 2026-03-10T10:34:04.627305+0000 mgr.y (mgr.24422) 767 : cluster [DBG] pgmap v1255: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:07 vm07 bash[23367]: cluster 2026-03-10T10:34:06.627551+0000 mgr.y (mgr.24422) 768 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:07 vm04 bash[28289]: cluster 2026-03-10T10:34:06.627551+0000 mgr.y (mgr.24422) 768 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:07 vm04 bash[20742]: cluster 2026-03-10T10:34:06.627551+0000 mgr.y (mgr.24422) 768 : cluster [DBG] pgmap v1256: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:09.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:34:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:34:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:09 vm07 bash[23367]: cluster 2026-03-10T10:34:08.628078+0000 mgr.y (mgr.24422) 769 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:09 vm07 bash[23367]: audit 2026-03-10T10:34:09.108863+0000 mgr.y (mgr.24422) 770 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:09 vm04 bash[28289]: cluster 2026-03-10T10:34:08.628078+0000 mgr.y (mgr.24422) 769 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:09 vm04 bash[28289]: audit 2026-03-10T10:34:09.108863+0000 mgr.y (mgr.24422) 770 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:09 vm04 bash[20742]: cluster 2026-03-10T10:34:08.628078+0000 mgr.y (mgr.24422) 769 : cluster [DBG] pgmap v1257: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:09 vm04 bash[20742]: audit 2026-03-10T10:34:09.108863+0000 mgr.y (mgr.24422) 770 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:11 vm07 bash[23367]: cluster 2026-03-10T10:34:10.628600+0000 mgr.y (mgr.24422) 771 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:12.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:11 vm04 bash[28289]: cluster 2026-03-10T10:34:10.628600+0000 mgr.y (mgr.24422) 771 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:11 vm04 bash[20742]: cluster 2026-03-10T10:34:10.628600+0000 mgr.y (mgr.24422) 771 : cluster [DBG] pgmap v1258: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:34:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:34:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:34:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:13 vm07 bash[23367]: cluster 2026-03-10T10:34:12.628943+0000 mgr.y (mgr.24422) 772 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:14.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:13 vm07 bash[23367]: audit 2026-03-10T10:34:13.314622+0000 mon.a (mon.0) 3663 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:34:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:13 vm04 bash[28289]: cluster 2026-03-10T10:34:12.628943+0000 mgr.y (mgr.24422) 772 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:13 vm04 bash[28289]: audit 2026-03-10T10:34:13.314622+0000 mon.a (mon.0) 3663 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:34:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:13 vm04 bash[20742]: cluster 2026-03-10T10:34:12.628943+0000 mgr.y (mgr.24422) 772 : cluster [DBG] pgmap v1259: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:13 vm04 bash[20742]: audit 2026-03-10T10:34:13.314622+0000 mon.a (mon.0) 3663 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:34:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:16 vm04 bash[28289]: cluster 2026-03-10T10:34:14.629542+0000 mgr.y (mgr.24422) 773 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:16 vm04 bash[20742]: cluster 2026-03-10T10:34:14.629542+0000 mgr.y (mgr.24422) 773 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:16 vm07 bash[23367]: cluster 2026-03-10T10:34:14.629542+0000 mgr.y (mgr.24422) 773 : cluster [DBG] pgmap v1260: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:18.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:18 vm04 bash[28289]: cluster 2026-03-10T10:34:16.629807+0000 mgr.y (mgr.24422) 774 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:18 vm04 bash[20742]: cluster 2026-03-10T10:34:16.629807+0000 mgr.y (mgr.24422) 774 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:18 vm07 bash[23367]: cluster 2026-03-10T10:34:16.629807+0000 mgr.y (mgr.24422) 774 : cluster [DBG] pgmap v1261: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:19.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:34:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:34:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:20 vm04 bash[28289]: cluster 2026-03-10T10:34:18.630409+0000 mgr.y (mgr.24422) 775 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:20 vm04 bash[28289]: audit 2026-03-10T10:34:19.117291+0000 mgr.y (mgr.24422) 776 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:20 vm04 bash[20742]: cluster 2026-03-10T10:34:18.630409+0000 mgr.y (mgr.24422) 775 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:20 vm04 bash[20742]: audit 2026-03-10T10:34:19.117291+0000 mgr.y (mgr.24422) 776 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:20 vm07 bash[23367]: cluster 2026-03-10T10:34:18.630409+0000 mgr.y (mgr.24422) 775 : cluster [DBG] pgmap v1262: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:20 vm07 bash[23367]: audit 2026-03-10T10:34:19.117291+0000 mgr.y (mgr.24422) 776 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:22.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:22 vm04 bash[28289]: cluster 2026-03-10T10:34:20.630886+0000 mgr.y (mgr.24422) 777 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:22 vm04 bash[20742]: cluster 2026-03-10T10:34:20.630886+0000 mgr.y (mgr.24422) 777 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:22 vm07 bash[23367]: cluster 2026-03-10T10:34:20.630886+0000 mgr.y (mgr.24422) 777 : cluster [DBG] pgmap v1263: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:34:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:34:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:34:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:34:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:24 vm04 bash[28289]: cluster 2026-03-10T10:34:22.631172+0000 mgr.y (mgr.24422) 778 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:24 vm04 bash[20742]: cluster 2026-03-10T10:34:22.631172+0000 mgr.y (mgr.24422) 778 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:24 vm07 bash[23367]: cluster 2026-03-10T10:34:22.631172+0000 mgr.y (mgr.24422) 778 : cluster [DBG] pgmap v1264: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:26.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:26 vm04 bash[28289]: cluster 2026-03-10T10:34:24.631850+0000 mgr.y (mgr.24422) 779 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:26 vm04 bash[20742]: cluster 2026-03-10T10:34:24.631850+0000 mgr.y (mgr.24422) 779 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:26 vm07 bash[23367]: cluster 2026-03-10T10:34:24.631850+0000 mgr.y (mgr.24422) 779 : cluster [DBG] pgmap v1265: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:28.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:28 vm04 bash[28289]: cluster 2026-03-10T10:34:26.632237+0000 mgr.y (mgr.24422) 780 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:28 vm04 bash[20742]: cluster 2026-03-10T10:34:26.632237+0000 mgr.y (mgr.24422) 780 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:28.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:28 vm07 bash[23367]: cluster 2026-03-10T10:34:26.632237+0000 mgr.y (mgr.24422) 780 : cluster [DBG] pgmap v1266: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:29.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:29 vm04 bash[28289]: audit 2026-03-10T10:34:28.320629+0000 mon.a (mon.0) 3664 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:34:29.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:29 vm04 bash[20742]: audit 2026-03-10T10:34:28.320629+0000 mon.a (mon.0) 3664 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:34:29.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:29 vm07 bash[23367]: audit 2026-03-10T10:34:28.320629+0000 mon.a (mon.0) 3664 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:34:29.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:34:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:34:30.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:30 vm04 bash[28289]: cluster 2026-03-10T10:34:28.632806+0000 mgr.y (mgr.24422) 781 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:30.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:30 vm04 bash[28289]: audit 2026-03-10T10:34:29.117992+0000 mgr.y (mgr.24422) 782 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:30 vm04 bash[20742]: cluster 2026-03-10T10:34:28.632806+0000 mgr.y (mgr.24422) 781 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:30 vm04 bash[20742]: audit 2026-03-10T10:34:29.117992+0000 mgr.y (mgr.24422) 782 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:30 vm07 bash[23367]: cluster 2026-03-10T10:34:28.632806+0000 mgr.y (mgr.24422) 781 : cluster [DBG] pgmap v1267: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:30.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:30 vm07 bash[23367]: audit 2026-03-10T10:34:29.117992+0000 mgr.y (mgr.24422) 782 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:34:32.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:32 vm04 bash[28289]: cluster 2026-03-10T10:34:30.633336+0000 mgr.y (mgr.24422) 783 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:32 vm04 bash[20742]: cluster 2026-03-10T10:34:30.633336+0000 mgr.y (mgr.24422) 783 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:32 vm07 bash[23367]: cluster 2026-03-10T10:34:30.633336+0000 mgr.y (mgr.24422) 783 : cluster [DBG] pgmap v1268: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:34:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:34:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:34:34.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:34 vm04 bash[28289]: cluster 2026-03-10T10:34:32.633608+0000 mgr.y (mgr.24422) 784 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:34 vm04 bash[20742]: cluster 2026-03-10T10:34:32.633608+0000 mgr.y (mgr.24422) 784 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:34 vm07 bash[23367]: cluster 2026-03-10T10:34:32.633608+0000 mgr.y (mgr.24422) 784 : cluster [DBG] pgmap v1269: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:36.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:36 vm04 bash[28289]: cluster 2026-03-10T10:34:34.634232+0000 mgr.y (mgr.24422) 785 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:36.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:36 vm04 bash[28289]: audit 2026-03-10T10:34:35.913393+0000 mon.a (mon.0) 3665 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:34:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:36 vm04 bash[20742]: cluster 2026-03-10T10:34:34.634232+0000 mgr.y (mgr.24422) 785 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:36 vm04 bash[20742]: audit 2026-03-10T10:34:35.913393+0000 mon.a (mon.0) 3665 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:34:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:36 vm07 bash[23367]: cluster 2026-03-10T10:34:34.634232+0000 mgr.y (mgr.24422) 785 : cluster [DBG] pgmap v1270: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:34:36.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:36 vm07 bash[23367]: audit 2026-03-10T10:34:35.913393+0000 mon.a (mon.0) 3665 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:34:37.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:37 vm04 bash[28289]: audit 2026-03-10T10:34:36.216750+0000 mon.a (mon.0) 3666 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:34:37.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:37 vm04 bash[28289]: audit 2026-03-10T10:34:36.217213+0000 mon.a (mon.0) 3667 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:34:37.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:37 vm04 bash[28289]: audit 2026-03-10T10:34:36.221789+0000 mon.a (mon.0) 3668 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:34:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:37 vm04 bash[20742]: audit 2026-03-10T10:34:36.216750+0000 mon.a (mon.0) 3666 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:34:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:37 vm04 bash[20742]: audit 2026-03-10T10:34:36.217213+0000 mon.a (mon.0) 3667 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:34:37.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:37 vm04 bash[20742]: audit 2026-03-10T10:34:36.221789+0000 mon.a (mon.0) 3668 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:34:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:37 vm07 bash[23367]: audit 2026-03-10T10:34:36.216750+0000 mon.a (mon.0) 3666 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:34:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:37 vm07 bash[23367]: audit 2026-03-10T10:34:36.217213+0000 mon.a (mon.0) 3667 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:34:37.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:37 vm07 bash[23367]: audit 2026-03-10T10:34:36.221789+0000 mon.a (mon.0) 3668 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:34:38.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:38 vm07 bash[23367]: cluster 2026-03-10T10:34:36.634529+0000 mgr.y (mgr.24422) 786 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:38.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:38 vm04 bash[20742]: cluster 2026-03-10T10:34:36.634529+0000 mgr.y (mgr.24422) 786 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:34:38.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:38 vm04 bash[20742]: cluster 2026-03-10T10:34:36.634529+0000 mgr.y (mgr.24422) 786 : cluster [DBG] pgmap v1271: 228 pgs: 228
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:38.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:38 vm04 bash[28289]: cluster 2026-03-10T10:34:36.634529+0000 mgr.y (mgr.24422) 786 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:38.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:38 vm04 bash[28289]: cluster 2026-03-10T10:34:36.634529+0000 mgr.y (mgr.24422) 786 : cluster [DBG] pgmap v1271: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:39.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:34:39 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:34:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:40 vm07 bash[23367]: cluster 2026-03-10T10:34:38.635463+0000 mgr.y (mgr.24422) 787 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:40 vm07 bash[23367]: cluster 2026-03-10T10:34:38.635463+0000 mgr.y (mgr.24422) 787 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:40 vm07 bash[23367]: audit 2026-03-10T10:34:39.126070+0000 mgr.y (mgr.24422) 788 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:40 vm07 bash[23367]: audit 2026-03-10T10:34:39.126070+0000 mgr.y (mgr.24422) 788 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:40.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:40 vm04 bash[28289]: cluster 2026-03-10T10:34:38.635463+0000 mgr.y (mgr.24422) 787 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:40 vm04 bash[28289]: cluster 2026-03-10T10:34:38.635463+0000 mgr.y (mgr.24422) 787 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:40 vm04 bash[28289]: audit 2026-03-10T10:34:39.126070+0000 mgr.y (mgr.24422) 788 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:40 vm04 bash[28289]: audit 2026-03-10T10:34:39.126070+0000 mgr.y (mgr.24422) 788 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:40 vm04 bash[20742]: cluster 2026-03-10T10:34:38.635463+0000 mgr.y (mgr.24422) 787 : cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:40 vm04 bash[20742]: cluster 2026-03-10T10:34:38.635463+0000 mgr.y (mgr.24422) 787 : 
cluster [DBG] pgmap v1272: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:40 vm04 bash[20742]: audit 2026-03-10T10:34:39.126070+0000 mgr.y (mgr.24422) 788 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:40 vm04 bash[20742]: audit 2026-03-10T10:34:39.126070+0000 mgr.y (mgr.24422) 788 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:42 vm07 bash[23367]: cluster 2026-03-10T10:34:40.635983+0000 mgr.y (mgr.24422) 789 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:42.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:42 vm07 bash[23367]: cluster 2026-03-10T10:34:40.635983+0000 mgr.y (mgr.24422) 789 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:42.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:42 vm04 bash[20742]: cluster 2026-03-10T10:34:40.635983+0000 mgr.y (mgr.24422) 789 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:42 vm04 bash[20742]: cluster 2026-03-10T10:34:40.635983+0000 mgr.y (mgr.24422) 789 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:42 vm04 bash[28289]: cluster 2026-03-10T10:34:40.635983+0000 mgr.y (mgr.24422) 789 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:42 vm04 bash[28289]: cluster 2026-03-10T10:34:40.635983+0000 mgr.y (mgr.24422) 789 : cluster [DBG] pgmap v1273: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:34:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:34:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:34:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:44 vm07 bash[23367]: cluster 2026-03-10T10:34:42.636285+0000 mgr.y (mgr.24422) 790 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:44 vm07 bash[23367]: cluster 2026-03-10T10:34:42.636285+0000 mgr.y (mgr.24422) 790 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:44.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:44 vm07 bash[23367]: audit 2026-03-10T10:34:43.326781+0000 mon.a (mon.0) 3669 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:44.516 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:44 vm07 bash[23367]: audit 2026-03-10T10:34:43.326781+0000 mon.a (mon.0) 3669 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:44 vm04 bash[20742]: cluster 2026-03-10T10:34:42.636285+0000 mgr.y (mgr.24422) 790 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:44 vm04 bash[20742]: cluster 2026-03-10T10:34:42.636285+0000 mgr.y (mgr.24422) 790 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:44 vm04 bash[20742]: audit 2026-03-10T10:34:43.326781+0000 mon.a (mon.0) 3669 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:44 vm04 bash[20742]: audit 2026-03-10T10:34:43.326781+0000 mon.a (mon.0) 3669 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:44 vm04 bash[28289]: cluster 2026-03-10T10:34:42.636285+0000 mgr.y (mgr.24422) 790 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:44 vm04 bash[28289]: cluster 2026-03-10T10:34:42.636285+0000 mgr.y (mgr.24422) 790 : cluster [DBG] pgmap v1274: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:44 vm04 bash[28289]: audit 2026-03-10T10:34:43.326781+0000 mon.a (mon.0) 3669 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:44 vm04 bash[28289]: audit 2026-03-10T10:34:43.326781+0000 mon.a (mon.0) 3669 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:46 vm07 bash[23367]: cluster 2026-03-10T10:34:44.636958+0000 mgr.y (mgr.24422) 791 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:46.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:46 vm07 bash[23367]: cluster 2026-03-10T10:34:44.636958+0000 mgr.y (mgr.24422) 791 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:46 vm04 bash[20742]: cluster 2026-03-10T10:34:44.636958+0000 mgr.y (mgr.24422) 791 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:46 vm04 bash[20742]: 
cluster 2026-03-10T10:34:44.636958+0000 mgr.y (mgr.24422) 791 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:46 vm04 bash[28289]: cluster 2026-03-10T10:34:44.636958+0000 mgr.y (mgr.24422) 791 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:46 vm04 bash[28289]: cluster 2026-03-10T10:34:44.636958+0000 mgr.y (mgr.24422) 791 : cluster [DBG] pgmap v1275: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:48 vm04 bash[20742]: cluster 2026-03-10T10:34:46.637214+0000 mgr.y (mgr.24422) 792 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:48 vm04 bash[20742]: cluster 2026-03-10T10:34:46.637214+0000 mgr.y (mgr.24422) 792 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:48 vm04 bash[28289]: cluster 2026-03-10T10:34:46.637214+0000 mgr.y (mgr.24422) 792 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:48 vm04 bash[28289]: cluster 2026-03-10T10:34:46.637214+0000 mgr.y (mgr.24422) 792 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:48.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:48 vm07 bash[23367]: cluster 2026-03-10T10:34:46.637214+0000 mgr.y (mgr.24422) 792 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:48.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:48 vm07 bash[23367]: cluster 2026-03-10T10:34:46.637214+0000 mgr.y (mgr.24422) 792 : cluster [DBG] pgmap v1276: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:49.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:34:49 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:34:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:50 vm04 bash[20742]: cluster 2026-03-10T10:34:48.637727+0000 mgr.y (mgr.24422) 793 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:50 vm04 bash[20742]: cluster 2026-03-10T10:34:48.637727+0000 mgr.y (mgr.24422) 793 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:50 vm04 bash[20742]: audit 2026-03-10T10:34:49.136815+0000 mgr.y (mgr.24422) 794 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:50.703 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:50 vm04 bash[20742]: audit 2026-03-10T10:34:49.136815+0000 mgr.y (mgr.24422) 794 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:50 vm04 bash[28289]: cluster 2026-03-10T10:34:48.637727+0000 mgr.y (mgr.24422) 793 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:50 vm04 bash[28289]: cluster 2026-03-10T10:34:48.637727+0000 mgr.y (mgr.24422) 793 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:50 vm04 bash[28289]: audit 2026-03-10T10:34:49.136815+0000 mgr.y (mgr.24422) 794 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:50 vm04 bash[28289]: audit 2026-03-10T10:34:49.136815+0000 mgr.y (mgr.24422) 794 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:50 vm07 bash[23367]: cluster 2026-03-10T10:34:48.637727+0000 mgr.y (mgr.24422) 793 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:50 vm07 bash[23367]: cluster 2026-03-10T10:34:48.637727+0000 mgr.y (mgr.24422) 793 : cluster [DBG] pgmap v1277: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:50 vm07 bash[23367]: audit 2026-03-10T10:34:49.136815+0000 mgr.y (mgr.24422) 794 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:50.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:50 vm07 bash[23367]: audit 2026-03-10T10:34:49.136815+0000 mgr.y (mgr.24422) 794 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:34:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:52 vm04 bash[28289]: cluster 2026-03-10T10:34:50.638241+0000 mgr.y (mgr.24422) 795 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:52 vm04 bash[28289]: cluster 2026-03-10T10:34:50.638241+0000 mgr.y (mgr.24422) 795 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:52 vm04 bash[20742]: cluster 2026-03-10T10:34:50.638241+0000 mgr.y (mgr.24422) 795 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:52 vm04 bash[20742]: cluster 
2026-03-10T10:34:50.638241+0000 mgr.y (mgr.24422) 795 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:52.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:52 vm07 bash[23367]: cluster 2026-03-10T10:34:50.638241+0000 mgr.y (mgr.24422) 795 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:52.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:52 vm07 bash[23367]: cluster 2026-03-10T10:34:50.638241+0000 mgr.y (mgr.24422) 795 : cluster [DBG] pgmap v1278: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:34:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:34:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:34:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:54 vm04 bash[20742]: cluster 2026-03-10T10:34:52.638623+0000 mgr.y (mgr.24422) 796 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:54 vm04 bash[20742]: cluster 2026-03-10T10:34:52.638623+0000 mgr.y (mgr.24422) 796 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:54 vm04 bash[28289]: cluster 2026-03-10T10:34:52.638623+0000 mgr.y (mgr.24422) 796 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:54.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:54 vm04 bash[28289]: cluster 2026-03-10T10:34:52.638623+0000 mgr.y (mgr.24422) 796 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:54 vm07 bash[23367]: cluster 2026-03-10T10:34:52.638623+0000 mgr.y (mgr.24422) 796 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:54.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:54 vm07 bash[23367]: cluster 2026-03-10T10:34:52.638623+0000 mgr.y (mgr.24422) 796 : cluster [DBG] pgmap v1279: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:56.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:56 vm04 bash[20742]: cluster 2026-03-10T10:34:54.639306+0000 mgr.y (mgr.24422) 797 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:56.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:56 vm04 bash[20742]: cluster 2026-03-10T10:34:54.639306+0000 mgr.y (mgr.24422) 797 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:56 vm04 bash[28289]: cluster 2026-03-10T10:34:54.639306+0000 mgr.y (mgr.24422) 797 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T10:34:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:56 vm04 bash[28289]: cluster 2026-03-10T10:34:54.639306+0000 mgr.y (mgr.24422) 797 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:56.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:56 vm07 bash[23367]: cluster 2026-03-10T10:34:54.639306+0000 mgr.y (mgr.24422) 797 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:56.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:56 vm07 bash[23367]: cluster 2026-03-10T10:34:54.639306+0000 mgr.y (mgr.24422) 797 : cluster [DBG] pgmap v1280: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:34:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:58 vm04 bash[20742]: cluster 2026-03-10T10:34:56.639664+0000 mgr.y (mgr.24422) 798 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:58 vm04 bash[20742]: cluster 2026-03-10T10:34:56.639664+0000 mgr.y (mgr.24422) 798 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:58 vm04 bash[28289]: cluster 2026-03-10T10:34:56.639664+0000 mgr.y (mgr.24422) 798 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:58 vm04 bash[28289]: cluster 2026-03-10T10:34:56.639664+0000 mgr.y (mgr.24422) 798 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:58 vm07 bash[23367]: cluster 2026-03-10T10:34:56.639664+0000 mgr.y (mgr.24422) 798 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:58.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:58 vm07 bash[23367]: cluster 2026-03-10T10:34:56.639664+0000 mgr.y (mgr.24422) 798 : cluster [DBG] pgmap v1281: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:34:59.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:34:59 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:34:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:59 vm07 bash[23367]: audit 2026-03-10T10:34:58.333373+0000 mon.a (mon.0) 3670 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:59.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:34:59 vm07 bash[23367]: audit 2026-03-10T10:34:58.333373+0000 mon.a (mon.0) 3670 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:59 vm04 bash[20742]: audit 2026-03-10T10:34:58.333373+0000 mon.a (mon.0) 3670 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:34:59 vm04 bash[20742]: audit 2026-03-10T10:34:58.333373+0000 mon.a (mon.0) 3670 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:59 vm04 bash[28289]: audit 2026-03-10T10:34:58.333373+0000 mon.a (mon.0) 3670 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:34:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:34:59 vm04 bash[28289]: audit 2026-03-10T10:34:58.333373+0000 mon.a (mon.0) 3670 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:35:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:00 vm04 bash[20742]: cluster 2026-03-10T10:34:58.640089+0000 mgr.y (mgr.24422) 799 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:00 vm04 bash[20742]: cluster 2026-03-10T10:34:58.640089+0000 mgr.y (mgr.24422) 799 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:00 vm04 bash[20742]: audit 2026-03-10T10:34:59.142947+0000 mgr.y (mgr.24422) 800 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:00 vm04 bash[20742]: audit 2026-03-10T10:34:59.142947+0000 mgr.y (mgr.24422) 800 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:00 vm04 bash[28289]: cluster 2026-03-10T10:34:58.640089+0000 mgr.y (mgr.24422) 799 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:00 vm04 bash[28289]: cluster 2026-03-10T10:34:58.640089+0000 mgr.y (mgr.24422) 799 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:00 vm04 bash[28289]: audit 2026-03-10T10:34:59.142947+0000 mgr.y (mgr.24422) 800 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:00 vm04 bash[28289]: audit 2026-03-10T10:34:59.142947+0000 mgr.y (mgr.24422) 800 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:00.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:00 vm07 bash[23367]: cluster 2026-03-10T10:34:58.640089+0000 mgr.y (mgr.24422) 799 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:00.767 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:00 vm07 bash[23367]: cluster 2026-03-10T10:34:58.640089+0000 mgr.y (mgr.24422) 799 : cluster [DBG] pgmap v1282: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:00 vm07 bash[23367]: audit 2026-03-10T10:34:59.142947+0000 mgr.y (mgr.24422) 800 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:00 vm07 bash[23367]: audit 2026-03-10T10:34:59.142947+0000 mgr.y (mgr.24422) 800 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:02 vm04 bash[20742]: cluster 2026-03-10T10:35:00.640648+0000 mgr.y (mgr.24422) 801 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:02 vm04 bash[20742]: cluster 2026-03-10T10:35:00.640648+0000 mgr.y (mgr.24422) 801 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:02 vm04 bash[28289]: cluster 2026-03-10T10:35:00.640648+0000 mgr.y (mgr.24422) 801 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:02 vm04 bash[28289]: cluster 2026-03-10T10:35:00.640648+0000 mgr.y (mgr.24422) 801 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:02 vm07 bash[23367]: cluster 2026-03-10T10:35:00.640648+0000 mgr.y (mgr.24422) 801 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:02.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:02 vm07 bash[23367]: cluster 2026-03-10T10:35:00.640648+0000 mgr.y (mgr.24422) 801 : cluster [DBG] pgmap v1283: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:03.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:35:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:35:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:35:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:04 vm04 bash[28289]: cluster 2026-03-10T10:35:02.641029+0000 mgr.y (mgr.24422) 802 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:04 vm04 bash[28289]: cluster 2026-03-10T10:35:02.641029+0000 mgr.y (mgr.24422) 802 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:04 vm04 bash[20742]: cluster 2026-03-10T10:35:02.641029+0000 mgr.y (mgr.24422) 802 : cluster [DBG] pgmap v1284: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:04 vm04 bash[20742]: cluster 2026-03-10T10:35:02.641029+0000 mgr.y (mgr.24422) 802 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:04 vm07 bash[23367]: cluster 2026-03-10T10:35:02.641029+0000 mgr.y (mgr.24422) 802 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:04.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:04 vm07 bash[23367]: cluster 2026-03-10T10:35:02.641029+0000 mgr.y (mgr.24422) 802 : cluster [DBG] pgmap v1284: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:06.016 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:35:05 vm07 bash[50688]: logger=cleanup t=2026-03-10T10:35:05.587506343Z level=info msg="Completed cleanup jobs" duration=1.338644ms 2026-03-10T10:35:06.016 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:35:05 vm07 bash[50688]: logger=plugins.update.checker t=2026-03-10T10:35:05.727946931Z level=info msg="Update check succeeded" duration=48.118426ms 2026-03-10T10:35:06.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:06 vm04 bash[28289]: cluster 2026-03-10T10:35:04.641717+0000 mgr.y (mgr.24422) 803 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:06.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:06 vm04 bash[28289]: cluster 2026-03-10T10:35:04.641717+0000 mgr.y (mgr.24422) 803 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:06 vm04 bash[20742]: cluster 2026-03-10T10:35:04.641717+0000 mgr.y (mgr.24422) 803 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:06 vm04 bash[20742]: cluster 2026-03-10T10:35:04.641717+0000 mgr.y (mgr.24422) 803 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:06 vm07 bash[23367]: cluster 2026-03-10T10:35:04.641717+0000 mgr.y (mgr.24422) 803 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:06.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:06 vm07 bash[23367]: cluster 2026-03-10T10:35:04.641717+0000 mgr.y (mgr.24422) 803 : cluster [DBG] pgmap v1285: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:08.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:08 vm04 bash[20742]: cluster 2026-03-10T10:35:06.642121+0000 mgr.y (mgr.24422) 804 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:08.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:08 vm04 bash[20742]: cluster 2026-03-10T10:35:06.642121+0000 mgr.y 
(mgr.24422) 804 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:08.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:08 vm04 bash[28289]: cluster 2026-03-10T10:35:06.642121+0000 mgr.y (mgr.24422) 804 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:08.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:08 vm04 bash[28289]: cluster 2026-03-10T10:35:06.642121+0000 mgr.y (mgr.24422) 804 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:08.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:08 vm07 bash[23367]: cluster 2026-03-10T10:35:06.642121+0000 mgr.y (mgr.24422) 804 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:08.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:08 vm07 bash[23367]: cluster 2026-03-10T10:35:06.642121+0000 mgr.y (mgr.24422) 804 : cluster [DBG] pgmap v1286: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:09.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:35:09 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:35:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:10 vm04 bash[28289]: cluster 2026-03-10T10:35:08.642638+0000 mgr.y (mgr.24422) 805 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:10 vm04 bash[28289]: cluster 2026-03-10T10:35:08.642638+0000 mgr.y (mgr.24422) 805 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:10 vm04 bash[28289]: audit 2026-03-10T10:35:09.153619+0000 mgr.y (mgr.24422) 806 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:10 vm04 bash[28289]: audit 2026-03-10T10:35:09.153619+0000 mgr.y (mgr.24422) 806 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:10 vm04 bash[20742]: cluster 2026-03-10T10:35:08.642638+0000 mgr.y (mgr.24422) 805 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:10 vm04 bash[20742]: cluster 2026-03-10T10:35:08.642638+0000 mgr.y (mgr.24422) 805 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:10 vm04 bash[20742]: audit 2026-03-10T10:35:09.153619+0000 mgr.y (mgr.24422) 806 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:10 vm04 bash[20742]: audit 
2026-03-10T10:35:09.153619+0000 mgr.y (mgr.24422) 806 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:10 vm07 bash[23367]: cluster 2026-03-10T10:35:08.642638+0000 mgr.y (mgr.24422) 805 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:10 vm07 bash[23367]: cluster 2026-03-10T10:35:08.642638+0000 mgr.y (mgr.24422) 805 : cluster [DBG] pgmap v1287: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:10 vm07 bash[23367]: audit 2026-03-10T10:35:09.153619+0000 mgr.y (mgr.24422) 806 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:10 vm07 bash[23367]: audit 2026-03-10T10:35:09.153619+0000 mgr.y (mgr.24422) 806 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:35:12.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:12 vm04 bash[20742]: cluster 2026-03-10T10:35:10.643139+0000 mgr.y (mgr.24422) 807 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:12.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:12 vm04 bash[20742]: cluster 2026-03-10T10:35:10.643139+0000 mgr.y (mgr.24422) 807 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:12 vm04 bash[28289]: cluster 2026-03-10T10:35:10.643139+0000 mgr.y (mgr.24422) 807 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:12 vm04 bash[28289]: cluster 2026-03-10T10:35:10.643139+0000 mgr.y (mgr.24422) 807 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:12.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:12 vm07 bash[23367]: cluster 2026-03-10T10:35:10.643139+0000 mgr.y (mgr.24422) 807 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:12.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:12 vm07 bash[23367]: cluster 2026-03-10T10:35:10.643139+0000 mgr.y (mgr.24422) 807 : cluster [DBG] pgmap v1288: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:13.389 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:35:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:35:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:35:13.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:13 vm04 bash[20742]: audit 2026-03-10T10:35:13.339675+0000 mon.a (mon.0) 3671 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: 
dispatch 2026-03-10T10:35:13.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:13 vm04 bash[20742]: audit 2026-03-10T10:35:13.339675+0000 mon.a (mon.0) 3671 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:35:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:13 vm04 bash[28289]: audit 2026-03-10T10:35:13.339675+0000 mon.a (mon.0) 3671 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:35:13.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:13 vm04 bash[28289]: audit 2026-03-10T10:35:13.339675+0000 mon.a (mon.0) 3671 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:35:13.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:13 vm07 bash[23367]: audit 2026-03-10T10:35:13.339675+0000 mon.a (mon.0) 3671 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:35:13.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:13 vm07 bash[23367]: audit 2026-03-10T10:35:13.339675+0000 mon.a (mon.0) 3671 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:35:14.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:14 vm04 bash[20742]: cluster 2026-03-10T10:35:12.643442+0000 mgr.y (mgr.24422) 808 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:14 vm04 bash[20742]: cluster 2026-03-10T10:35:12.643442+0000 mgr.y (mgr.24422) 808 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:14 vm04 bash[28289]: cluster 2026-03-10T10:35:12.643442+0000 mgr.y (mgr.24422) 808 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:14 vm04 bash[28289]: cluster 2026-03-10T10:35:12.643442+0000 mgr.y (mgr.24422) 808 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:14.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:14 vm07 bash[23367]: cluster 2026-03-10T10:35:12.643442+0000 mgr.y (mgr.24422) 808 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:14.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:14 vm07 bash[23367]: cluster 2026-03-10T10:35:12.643442+0000 mgr.y (mgr.24422) 808 : cluster [DBG] pgmap v1289: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:35:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:16 vm04 bash[20742]: cluster 2026-03-10T10:35:14.644124+0000 mgr.y (mgr.24422) 809 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:35:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
10:35:16 vm04 bash[20742]: cluster 2026-03-10T10:35:14.644124+0000 mgr.y (mgr.24422) 809 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:16 vm04 bash[28289]: cluster 2026-03-10T10:35:14.644124+0000 mgr.y (mgr.24422) 809 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:16.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:16 vm07 bash[23367]: cluster 2026-03-10T10:35:14.644124+0000 mgr.y (mgr.24422) 809 : cluster [DBG] pgmap v1290: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:18 vm04 bash[20742]: cluster 2026-03-10T10:35:16.644456+0000 mgr.y (mgr.24422) 810 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:18 vm04 bash[28289]: cluster 2026-03-10T10:35:16.644456+0000 mgr.y (mgr.24422) 810 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:18.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:18 vm07 bash[23367]: cluster 2026-03-10T10:35:16.644456+0000 mgr.y (mgr.24422) 810 : cluster [DBG] pgmap v1291: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:19.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:35:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:35:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:20 vm04 bash[20742]: cluster 2026-03-10T10:35:18.644978+0000 mgr.y (mgr.24422) 811 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:20 vm04 bash[20742]: audit 2026-03-10T10:35:19.160713+0000 mgr.y (mgr.24422) 812 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:20 vm04 bash[28289]: cluster 2026-03-10T10:35:18.644978+0000 mgr.y (mgr.24422) 811 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:20 vm04 bash[28289]: audit 2026-03-10T10:35:19.160713+0000 mgr.y (mgr.24422) 812 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:20.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:20 vm07 bash[23367]: cluster 2026-03-10T10:35:18.644978+0000 mgr.y (mgr.24422) 811 : cluster [DBG] pgmap v1292: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:20.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:20 vm07 bash[23367]: audit 2026-03-10T10:35:19.160713+0000 mgr.y (mgr.24422) 812 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:22.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:22 vm04 bash[20742]: cluster 2026-03-10T10:35:20.645461+0000 mgr.y (mgr.24422) 813 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:22 vm04 bash[28289]: cluster 2026-03-10T10:35:20.645461+0000 mgr.y (mgr.24422) 813 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:22.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:22 vm07 bash[23367]: cluster 2026-03-10T10:35:20.645461+0000 mgr.y (mgr.24422) 813 : cluster [DBG] pgmap v1293: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:23.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:35:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:35:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:35:24.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:24 vm04 bash[28289]: cluster 2026-03-10T10:35:22.645742+0000 mgr.y (mgr.24422) 814 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:24 vm04 bash[20742]: cluster 2026-03-10T10:35:22.645742+0000 mgr.y (mgr.24422) 814 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:24.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:24 vm07 bash[23367]: cluster 2026-03-10T10:35:22.645742+0000 mgr.y (mgr.24422) 814 : cluster [DBG] pgmap v1294: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:25.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:25 vm07 bash[23367]: cluster 2026-03-10T10:35:24.646375+0000 mgr.y (mgr.24422) 815 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:25.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:25 vm04 bash[28289]: cluster 2026-03-10T10:35:24.646375+0000 mgr.y (mgr.24422) 815 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:25 vm04 bash[20742]: cluster 2026-03-10T10:35:24.646375+0000 mgr.y (mgr.24422) 815 : cluster [DBG] pgmap v1295: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:27.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:27 vm04 bash[28289]: cluster 2026-03-10T10:35:26.646676+0000 mgr.y (mgr.24422) 816 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:27.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:27 vm04 bash[20742]: cluster 2026-03-10T10:35:26.646676+0000 mgr.y (mgr.24422) 816 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:28.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:27 vm07 bash[23367]: cluster 2026-03-10T10:35:26.646676+0000 mgr.y (mgr.24422) 816 : cluster [DBG] pgmap v1296: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:29.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:28 vm07 bash[23367]: audit 2026-03-10T10:35:28.345373+0000 mon.a (mon.0) 3672 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:35:29.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:28 vm04 bash[28289]: audit 2026-03-10T10:35:28.345373+0000 mon.a (mon.0) 3672 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:35:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:28 vm04 bash[20742]: audit 2026-03-10T10:35:28.345373+0000 mon.a (mon.0) 3672 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:35:29.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:35:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:35:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:29 vm07 bash[23367]: cluster 2026-03-10T10:35:28.647265+0000 mgr.y (mgr.24422) 817 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:29 vm07 bash[23367]: audit 2026-03-10T10:35:29.165115+0000 mgr.y (mgr.24422) 818 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:30.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:29 vm04 bash[28289]: cluster 2026-03-10T10:35:28.647265+0000 mgr.y (mgr.24422) 817 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:30.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:29 vm04 bash[28289]: audit 2026-03-10T10:35:29.165115+0000 mgr.y (mgr.24422) 818 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:29 vm04 bash[20742]: cluster 2026-03-10T10:35:28.647265+0000 mgr.y (mgr.24422) 817 : cluster [DBG] pgmap v1297: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:29 vm04 bash[20742]: audit 2026-03-10T10:35:29.165115+0000 mgr.y (mgr.24422) 818 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:32.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:31 vm07 bash[23367]: cluster 2026-03-10T10:35:30.647756+0000 mgr.y (mgr.24422) 819 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:32.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:31 vm04 bash[28289]: cluster 2026-03-10T10:35:30.647756+0000 mgr.y (mgr.24422) 819 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:31 vm04 bash[20742]: cluster 2026-03-10T10:35:30.647756+0000 mgr.y (mgr.24422) 819 : cluster [DBG] pgmap v1298: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:33.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:35:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:35:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:35:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:33 vm07 bash[23367]: cluster 2026-03-10T10:35:32.648079+0000 mgr.y (mgr.24422) 820 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:34.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:33 vm04 bash[28289]: cluster 2026-03-10T10:35:32.648079+0000 mgr.y (mgr.24422) 820 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:34.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:33 vm04 bash[20742]: cluster 2026-03-10T10:35:32.648079+0000 mgr.y (mgr.24422) 820 : cluster [DBG] pgmap v1299: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:35 vm07 bash[23367]: cluster 2026-03-10T10:35:34.648580+0000 mgr.y (mgr.24422) 821 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:36.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:35 vm04 bash[28289]: cluster 2026-03-10T10:35:34.648580+0000 mgr.y (mgr.24422) 821 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:36.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:35 vm04 bash[20742]: cluster 2026-03-10T10:35:34.648580+0000 mgr.y (mgr.24422) 821 : cluster [DBG] pgmap v1300: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:36 vm07 bash[23367]: audit 2026-03-10T10:35:36.260409+0000 mon.a (mon.0) 3673 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:35:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:36 vm07 bash[23367]: audit 2026-03-10T10:35:36.565819+0000 mon.a (mon.0) 3674 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:35:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:36 vm07 bash[23367]: audit 2026-03-10T10:35:36.568369+0000 mon.a (mon.0) 3675 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:35:37.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:36 vm07 bash[23367]: audit 2026-03-10T10:35:36.568936+0000 mon.a (mon.0) 3676 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:35:37.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:36 vm07 bash[23367]: audit 2026-03-10T10:35:36.569318+0000 mon.a (mon.0) 3677 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:35:37.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:36 vm07 bash[23367]: audit 2026-03-10T10:35:36.573823+0000 mon.a (mon.0) 3678 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:35:37.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:36 vm04 bash[28289]: audit 2026-03-10T10:35:36.260409+0000 mon.a (mon.0) 3673 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:35:37.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:36 vm04 bash[28289]: audit 2026-03-10T10:35:36.565819+0000 mon.a (mon.0) 3674 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:35:37.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:36 vm04 bash[28289]: audit 2026-03-10T10:35:36.568369+0000 mon.a (mon.0) 3675 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:35:37.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:36 vm04 bash[28289]: audit 2026-03-10T10:35:36.568936+0000 mon.a (mon.0) 3676 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:35:37.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:36 vm04 bash[28289]: audit 2026-03-10T10:35:36.569318+0000 mon.a (mon.0) 3677 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:35:37.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:36 vm04 bash[28289]: audit 2026-03-10T10:35:36.573823+0000 mon.a (mon.0) 3678 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:35:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:36 vm04 bash[20742]: audit 2026-03-10T10:35:36.260409+0000 mon.a (mon.0) 3673 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:35:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:36 vm04 bash[20742]: audit 2026-03-10T10:35:36.565819+0000 mon.a (mon.0) 3674 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:35:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:36 vm04 bash[20742]: audit 2026-03-10T10:35:36.568369+0000 mon.a (mon.0) 3675 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T10:35:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:36 vm04 bash[20742]: audit 2026-03-10T10:35:36.568936+0000 mon.a (mon.0) 3676 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:35:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:36 vm04 bash[20742]: audit 2026-03-10T10:35:36.569318+0000 mon.a (mon.0) 3677 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:35:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:36 vm04 bash[20742]: audit 2026-03-10T10:35:36.573823+0000 mon.a (mon.0) 3678 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:35:38.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:37 vm07 bash[23367]: cluster 2026-03-10T10:35:36.648914+0000 mgr.y (mgr.24422) 822 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:38.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:37 vm04 bash[28289]: cluster 2026-03-10T10:35:36.648914+0000 mgr.y (mgr.24422) 822 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:38.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:37 vm04 bash[20742]: cluster 2026-03-10T10:35:36.648914+0000 mgr.y (mgr.24422) 822 : cluster [DBG] pgmap v1301: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:39.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:35:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:35:40.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:39 vm07 bash[23367]: cluster 2026-03-10T10:35:38.649290+0000 mgr.y (mgr.24422) 823 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:40.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:39 vm07 bash[23367]: audit 2026-03-10T10:35:39.175778+0000 mgr.y (mgr.24422) 824 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:40.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:39 vm04 bash[28289]: cluster 2026-03-10T10:35:38.649290+0000 mgr.y (mgr.24422) 823 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:40.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:39 vm04 bash[28289]: audit 2026-03-10T10:35:39.175778+0000 mgr.y (mgr.24422) 824 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:39 vm04 bash[20742]: cluster 2026-03-10T10:35:38.649290+0000 mgr.y (mgr.24422) 823 : cluster [DBG] pgmap v1302: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:39 vm04 bash[20742]: audit 2026-03-10T10:35:39.175778+0000 mgr.y (mgr.24422) 824 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:42.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:41 vm04 bash[28289]: cluster 2026-03-10T10:35:40.649788+0000 mgr.y (mgr.24422) 825 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:41 vm04 bash[20742]: cluster 2026-03-10T10:35:40.649788+0000 mgr.y (mgr.24422) 825 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:42.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:41 vm07 bash[23367]: cluster 2026-03-10T10:35:40.649788+0000 mgr.y (mgr.24422) 825 : cluster [DBG] pgmap v1303: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:35:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:35:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:35:44.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:43 vm04 bash[28289]: cluster 2026-03-10T10:35:42.650098+0000 mgr.y (mgr.24422) 826 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:44.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:43 vm04 bash[28289]: audit 2026-03-10T10:35:43.351283+0000 mon.a (mon.0) 3679 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:35:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:43 vm04 bash[20742]: cluster 2026-03-10T10:35:42.650098+0000 mgr.y (mgr.24422) 826 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:43 vm04 bash[20742]: audit 2026-03-10T10:35:43.351283+0000 mon.a (mon.0) 3679 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:35:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:43 vm07 bash[23367]: cluster 2026-03-10T10:35:42.650098+0000 mgr.y (mgr.24422) 826 : cluster [DBG] pgmap v1304: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:44.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:43 vm07 bash[23367]: audit 2026-03-10T10:35:43.351283+0000 mon.a (mon.0) 3679 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:35:46.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:45 vm04 bash[28289]: cluster 2026-03-10T10:35:44.650783+0000 mgr.y (mgr.24422) 827 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:45 vm04 bash[20742]: cluster 2026-03-10T10:35:44.650783+0000 mgr.y (mgr.24422) 827 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:46.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:45 vm07 bash[23367]: cluster 2026-03-10T10:35:44.650783+0000 mgr.y (mgr.24422) 827 : cluster [DBG] pgmap v1305: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:47 vm04 bash[28289]: cluster 2026-03-10T10:35:46.651102+0000 mgr.y (mgr.24422) 828 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:47 vm04 bash[20742]: cluster 2026-03-10T10:35:46.651102+0000 mgr.y (mgr.24422) 828 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:48.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:47 vm07 bash[23367]: cluster 2026-03-10T10:35:46.651102+0000 mgr.y (mgr.24422) 828 : cluster [DBG] pgmap v1306: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:49.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:35:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:35:50.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:49 vm04 bash[28289]: cluster 2026-03-10T10:35:48.651759+0000 mgr.y (mgr.24422) 829 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:50.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:49 vm04 bash[28289]: audit 2026-03-10T10:35:49.176744+0000 mgr.y (mgr.24422) 830 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:49 vm04 bash[20742]: cluster 2026-03-10T10:35:48.651759+0000 mgr.y (mgr.24422) 829 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:49 vm04 bash[20742]: audit 2026-03-10T10:35:49.176744+0000 mgr.y (mgr.24422) 830 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:49 vm07 bash[23367]: cluster 2026-03-10T10:35:48.651759+0000 mgr.y (mgr.24422) 829 : cluster [DBG] pgmap v1307: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:50.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:49 vm07 bash[23367]: audit 2026-03-10T10:35:49.176744+0000 mgr.y (mgr.24422) 830 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:52.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:51 vm04 bash[28289]: cluster 2026-03-10T10:35:50.652273+0000 mgr.y (mgr.24422) 831 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:51 vm04 bash[20742]: cluster 2026-03-10T10:35:50.652273+0000 mgr.y (mgr.24422) 831 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:52.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:51 vm07 bash[23367]: cluster 2026-03-10T10:35:50.652273+0000 mgr.y (mgr.24422) 831 : cluster [DBG] pgmap v1308: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:35:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:35:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:35:54.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:53 vm04 bash[28289]: cluster 2026-03-10T10:35:52.652556+0000 mgr.y (mgr.24422) 832 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:53 vm04 bash[20742]: cluster 2026-03-10T10:35:52.652556+0000 mgr.y (mgr.24422) 832 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:54.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:53 vm07 bash[23367]: cluster 2026-03-10T10:35:52.652556+0000 mgr.y (mgr.24422) 832 : cluster [DBG] pgmap v1309: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:56.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:55 vm04 bash[28289]: cluster 2026-03-10T10:35:54.653174+0000 mgr.y (mgr.24422) 833 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:55 vm04 bash[20742]: cluster 2026-03-10T10:35:54.653174+0000 mgr.y (mgr.24422) 833 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:56.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:55 vm07 bash[23367]: cluster 2026-03-10T10:35:54.653174+0000 mgr.y (mgr.24422) 833 : cluster [DBG] pgmap v1310: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:35:58.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:57 vm04 bash[28289]: cluster 2026-03-10T10:35:56.653513+0000 mgr.y (mgr.24422) 834 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:57 vm04 bash[20742]: cluster 2026-03-10T10:35:56.653513+0000 mgr.y (mgr.24422) 834 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:58.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:57 vm07 bash[23367]: cluster 2026-03-10T10:35:56.653513+0000 mgr.y (mgr.24422) 834 : cluster [DBG] pgmap v1311: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:35:59.183 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:58 vm07 bash[23367]: audit 2026-03-10T10:35:58.356838+0000 mon.a (mon.0) 3680 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:35:59.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:58 vm04 bash[28289]: audit 2026-03-10T10:35:58.356838+0000 mon.a (mon.0) 3680 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:35:59.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:58 vm04 bash[20742]: audit 2026-03-10T10:35:58.356838+0000 mon.a (mon.0) 3680 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:35:59.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:35:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:36:00.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:59 vm04 bash[28289]: cluster 2026-03-10T10:35:58.653927+0000 mgr.y (mgr.24422) 835 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:35:59 vm04 bash[28289]: audit 2026-03-10T10:35:59.184026+0000 mgr.y (mgr.24422) 836 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:59 vm04 bash[20742]: cluster 2026-03-10T10:35:58.653927+0000 mgr.y (mgr.24422) 835 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:35:59 vm04 bash[20742]: audit 2026-03-10T10:35:59.184026+0000 mgr.y (mgr.24422) 836 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:35:59.184026+0000 mgr.y (mgr.24422) 836 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:36:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:59 vm07 bash[23367]: cluster 2026-03-10T10:35:58.653927+0000 mgr.y (mgr.24422) 835 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:59 vm07 bash[23367]: cluster 2026-03-10T10:35:58.653927+0000 mgr.y (mgr.24422) 835 : cluster [DBG] pgmap v1312: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:59 vm07 bash[23367]: audit 2026-03-10T10:35:59.184026+0000 mgr.y (mgr.24422) 836 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:36:00.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:35:59 vm07 bash[23367]: audit 2026-03-10T10:35:59.184026+0000 mgr.y (mgr.24422) 836 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:36:02.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:01 vm04 bash[28289]: cluster 2026-03-10T10:36:00.654439+0000 mgr.y (mgr.24422) 837 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:02.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:01 vm04 bash[28289]: cluster 2026-03-10T10:36:00.654439+0000 mgr.y (mgr.24422) 837 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:01 vm04 bash[20742]: cluster 2026-03-10T10:36:00.654439+0000 mgr.y (mgr.24422) 837 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:01 vm04 bash[20742]: cluster 2026-03-10T10:36:00.654439+0000 mgr.y (mgr.24422) 837 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:01 vm07 bash[23367]: cluster 2026-03-10T10:36:00.654439+0000 mgr.y (mgr.24422) 837 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:01 vm07 bash[23367]: cluster 2026-03-10T10:36:00.654439+0000 mgr.y (mgr.24422) 837 : cluster [DBG] pgmap v1313: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:36:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:36:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:36:04.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:03 vm04 bash[28289]: cluster 2026-03-10T10:36:02.654773+0000 mgr.y (mgr.24422) 838 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:36:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:03 vm04 bash[20742]: cluster 2026-03-10T10:36:02.654773+0000 mgr.y (mgr.24422) 838 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:03 vm07 bash[23367]: cluster 2026-03-10T10:36:02.654773+0000 mgr.y (mgr.24422) 838 : cluster [DBG] pgmap v1314: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:06.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:05 vm07 bash[23367]: cluster 2026-03-10T10:36:04.655340+0000 mgr.y (mgr.24422) 839 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:05 vm04 bash[28289]: cluster 2026-03-10T10:36:04.655340+0000 mgr.y (mgr.24422) 839 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:05 vm04 bash[20742]: cluster 2026-03-10T10:36:04.655340+0000 mgr.y (mgr.24422) 839 : cluster [DBG] pgmap v1315: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:08.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:07 vm07 bash[23367]: cluster 2026-03-10T10:36:06.655634+0000 mgr.y (mgr.24422) 840 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:08.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:07 vm04 bash[28289]: cluster 2026-03-10T10:36:06.655634+0000 mgr.y (mgr.24422) 840 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:08.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:07 vm04 bash[20742]: cluster 2026-03-10T10:36:06.655634+0000 mgr.y (mgr.24422) 840 : cluster [DBG] pgmap v1316: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:09.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:36:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:36:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:09 vm07 bash[23367]: cluster 2026-03-10T10:36:08.656132+0000 mgr.y (mgr.24422) 841 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:10.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:09 vm07 bash[23367]: audit 2026-03-10T10:36:09.193568+0000 mgr.y (mgr.24422) 842 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:09 vm04 bash[28289]: cluster 2026-03-10T10:36:08.656132+0000 mgr.y (mgr.24422) 841 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:09 vm04 bash[28289]: audit 2026-03-10T10:36:09.193568+0000 mgr.y (mgr.24422) 842 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:10.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:09 vm04 bash[20742]: cluster 2026-03-10T10:36:08.656132+0000 mgr.y (mgr.24422) 841 : cluster [DBG] pgmap v1317: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:10.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:09 vm04 bash[20742]: audit 2026-03-10T10:36:09.193568+0000 mgr.y (mgr.24422) 842 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:12.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:11 vm07 bash[23367]: cluster 2026-03-10T10:36:10.656688+0000 mgr.y (mgr.24422) 843 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:12.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:11 vm04 bash[28289]: cluster 2026-03-10T10:36:10.656688+0000 mgr.y (mgr.24422) 843 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:12.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:11 vm04 bash[20742]: cluster 2026-03-10T10:36:10.656688+0000 mgr.y (mgr.24422) 843 : cluster [DBG] pgmap v1318: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:36:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:36:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:36:13.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:13 vm04 bash[28289]: cluster 2026-03-10T10:36:12.656945+0000 mgr.y (mgr.24422) 844 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:13.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:13 vm04 bash[28289]: audit 2026-03-10T10:36:13.362863+0000 mon.a (mon.0) 3681 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:13 vm04 bash[20742]: cluster 2026-03-10T10:36:12.656945+0000 mgr.y (mgr.24422) 844 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:13 vm04 bash[20742]: audit 2026-03-10T10:36:13.362863+0000 mon.a (mon.0) 3681 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:13 vm07 bash[23367]: cluster 2026-03-10T10:36:12.656945+0000 mgr.y (mgr.24422) 844 : cluster [DBG] pgmap v1319: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:14.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:13 vm07 bash[23367]: audit 2026-03-10T10:36:13.362863+0000 mon.a (mon.0) 3681 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:15.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:15 vm04 bash[28289]: cluster 2026-03-10T10:36:14.657519+0000 mgr.y (mgr.24422) 845 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:15.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:15 vm04 bash[20742]: cluster 2026-03-10T10:36:14.657519+0000 mgr.y (mgr.24422) 845 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:16.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:15 vm07 bash[23367]: cluster 2026-03-10T10:36:14.657519+0000 mgr.y (mgr.24422) 845 : cluster [DBG] pgmap v1320: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:17.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:17 vm04 bash[28289]: cluster 2026-03-10T10:36:16.657807+0000 mgr.y (mgr.24422) 846 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:17.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:17 vm04 bash[20742]: cluster 2026-03-10T10:36:16.657807+0000 mgr.y (mgr.24422) 846 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:18.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:17 vm07 bash[23367]: cluster 2026-03-10T10:36:16.657807+0000 mgr.y (mgr.24422) 846 : cluster [DBG] pgmap v1321: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:19.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:36:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:36:19.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:19 vm04 bash[28289]: cluster 2026-03-10T10:36:18.658404+0000 mgr.y (mgr.24422) 847 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:19.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:19 vm04 bash[28289]: audit 2026-03-10T10:36:19.202825+0000 mgr.y (mgr.24422) 848 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:19 vm04 bash[20742]: cluster 2026-03-10T10:36:18.658404+0000 mgr.y (mgr.24422) 847 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:19.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:19 vm04 bash[20742]: audit 2026-03-10T10:36:19.202825+0000 mgr.y (mgr.24422) 848 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:19 vm07 bash[23367]: cluster 2026-03-10T10:36:18.658404+0000 mgr.y (mgr.24422) 847 : cluster [DBG] pgmap v1322: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:20.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:19 vm07 bash[23367]: audit 2026-03-10T10:36:19.202825+0000 mgr.y (mgr.24422) 848 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:21.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:21 vm04 bash[28289]: cluster 2026-03-10T10:36:20.659043+0000 mgr.y (mgr.24422) 849 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:21.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:21 vm04 bash[20742]: cluster 2026-03-10T10:36:20.659043+0000 mgr.y (mgr.24422) 849 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:22.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:21 vm07 bash[23367]: cluster 2026-03-10T10:36:20.659043+0000 mgr.y (mgr.24422) 849 : cluster [DBG] pgmap v1323: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:36:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:36:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:36:23.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:23 vm04 bash[28289]: cluster 2026-03-10T10:36:22.659389+0000 mgr.y (mgr.24422) 850 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:23.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:23 vm04 bash[20742]: cluster 2026-03-10T10:36:22.659389+0000 mgr.y (mgr.24422) 850 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:24.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:23 vm07 bash[23367]: cluster 2026-03-10T10:36:22.659389+0000 mgr.y (mgr.24422) 850 : cluster [DBG] pgmap v1324: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:25.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:25 vm04 bash[28289]: cluster 2026-03-10T10:36:24.660053+0000 mgr.y (mgr.24422) 851 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:25.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:25 vm04 bash[20742]: cluster 2026-03-10T10:36:24.660053+0000 mgr.y (mgr.24422) 851 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:26.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:25 vm07 bash[23367]: cluster 2026-03-10T10:36:24.660053+0000 mgr.y (mgr.24422) 851 : cluster [DBG] pgmap v1325: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:27 vm07 bash[23367]: cluster 2026-03-10T10:36:26.660406+0000 mgr.y (mgr.24422) 852 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:28.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:27 vm04 bash[28289]: cluster 2026-03-10T10:36:26.660406+0000 mgr.y (mgr.24422) 852 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:27 vm04 bash[20742]: cluster 2026-03-10T10:36:26.660406+0000 mgr.y (mgr.24422) 852 : cluster [DBG] pgmap v1326: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:28 vm07 bash[23367]: audit 2026-03-10T10:36:28.369221+0000 mon.a (mon.0) 3682 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:29.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:28 vm04 bash[28289]: audit 2026-03-10T10:36:28.369221+0000 mon.a (mon.0) 3682 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:28 vm04 bash[20742]: audit 2026-03-10T10:36:28.369221+0000 mon.a (mon.0) 3682 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:29.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:36:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:36:30.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:29 vm07 bash[23367]: cluster 2026-03-10T10:36:28.661098+0000 mgr.y (mgr.24422) 853 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:30.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:29 vm07 bash[23367]: audit 2026-03-10T10:36:29.213605+0000 mgr.y (mgr.24422) 854 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:29 vm04 bash[28289]: cluster 2026-03-10T10:36:28.661098+0000 mgr.y (mgr.24422) 853 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:29 vm04 bash[28289]: audit 2026-03-10T10:36:29.213605+0000 mgr.y (mgr.24422) 854 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:29 vm04 bash[20742]: cluster 2026-03-10T10:36:28.661098+0000 mgr.y (mgr.24422) 853 : cluster [DBG] pgmap v1327: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:29 vm04 bash[20742]: audit 2026-03-10T10:36:29.213605+0000 mgr.y (mgr.24422) 854 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:31 vm07 bash[23367]: cluster 2026-03-10T10:36:30.661673+0000 mgr.y (mgr.24422) 855 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:32.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:31 vm04 bash[28289]: cluster 2026-03-10T10:36:30.661673+0000 mgr.y (mgr.24422) 855 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:31 vm04 bash[20742]: cluster 2026-03-10T10:36:30.661673+0000 mgr.y (mgr.24422) 855 : cluster [DBG] pgmap v1328: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:36:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:36:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:36:34.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:33 vm07 bash[23367]: cluster 2026-03-10T10:36:32.662040+0000 mgr.y (mgr.24422) 856 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:34.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:33 vm04 bash[28289]: cluster 2026-03-10T10:36:32.662040+0000 mgr.y (mgr.24422) 856 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:33 vm04 bash[20742]: cluster 2026-03-10T10:36:32.662040+0000 mgr.y (mgr.24422) 856 : cluster [DBG] pgmap v1329: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:36.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:35 vm07 bash[23367]: cluster 2026-03-10T10:36:34.662720+0000 mgr.y (mgr.24422) 857 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:35 vm04 bash[28289]: cluster 2026-03-10T10:36:34.662720+0000 mgr.y (mgr.24422) 857 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:35 vm04 bash[20742]: cluster 2026-03-10T10:36:34.662720+0000 mgr.y (mgr.24422) 857 : cluster [DBG] pgmap v1330: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:37.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:36 vm04 bash[28289]: audit 2026-03-10T10:36:36.612670+0000 mon.a (mon.0) 3683 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:36:37.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:36 vm04 bash[20742]: audit 2026-03-10T10:36:36.612670+0000 mon.a (mon.0) 3683 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:36:37.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:36 vm07 bash[23367]: audit 2026-03-10T10:36:36.612670+0000 mon.a (mon.0) 3683 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:36:38.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:37 vm04 bash[28289]: cluster 2026-03-10T10:36:36.663073+0000 mgr.y (mgr.24422) 858 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:38.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:37 vm04 bash[20742]: cluster 2026-03-10T10:36:36.663073+0000 mgr.y (mgr.24422) 858 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:37 vm07 bash[23367]: cluster 2026-03-10T10:36:36.663073+0000 mgr.y (mgr.24422) 858 : cluster [DBG] pgmap v1331: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:39.517 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:36:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:36:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:39 vm04 bash[28289]: cluster 2026-03-10T10:36:38.663494+0000 mgr.y (mgr.24422) 859 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:39 vm04 bash[28289]: audit 2026-03-10T10:36:39.219554+0000 mgr.y (mgr.24422) 860 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:39 vm04 bash[20742]: cluster 2026-03-10T10:36:38.663494+0000 mgr.y (mgr.24422) 859 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:39 vm04 bash[20742]: audit 2026-03-10T10:36:39.219554+0000 mgr.y (mgr.24422) 860 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:40.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:39 vm07 bash[23367]: cluster 2026-03-10T10:36:38.663494+0000 mgr.y (mgr.24422) 859 : cluster [DBG] pgmap v1332: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:39 vm07 bash[23367]: audit 2026-03-10T10:36:39.219554+0000 mgr.y (mgr.24422) 860 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:36:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:41 vm04 bash[28289]: cluster 2026-03-10T10:36:40.664178+0000 mgr.y (mgr.24422) 861 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:41 vm04 bash[28289]: audit 2026-03-10T10:36:41.794088+0000 mon.a (mon.0) 3684 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:41 vm04 bash[28289]: audit 2026-03-10T10:36:41.800772+0000 mon.a (mon.0) 3685 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:41 vm04 bash[20742]: cluster 2026-03-10T10:36:40.664178+0000 mgr.y (mgr.24422) 861 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:41 vm04 bash[20742]: audit 2026-03-10T10:36:41.794088+0000 mon.a (mon.0) 3684 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:41 vm04 bash[20742]: audit 2026-03-10T10:36:41.800772+0000 mon.a (mon.0) 3685 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:41 vm07 bash[23367]: cluster 2026-03-10T10:36:40.664178+0000 mgr.y (mgr.24422) 861 : cluster [DBG] pgmap v1333: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:41 vm07 bash[23367]: audit 2026-03-10T10:36:41.794088+0000 mon.a (mon.0) 3684 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:41 vm07 bash[23367]: audit 2026-03-10T10:36:41.800772+0000 mon.a (mon.0) 3685 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:43 vm04 bash[28289]: audit 2026-03-10T10:36:42.023247+0000 mon.a (mon.0) 3686 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:43 vm04 bash[28289]: audit 2026-03-10T10:36:42.030785+0000 mon.a (mon.0) 3687 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:43 vm04 bash[28289]: audit 2026-03-10T10:36:42.339981+0000 mon.a (mon.0) 3688 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:43 vm04 bash[28289]: audit 2026-03-10T10:36:42.340533+0000 mon.a (mon.0) 3689 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:43 vm04 bash[28289]: audit 2026-03-10T10:36:42.345422+0000 mon.a (mon.0) 3690 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:43 vm04 bash[20742]: audit 2026-03-10T10:36:42.023247+0000 mon.a (mon.0) 3686 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:43 vm04 bash[20742]: audit 2026-03-10T10:36:42.030785+0000 mon.a (mon.0) 3687 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:43 vm04 bash[20742]: audit 2026-03-10T10:36:42.339981+0000 mon.a (mon.0) 3688 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:43 vm04 bash[20742]: audit 2026-03-10T10:36:42.340533+0000 mon.a (mon.0) 3689 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:43 vm04 bash[20742]: audit 2026-03-10T10:36:42.345422+0000 mon.a (mon.0) 3690 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:36:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:36:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:36:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:43 vm07 bash[23367]: audit 2026-03-10T10:36:42.023247+0000 mon.a (mon.0) 3686 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:43 vm07 bash[23367]: audit 2026-03-10T10:36:42.030785+0000 mon.a (mon.0) 3687 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:43 vm07 bash[23367]: audit 2026-03-10T10:36:42.339981+0000 mon.a (mon.0) 3688 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:36:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:43 vm07 bash[23367]: audit 2026-03-10T10:36:42.340533+0000 mon.a (mon.0) 3689 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:36:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:43 vm07 bash[23367]: audit 2026-03-10T10:36:42.345422+0000 mon.a (mon.0) 3690 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:36:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:44 vm04 bash[28289]: cluster 2026-03-10T10:36:42.664564+0000 mgr.y (mgr.24422) 862 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:44 vm04 bash[28289]: audit 2026-03-10T10:36:43.375981+0000 mon.a (mon.0) 3691 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:44 vm04 bash[20742]: cluster 2026-03-10T10:36:42.664564+0000 mgr.y (mgr.24422) 862 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:44 vm04 bash[20742]: audit 2026-03-10T10:36:43.375981+0000 mon.a (mon.0) 3691 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:44 vm07 bash[23367]: cluster 2026-03-10T10:36:42.664564+0000 mgr.y (mgr.24422) 862 : cluster [DBG] pgmap v1334: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:44 vm07 bash[23367]: audit 2026-03-10T10:36:43.375981+0000 mon.a (mon.0) 3691 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:46.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:46 vm04 bash[28289]: cluster 2026-03-10T10:36:44.665406+0000 mgr.y (mgr.24422) 863 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:46 vm04 bash[20742]: cluster 2026-03-10T10:36:44.665406+0000 mgr.y (mgr.24422) 863 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:46.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:46 vm07 bash[23367]: cluster 2026-03-10T10:36:44.665406+0000 mgr.y (mgr.24422) 863 : cluster [DBG] pgmap v1335: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:48 vm04 bash[28289]: cluster 2026-03-10T10:36:46.665759+0000 mgr.y (mgr.24422) 864 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:48.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:48 vm04 bash[28289]: cluster 2026-03-10T10:36:46.665759+0000 mgr.y (mgr.24422) 864 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean;
455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:48 vm04 bash[20742]: cluster 2026-03-10T10:36:46.665759+0000 mgr.y (mgr.24422) 864 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:48 vm04 bash[20742]: cluster 2026-03-10T10:36:46.665759+0000 mgr.y (mgr.24422) 864 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:48.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:48 vm07 bash[23367]: cluster 2026-03-10T10:36:46.665759+0000 mgr.y (mgr.24422) 864 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:48.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:48 vm07 bash[23367]: cluster 2026-03-10T10:36:46.665759+0000 mgr.y (mgr.24422) 864 : cluster [DBG] pgmap v1336: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:49.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:36:49 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:36:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:50 vm04 bash[28289]: cluster 2026-03-10T10:36:48.666295+0000 mgr.y (mgr.24422) 865 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:50 vm04 bash[28289]: cluster 2026-03-10T10:36:48.666295+0000 mgr.y (mgr.24422) 865 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:50 vm04 bash[28289]: audit 2026-03-10T10:36:49.228098+0000 mgr.y (mgr.24422) 866 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:36:50.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:50 vm04 bash[28289]: audit 2026-03-10T10:36:49.228098+0000 mgr.y (mgr.24422) 866 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:36:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:50 vm04 bash[20742]: cluster 2026-03-10T10:36:48.666295+0000 mgr.y (mgr.24422) 865 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:50 vm04 bash[20742]: cluster 2026-03-10T10:36:48.666295+0000 mgr.y (mgr.24422) 865 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:50 vm04 bash[20742]: audit 2026-03-10T10:36:49.228098+0000 mgr.y (mgr.24422) 866 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:36:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:50 vm04 bash[20742]: audit 2026-03-10T10:36:49.228098+0000 mgr.y (mgr.24422) 866 : audit [DBG] 
from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:36:50.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:50 vm07 bash[23367]: cluster 2026-03-10T10:36:48.666295+0000 mgr.y (mgr.24422) 865 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:50.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:50 vm07 bash[23367]: cluster 2026-03-10T10:36:48.666295+0000 mgr.y (mgr.24422) 865 : cluster [DBG] pgmap v1337: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:50.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:50 vm07 bash[23367]: audit 2026-03-10T10:36:49.228098+0000 mgr.y (mgr.24422) 866 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:36:50.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:50 vm07 bash[23367]: audit 2026-03-10T10:36:49.228098+0000 mgr.y (mgr.24422) 866 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:36:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:52 vm04 bash[28289]: cluster 2026-03-10T10:36:50.666868+0000 mgr.y (mgr.24422) 867 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:52 vm04 bash[28289]: cluster 2026-03-10T10:36:50.666868+0000 mgr.y (mgr.24422) 867 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:52 vm04 bash[20742]: cluster 2026-03-10T10:36:50.666868+0000 mgr.y (mgr.24422) 867 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:52 vm04 bash[20742]: cluster 2026-03-10T10:36:50.666868+0000 mgr.y (mgr.24422) 867 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:52 vm07 bash[23367]: cluster 2026-03-10T10:36:50.666868+0000 mgr.y (mgr.24422) 867 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:52 vm07 bash[23367]: cluster 2026-03-10T10:36:50.666868+0000 mgr.y (mgr.24422) 867 : cluster [DBG] pgmap v1338: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:36:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:36:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:36:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:36:53.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:53 vm04 bash[28289]: cluster 2026-03-10T10:36:52.667420+0000 mgr.y (mgr.24422) 868 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:36:53.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 
2026-03-10T10:36:53.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:53 vm04 bash[20742]: cluster 2026-03-10T10:36:52.667420+0000 mgr.y (mgr.24422) 868 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:54.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:53 vm07 bash[23367]: cluster 2026-03-10T10:36:52.667420+0000 mgr.y (mgr.24422) 868 : cluster [DBG] pgmap v1339: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:55.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:55 vm04 bash[28289]: cluster 2026-03-10T10:36:54.668646+0000 mgr.y (mgr.24422) 869 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:55.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:55 vm04 bash[20742]: cluster 2026-03-10T10:36:54.668646+0000 mgr.y (mgr.24422) 869 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:56.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:55 vm07 bash[23367]: cluster 2026-03-10T10:36:54.668646+0000 mgr.y (mgr.24422) 869 : cluster [DBG] pgmap v1340: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:36:57.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:57 vm04 bash[28289]: cluster 2026-03-10T10:36:56.669009+0000 mgr.y (mgr.24422) 870 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:57.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:57 vm04 bash[20742]: cluster 2026-03-10T10:36:56.669009+0000 mgr.y (mgr.24422) 870 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:58.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:57 vm07 bash[23367]: cluster 2026-03-10T10:36:56.669009+0000 mgr.y (mgr.24422) 870 : cluster [DBG] pgmap v1341: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:36:58.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:58 vm04 bash[28289]: audit 2026-03-10T10:36:58.382996+0000 mon.a (mon.0) 3692 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:58.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:58 vm04 bash[20742]: audit 2026-03-10T10:36:58.382996+0000 mon.a (mon.0) 3692 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:59.016 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:58 vm07 bash[23367]: audit 2026-03-10T10:36:58.382996+0000 mon.a (mon.0) 3692 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:36:59.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:36:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:37:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:59 vm07 bash[23367]: cluster 2026-03-10T10:36:58.669705+0000 mgr.y (mgr.24422) 871 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:36:59 vm07 bash[23367]: audit 2026-03-10T10:36:59.238858+0000 mgr.y (mgr.24422) 872 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:59 vm04 bash[28289]: cluster 2026-03-10T10:36:58.669705+0000 mgr.y (mgr.24422) 871 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:36:59 vm04 bash[28289]: audit 2026-03-10T10:36:59.238858+0000 mgr.y (mgr.24422) 872 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:59 vm04 bash[20742]: cluster 2026-03-10T10:36:58.669705+0000 mgr.y (mgr.24422) 871 : cluster [DBG] pgmap v1342: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:36:59 vm04 bash[20742]: audit 2026-03-10T10:36:59.238858+0000 mgr.y (mgr.24422) 872 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:02.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:01 vm07 bash[23367]: cluster 2026-03-10T10:37:00.670256+0000 mgr.y (mgr.24422) 873 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:02.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:01 vm04 bash[28289]: cluster 2026-03-10T10:37:00.670256+0000 mgr.y (mgr.24422) 873 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:01 vm04 bash[20742]: cluster 2026-03-10T10:37:00.670256+0000 mgr.y (mgr.24422) 873 : cluster [DBG] pgmap v1343: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:37:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:37:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:37:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:03 vm07 bash[23367]: cluster 2026-03-10T10:37:02.670611+0000 mgr.y (mgr.24422) 874 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:03 vm04 bash[28289]: cluster 2026-03-10T10:37:02.670611+0000 mgr.y (mgr.24422) 874 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:03 vm04 bash[20742]: cluster 2026-03-10T10:37:02.670611+0000 mgr.y (mgr.24422) 874 : cluster [DBG] pgmap v1344: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:06.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:05 vm07 bash[23367]: cluster 2026-03-10T10:37:04.671375+0000 mgr.y (mgr.24422) 875 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:05 vm04 bash[28289]: cluster 2026-03-10T10:37:04.671375+0000 mgr.y (mgr.24422) 875 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:05 vm04 bash[20742]: cluster 2026-03-10T10:37:04.671375+0000 mgr.y (mgr.24422) 875 : cluster [DBG] pgmap v1345: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:08.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:07 vm07 bash[23367]: cluster 2026-03-10T10:37:06.671706+0000 mgr.y (mgr.24422) 876 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:37:08.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:07 vm04 bash[28289]: cluster 2026-03-10T10:37:06.671706+0000 mgr.y (mgr.24422) 876 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:37:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:07 vm04 bash[20742]: cluster 2026-03-10T10:37:06.671706+0000 mgr.y (mgr.24422) 876 : cluster [DBG] pgmap v1346: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:37:09.517 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:37:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:37:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:09 vm07 bash[23367]: cluster 2026-03-10T10:37:08.672173+0000 mgr.y (mgr.24422) 877 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:37:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:09 vm07 bash[23367]: audit 2026-03-10T10:37:09.249612+0000 mgr.y (mgr.24422) 878 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:10.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:09 vm04 bash[28289]: cluster 2026-03-10T10:37:08.672173+0000 mgr.y (mgr.24422) 877 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:37:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:09 vm04 bash[28289]: audit 2026-03-10T10:37:09.249612+0000 mgr.y (mgr.24422) 878 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:09 vm04 bash[20742]: cluster 2026-03-10T10:37:08.672173+0000 mgr.y (mgr.24422) 877 : cluster [DBG] pgmap v1347: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:37:10.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:09 vm04 bash[20742]: audit 2026-03-10T10:37:09.249612+0000 mgr.y (mgr.24422) 878 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:12.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:11 vm07 bash[23367]: cluster 2026-03-10T10:37:10.672717+0000 mgr.y (mgr.24422) 879 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:12.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:11 vm04 bash[28289]: cluster 2026-03-10T10:37:10.672717+0000 mgr.y (mgr.24422) 879 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:12.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:11 vm04 bash[20742]: cluster 2026-03-10T10:37:10.672717+0000 mgr.y (mgr.24422) 879 : cluster [DBG] pgmap v1348: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:37:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:37:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:37:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:13 vm07 bash[23367]: cluster 2026-03-10T10:37:12.673012+0000 mgr.y (mgr.24422) 880 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:37:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:13 vm07 bash[23367]: audit 2026-03-10T10:37:13.391054+0000 mon.a (mon.0) 3693 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:37:14.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:13 vm04 bash[28289]: cluster 2026-03-10T10:37:12.673012+0000 mgr.y (mgr.24422) 880 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:37:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:13 vm04 bash[28289]: audit 2026-03-10T10:37:13.391054+0000 mon.a (mon.0) 3693 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:37:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:13 vm04 bash[20742]: cluster 2026-03-10T10:37:12.673012+0000 mgr.y (mgr.24422) 880 : cluster [DBG] pgmap v1349: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:37:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:13 vm04 bash[20742]: audit 2026-03-10T10:37:13.391054+0000 mon.a (mon.0) 3693 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:37:16.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:15 vm07 bash[23367]: cluster 2026-03-10T10:37:14.673686+0000 mgr.y (mgr.24422) 881 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:15 vm04 bash[28289]: cluster 2026-03-10T10:37:14.673686+0000 mgr.y (mgr.24422) 881 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:15 vm04 bash[20742]: cluster 2026-03-10T10:37:14.673686+0000 mgr.y (mgr.24422) 881 : cluster [DBG] pgmap v1350: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:18.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:17 vm04 bash[28289]: cluster 2026-03-10T10:37:16.674015+0000 mgr.y (mgr.24422) 882 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:17 vm04 bash[20742]: cluster 2026-03-10T10:37:16.674015+0000 mgr.y (mgr.24422) 882 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:17 vm07 bash[23367]: cluster 2026-03-10T10:37:16.674015+0000 mgr.y (mgr.24422) 882 : cluster [DBG] pgmap v1351: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:19.517 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:37:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:37:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:19 vm04 bash[28289]: cluster 2026-03-10T10:37:18.674449+0000 mgr.y (mgr.24422) 883 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:19 vm04 bash[28289]: audit 2026-03-10T10:37:19.254611+0000 mgr.y (mgr.24422) 884 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:19 vm04 bash[20742]: cluster 2026-03-10T10:37:18.674449+0000 mgr.y (mgr.24422) 883 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:19 vm04 bash[20742]: audit 2026-03-10T10:37:19.254611+0000 mgr.y (mgr.24422) 884 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:20.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:19 vm07 bash[23367]: cluster 2026-03-10T10:37:18.674449+0000 mgr.y (mgr.24422) 883 : cluster [DBG] pgmap v1352: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:20.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:19 vm07 bash[23367]: audit 2026-03-10T10:37:19.254611+0000 mgr.y (mgr.24422) 884 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:22.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:21 vm04 bash[28289]: cluster 2026-03-10T10:37:20.675004+0000 mgr.y (mgr.24422) 885 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:21 vm04 bash[20742]: cluster 2026-03-10T10:37:20.675004+0000 mgr.y (mgr.24422) 885 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:22.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:21 vm07 bash[23367]: cluster 2026-03-10T10:37:20.675004+0000 mgr.y (mgr.24422) 885 : cluster [DBG] pgmap v1353: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:37:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:37:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:37:24.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:23 vm04 bash[28289]: cluster 2026-03-10T10:37:22.675309+0000 mgr.y (mgr.24422) 886 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:23 vm04 bash[20742]: cluster 2026-03-10T10:37:22.675309+0000 mgr.y (mgr.24422) 886 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:23 vm07 bash[23367]: cluster 2026-03-10T10:37:22.675309+0000 mgr.y (mgr.24422) 886 : cluster [DBG] pgmap v1354: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:26.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:25 vm04 bash[28289]: cluster 2026-03-10T10:37:24.676177+0000 mgr.y (mgr.24422) 887 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:25 vm04 bash[20742]: cluster 2026-03-10T10:37:24.676177+0000 mgr.y (mgr.24422) 887 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:25 vm07 bash[23367]: cluster 2026-03-10T10:37:24.676177+0000 mgr.y (mgr.24422) 887 : cluster [DBG] pgmap v1355: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:28.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:27 vm04 bash[28289]: cluster 2026-03-10T10:37:26.676521+0000 mgr.y (mgr.24422) 888 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:27 vm04 bash[20742]: cluster 2026-03-10T10:37:26.676521+0000 mgr.y (mgr.24422) 888 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:28.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:27 vm07 bash[23367]: cluster 2026-03-10T10:37:26.676521+0000 mgr.y (mgr.24422) 888 : cluster [DBG] pgmap v1356: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:29.258 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:28 vm07 bash[23367]: audit 2026-03-10T10:37:28.396914+0000 mon.a (mon.0) 3694 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:37:29.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:28 vm04 bash[28289]: audit 2026-03-10T10:37:28.396914+0000 mon.a (mon.0) 3694 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:37:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:28 vm04 bash[20742]: audit 2026-03-10T10:37:28.396914+0000 mon.a (mon.0) 3694 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:37:29.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:37:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:37:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:29 vm07 bash[23367]: cluster 2026-03-10T10:37:28.677056+0000 mgr.y (mgr.24422) 889 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:29 vm07 bash[23367]: audit 2026-03-10T10:37:29.259494+0000 mgr.y (mgr.24422) 890 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:30.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:29 vm04 bash[28289]: cluster 2026-03-10T10:37:28.677056+0000 mgr.y (mgr.24422) 889 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:30.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:29 vm04 bash[28289]: audit 2026-03-10T10:37:29.259494+0000 mgr.y (mgr.24422) 890 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:29 vm04 bash[20742]: cluster 2026-03-10T10:37:28.677056+0000 mgr.y (mgr.24422) 889 : cluster [DBG] pgmap v1357: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:30.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:29 vm04 bash[20742]: audit 2026-03-10T10:37:29.259494+0000 mgr.y (mgr.24422) 890 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:31 vm07 bash[23367]: cluster 2026-03-10T10:37:30.677597+0000 mgr.y (mgr.24422) 891 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:32.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:31 vm04 bash[28289]: cluster 2026-03-10T10:37:30.677597+0000 mgr.y (mgr.24422) 891 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:32.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:31 vm04 bash[20742]: cluster 2026-03-10T10:37:30.677597+0000 mgr.y (mgr.24422) 891 : cluster [DBG] pgmap v1358: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:37:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:37:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:37:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:33 vm07 bash[23367]: cluster 2026-03-10T10:37:32.678070+0000 mgr.y (mgr.24422) 892 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:34.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:33 vm04 bash[28289]: cluster 2026-03-10T10:37:32.678070+0000 mgr.y (mgr.24422) 892 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:33 vm04 bash[20742]: cluster 2026-03-10T10:37:32.678070+0000 mgr.y (mgr.24422) 892 : cluster [DBG] pgmap v1359: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:36.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:36 vm04 bash[28289]: cluster 2026-03-10T10:37:34.678849+0000 mgr.y (mgr.24422) 893 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:36.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:36 vm04 bash[20742]: cluster 2026-03-10T10:37:34.678849+0000 mgr.y (mgr.24422) 893 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:36.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:36 vm07 bash[23367]: cluster 2026-03-10T10:37:34.678849+0000 mgr.y (mgr.24422) 893 : cluster [DBG] pgmap v1360: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:37:38.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:38 vm04 bash[28289]: cluster 2026-03-10T10:37:36.679256+0000 mgr.y (mgr.24422) 894 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:38.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:38 vm04 bash[20742]: cluster 2026-03-10T10:37:36.679256+0000 mgr.y (mgr.24422) 894 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:38.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:38 vm07 bash[23367]: cluster 2026-03-10T10:37:36.679256+0000 mgr.y (mgr.24422) 894 : cluster [DBG] pgmap v1361: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:39.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:37:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:37:40.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:40 vm04 bash[28289]: cluster 2026-03-10T10:37:38.679786+0000 mgr.y (mgr.24422) 895 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:40.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:40 vm04 bash[28289]: audit 2026-03-10T10:37:39.270339+0000 mgr.y (mgr.24422) 896 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:40 vm04 bash[20742]: cluster 2026-03-10T10:37:38.679786+0000 mgr.y (mgr.24422) 895 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:40 vm04 bash[20742]: audit 2026-03-10T10:37:39.270339+0000 mgr.y (mgr.24422) 896 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:40 vm07 bash[23367]: cluster 2026-03-10T10:37:38.679786+0000 mgr.y (mgr.24422) 895 : cluster [DBG] pgmap v1362: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:37:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:40 vm07 bash[23367]: audit 2026-03-10T10:37:39.270339+0000 mgr.y (mgr.24422) 896 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:37:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:42 vm04 bash[20742]: cluster 2026-03-10T10:37:40.680308+0000 mgr.y (mgr.24422) 897 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:42 vm04 bash[20742]: cluster 2026-03-10T10:37:40.680308+0000 mgr.y (mgr.24422) 897 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:42 vm04 bash[28289]: cluster 2026-03-10T10:37:40.680308+0000 mgr.y (mgr.24422) 897 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:42 vm04 bash[28289]: cluster 2026-03-10T10:37:40.680308+0000 mgr.y (mgr.24422) 897 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:42.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:42 vm07 bash[23367]: cluster 2026-03-10T10:37:40.680308+0000 mgr.y (mgr.24422) 897 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:42.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:42 vm07 bash[23367]: cluster 2026-03-10T10:37:40.680308+0000 mgr.y (mgr.24422) 897 : cluster [DBG] pgmap v1363: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:43 vm04 bash[28289]: audit 2026-03-10T10:37:42.387856+0000 mon.a (mon.0) 3695 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:43 vm04 bash[28289]: audit 2026-03-10T10:37:42.387856+0000 mon.a (mon.0) 3695 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:43 vm04 bash[28289]: audit 2026-03-10T10:37:42.734412+0000 mon.a (mon.0) 3696 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:43 vm04 bash[28289]: audit 2026-03-10T10:37:42.734412+0000 mon.a (mon.0) 3696 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:43 vm04 bash[28289]: audit 2026-03-10T10:37:42.734985+0000 mon.a (mon.0) 3697 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:43 vm04 bash[28289]: audit 2026-03-10T10:37:42.734985+0000 mon.a (mon.0) 3697 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:43 vm04 bash[28289]: audit 2026-03-10T10:37:42.740180+0000 mon.a (mon.0) 3698 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:43 vm04 bash[28289]: audit 
2026-03-10T10:37:42.740180+0000 mon.a (mon.0) 3698 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:43 vm04 bash[20742]: audit 2026-03-10T10:37:42.387856+0000 mon.a (mon.0) 3695 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:43 vm04 bash[20742]: audit 2026-03-10T10:37:42.387856+0000 mon.a (mon.0) 3695 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:43 vm04 bash[20742]: audit 2026-03-10T10:37:42.734412+0000 mon.a (mon.0) 3696 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:43 vm04 bash[20742]: audit 2026-03-10T10:37:42.734412+0000 mon.a (mon.0) 3696 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:43 vm04 bash[20742]: audit 2026-03-10T10:37:42.734985+0000 mon.a (mon.0) 3697 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:43 vm04 bash[20742]: audit 2026-03-10T10:37:42.734985+0000 mon.a (mon.0) 3697 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:43 vm04 bash[20742]: audit 2026-03-10T10:37:42.740180+0000 mon.a (mon.0) 3698 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:43 vm04 bash[20742]: audit 2026-03-10T10:37:42.740180+0000 mon.a (mon.0) 3698 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:37:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:37:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:37:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:37:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:43 vm07 bash[23367]: audit 2026-03-10T10:37:42.387856+0000 mon.a (mon.0) 3695 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:37:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:43 vm07 bash[23367]: audit 2026-03-10T10:37:42.387856+0000 mon.a (mon.0) 3695 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:37:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:43 vm07 bash[23367]: audit 2026-03-10T10:37:42.734412+0000 mon.a (mon.0) 3696 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:37:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:43 vm07 bash[23367]: audit 2026-03-10T10:37:42.734412+0000 mon.a (mon.0) 3696 : audit [DBG] 
from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:37:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:43 vm07 bash[23367]: audit 2026-03-10T10:37:42.734985+0000 mon.a (mon.0) 3697 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:37:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:43 vm07 bash[23367]: audit 2026-03-10T10:37:42.734985+0000 mon.a (mon.0) 3697 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:37:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:43 vm07 bash[23367]: audit 2026-03-10T10:37:42.740180+0000 mon.a (mon.0) 3698 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:37:43.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:43 vm07 bash[23367]: audit 2026-03-10T10:37:42.740180+0000 mon.a (mon.0) 3698 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:37:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:44 vm04 bash[28289]: cluster 2026-03-10T10:37:42.680614+0000 mgr.y (mgr.24422) 898 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:44 vm04 bash[28289]: cluster 2026-03-10T10:37:42.680614+0000 mgr.y (mgr.24422) 898 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:44 vm04 bash[28289]: audit 2026-03-10T10:37:43.403076+0000 mon.a (mon.0) 3699 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:44 vm04 bash[28289]: audit 2026-03-10T10:37:43.403076+0000 mon.a (mon.0) 3699 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:44 vm04 bash[20742]: cluster 2026-03-10T10:37:42.680614+0000 mgr.y (mgr.24422) 898 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:44 vm04 bash[20742]: cluster 2026-03-10T10:37:42.680614+0000 mgr.y (mgr.24422) 898 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:44 vm04 bash[20742]: audit 2026-03-10T10:37:43.403076+0000 mon.a (mon.0) 3699 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:44 vm04 bash[20742]: audit 2026-03-10T10:37:43.403076+0000 mon.a (mon.0) 3699 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
10:37:44 vm07 bash[23367]: cluster 2026-03-10T10:37:42.680614+0000 mgr.y (mgr.24422) 898 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:44 vm07 bash[23367]: cluster 2026-03-10T10:37:42.680614+0000 mgr.y (mgr.24422) 898 : cluster [DBG] pgmap v1364: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:44 vm07 bash[23367]: audit 2026-03-10T10:37:43.403076+0000 mon.a (mon.0) 3699 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:44 vm07 bash[23367]: audit 2026-03-10T10:37:43.403076+0000 mon.a (mon.0) 3699 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:46.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:46 vm04 bash[28289]: cluster 2026-03-10T10:37:44.681332+0000 mgr.y (mgr.24422) 899 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:46.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:46 vm04 bash[28289]: cluster 2026-03-10T10:37:44.681332+0000 mgr.y (mgr.24422) 899 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:46 vm04 bash[20742]: cluster 2026-03-10T10:37:44.681332+0000 mgr.y (mgr.24422) 899 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:46 vm04 bash[20742]: cluster 2026-03-10T10:37:44.681332+0000 mgr.y (mgr.24422) 899 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:46.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:46 vm07 bash[23367]: cluster 2026-03-10T10:37:44.681332+0000 mgr.y (mgr.24422) 899 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:46.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:46 vm07 bash[23367]: cluster 2026-03-10T10:37:44.681332+0000 mgr.y (mgr.24422) 899 : cluster [DBG] pgmap v1365: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:48 vm04 bash[28289]: cluster 2026-03-10T10:37:46.681859+0000 mgr.y (mgr.24422) 900 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:48.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:48 vm04 bash[28289]: cluster 2026-03-10T10:37:46.681859+0000 mgr.y (mgr.24422) 900 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:48 vm04 bash[20742]: cluster 2026-03-10T10:37:46.681859+0000 mgr.y (mgr.24422) 
900 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:48 vm04 bash[20742]: cluster 2026-03-10T10:37:46.681859+0000 mgr.y (mgr.24422) 900 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:48.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:48 vm07 bash[23367]: cluster 2026-03-10T10:37:46.681859+0000 mgr.y (mgr.24422) 900 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:48.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:48 vm07 bash[23367]: cluster 2026-03-10T10:37:46.681859+0000 mgr.y (mgr.24422) 900 : cluster [DBG] pgmap v1366: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:49.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:37:49 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:37:50.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:50 vm04 bash[28289]: cluster 2026-03-10T10:37:48.682627+0000 mgr.y (mgr.24422) 901 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:50.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:50 vm04 bash[28289]: cluster 2026-03-10T10:37:48.682627+0000 mgr.y (mgr.24422) 901 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:50.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:50 vm04 bash[28289]: audit 2026-03-10T10:37:49.280990+0000 mgr.y (mgr.24422) 902 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:37:50.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:50 vm04 bash[28289]: audit 2026-03-10T10:37:49.280990+0000 mgr.y (mgr.24422) 902 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:37:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:50 vm04 bash[20742]: cluster 2026-03-10T10:37:48.682627+0000 mgr.y (mgr.24422) 901 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:50 vm04 bash[20742]: cluster 2026-03-10T10:37:48.682627+0000 mgr.y (mgr.24422) 901 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:50 vm04 bash[20742]: audit 2026-03-10T10:37:49.280990+0000 mgr.y (mgr.24422) 902 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:37:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:50 vm04 bash[20742]: audit 2026-03-10T10:37:49.280990+0000 mgr.y (mgr.24422) 902 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:37:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:50 vm07 bash[23367]: cluster 
2026-03-10T10:37:48.682627+0000 mgr.y (mgr.24422) 901 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:50.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:50 vm07 bash[23367]: cluster 2026-03-10T10:37:48.682627+0000 mgr.y (mgr.24422) 901 : cluster [DBG] pgmap v1367: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:50.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:50 vm07 bash[23367]: audit 2026-03-10T10:37:49.280990+0000 mgr.y (mgr.24422) 902 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:37:50.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:50 vm07 bash[23367]: audit 2026-03-10T10:37:49.280990+0000 mgr.y (mgr.24422) 902 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:37:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:52 vm04 bash[28289]: cluster 2026-03-10T10:37:50.683378+0000 mgr.y (mgr.24422) 903 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:52 vm04 bash[28289]: cluster 2026-03-10T10:37:50.683378+0000 mgr.y (mgr.24422) 903 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:52 vm04 bash[20742]: cluster 2026-03-10T10:37:50.683378+0000 mgr.y (mgr.24422) 903 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:52.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:52 vm04 bash[20742]: cluster 2026-03-10T10:37:50.683378+0000 mgr.y (mgr.24422) 903 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:52 vm07 bash[23367]: cluster 2026-03-10T10:37:50.683378+0000 mgr.y (mgr.24422) 903 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:52 vm07 bash[23367]: cluster 2026-03-10T10:37:50.683378+0000 mgr.y (mgr.24422) 903 : cluster [DBG] pgmap v1368: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:37:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:37:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:37:54.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:54 vm04 bash[28289]: cluster 2026-03-10T10:37:52.683796+0000 mgr.y (mgr.24422) 904 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:54.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:54 vm04 bash[28289]: cluster 2026-03-10T10:37:52.683796+0000 mgr.y (mgr.24422) 904 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:37:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:54 vm04 bash[20742]: cluster 2026-03-10T10:37:52.683796+0000 mgr.y (mgr.24422) 904 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:54 vm04 bash[20742]: cluster 2026-03-10T10:37:52.683796+0000 mgr.y (mgr.24422) 904 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:54 vm07 bash[23367]: cluster 2026-03-10T10:37:52.683796+0000 mgr.y (mgr.24422) 904 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:54 vm07 bash[23367]: cluster 2026-03-10T10:37:52.683796+0000 mgr.y (mgr.24422) 904 : cluster [DBG] pgmap v1369: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:56 vm04 bash[28289]: cluster 2026-03-10T10:37:54.684617+0000 mgr.y (mgr.24422) 905 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:56 vm04 bash[28289]: cluster 2026-03-10T10:37:54.684617+0000 mgr.y (mgr.24422) 905 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:56.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:56 vm04 bash[20742]: cluster 2026-03-10T10:37:54.684617+0000 mgr.y (mgr.24422) 905 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:56.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:56 vm04 bash[20742]: cluster 2026-03-10T10:37:54.684617+0000 mgr.y (mgr.24422) 905 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:56 vm07 bash[23367]: cluster 2026-03-10T10:37:54.684617+0000 mgr.y (mgr.24422) 905 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:56 vm07 bash[23367]: cluster 2026-03-10T10:37:54.684617+0000 mgr.y (mgr.24422) 905 : cluster [DBG] pgmap v1370: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:37:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:58 vm04 bash[28289]: cluster 2026-03-10T10:37:56.685073+0000 mgr.y (mgr.24422) 906 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:58 vm04 bash[28289]: cluster 2026-03-10T10:37:56.685073+0000 mgr.y (mgr.24422) 906 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:58 vm04 bash[20742]: 
cluster 2026-03-10T10:37:56.685073+0000 mgr.y (mgr.24422) 906 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:58 vm04 bash[20742]: cluster 2026-03-10T10:37:56.685073+0000 mgr.y (mgr.24422) 906 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:58 vm07 bash[23367]: cluster 2026-03-10T10:37:56.685073+0000 mgr.y (mgr.24422) 906 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:58 vm07 bash[23367]: cluster 2026-03-10T10:37:56.685073+0000 mgr.y (mgr.24422) 906 : cluster [DBG] pgmap v1371: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:37:59.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:59 vm04 bash[28289]: audit 2026-03-10T10:37:58.409815+0000 mon.a (mon.0) 3700 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:59.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:37:59 vm04 bash[28289]: audit 2026-03-10T10:37:58.409815+0000 mon.a (mon.0) 3700 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:59 vm04 bash[20742]: audit 2026-03-10T10:37:58.409815+0000 mon.a (mon.0) 3700 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:37:59 vm04 bash[20742]: audit 2026-03-10T10:37:58.409815+0000 mon.a (mon.0) 3700 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:59.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:59 vm07 bash[23367]: audit 2026-03-10T10:37:58.409815+0000 mon.a (mon.0) 3700 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:59.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:37:59 vm07 bash[23367]: audit 2026-03-10T10:37:58.409815+0000 mon.a (mon.0) 3700 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:37:59.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:37:59 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:38:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:00 vm04 bash[28289]: cluster 2026-03-10T10:37:58.685854+0000 mgr.y (mgr.24422) 907 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:00 vm04 bash[28289]: cluster 2026-03-10T10:37:58.685854+0000 mgr.y (mgr.24422) 907 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:38:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:00 vm04 bash[28289]: audit 2026-03-10T10:37:59.290426+0000 mgr.y (mgr.24422) 908 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:00 vm04 bash[28289]: audit 2026-03-10T10:37:59.290426+0000 mgr.y (mgr.24422) 908 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:00 vm04 bash[20742]: cluster 2026-03-10T10:37:58.685854+0000 mgr.y (mgr.24422) 907 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:00 vm04 bash[20742]: cluster 2026-03-10T10:37:58.685854+0000 mgr.y (mgr.24422) 907 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:00 vm04 bash[20742]: audit 2026-03-10T10:37:59.290426+0000 mgr.y (mgr.24422) 908 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:00 vm04 bash[20742]: audit 2026-03-10T10:37:59.290426+0000 mgr.y (mgr.24422) 908 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:00 vm07 bash[23367]: cluster 2026-03-10T10:37:58.685854+0000 mgr.y (mgr.24422) 907 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:00 vm07 bash[23367]: cluster 2026-03-10T10:37:58.685854+0000 mgr.y (mgr.24422) 907 : cluster [DBG] pgmap v1372: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:00 vm07 bash[23367]: audit 2026-03-10T10:37:59.290426+0000 mgr.y (mgr.24422) 908 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:00 vm07 bash[23367]: audit 2026-03-10T10:37:59.290426+0000 mgr.y (mgr.24422) 908 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:02 vm04 bash[28289]: cluster 2026-03-10T10:38:00.687329+0000 mgr.y (mgr.24422) 909 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:02.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:02 vm04 bash[28289]: cluster 2026-03-10T10:38:00.687329+0000 mgr.y (mgr.24422) 909 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:02 vm04 bash[20742]: cluster 
2026-03-10T10:38:00.687329+0000 mgr.y (mgr.24422) 909 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:02 vm04 bash[20742]: cluster 2026-03-10T10:38:00.687329+0000 mgr.y (mgr.24422) 909 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:02 vm07 bash[23367]: cluster 2026-03-10T10:38:00.687329+0000 mgr.y (mgr.24422) 909 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:02 vm07 bash[23367]: cluster 2026-03-10T10:38:00.687329+0000 mgr.y (mgr.24422) 909 : cluster [DBG] pgmap v1373: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:38:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:38:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:38:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:04 vm04 bash[28289]: cluster 2026-03-10T10:38:02.687717+0000 mgr.y (mgr.24422) 910 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:04.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:04 vm04 bash[28289]: cluster 2026-03-10T10:38:02.687717+0000 mgr.y (mgr.24422) 910 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:04 vm04 bash[20742]: cluster 2026-03-10T10:38:02.687717+0000 mgr.y (mgr.24422) 910 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:04.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:04 vm04 bash[20742]: cluster 2026-03-10T10:38:02.687717+0000 mgr.y (mgr.24422) 910 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:04 vm07 bash[23367]: cluster 2026-03-10T10:38:02.687717+0000 mgr.y (mgr.24422) 910 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:04 vm07 bash[23367]: cluster 2026-03-10T10:38:02.687717+0000 mgr.y (mgr.24422) 910 : cluster [DBG] pgmap v1374: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:06.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:06 vm04 bash[28289]: cluster 2026-03-10T10:38:04.688662+0000 mgr.y (mgr.24422) 911 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:06.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:06 vm04 bash[28289]: cluster 2026-03-10T10:38:04.688662+0000 mgr.y (mgr.24422) 911 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T10:38:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:06 vm04 bash[20742]: cluster 2026-03-10T10:38:04.688662+0000 mgr.y (mgr.24422) 911 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:06 vm04 bash[20742]: cluster 2026-03-10T10:38:04.688662+0000 mgr.y (mgr.24422) 911 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:06.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:06 vm07 bash[23367]: cluster 2026-03-10T10:38:04.688662+0000 mgr.y (mgr.24422) 911 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:06.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:06 vm07 bash[23367]: cluster 2026-03-10T10:38:04.688662+0000 mgr.y (mgr.24422) 911 : cluster [DBG] pgmap v1375: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:08.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:08 vm04 bash[28289]: cluster 2026-03-10T10:38:06.688996+0000 mgr.y (mgr.24422) 912 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T10:38:08.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:08 vm04 bash[28289]: cluster 2026-03-10T10:38:06.688996+0000 mgr.y (mgr.24422) 912 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T10:38:08.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:08 vm04 bash[20742]: cluster 2026-03-10T10:38:06.688996+0000 mgr.y (mgr.24422) 912 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T10:38:08.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:08 vm04 bash[20742]: cluster 2026-03-10T10:38:06.688996+0000 mgr.y (mgr.24422) 912 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T10:38:08.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:08 vm07 bash[23367]: cluster 2026-03-10T10:38:06.688996+0000 mgr.y (mgr.24422) 912 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T10:38:08.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:08 vm07 bash[23367]: cluster 2026-03-10T10:38:06.688996+0000 mgr.y (mgr.24422) 912 : cluster [DBG] pgmap v1376: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T10:38:09.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:38:09 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:38:10.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:10 vm04 bash[28289]: cluster 2026-03-10T10:38:08.689725+0000 mgr.y (mgr.24422) 913 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:10 vm04 bash[28289]: cluster 2026-03-10T10:38:08.689725+0000 mgr.y (mgr.24422) 913 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB 
used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:10 vm04 bash[28289]: audit 2026-03-10T10:38:09.296084+0000 mgr.y (mgr.24422) 914 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:10 vm04 bash[28289]: audit 2026-03-10T10:38:09.296084+0000 mgr.y (mgr.24422) 914 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:10 vm04 bash[20742]: cluster 2026-03-10T10:38:08.689725+0000 mgr.y (mgr.24422) 913 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:10 vm04 bash[20742]: cluster 2026-03-10T10:38:08.689725+0000 mgr.y (mgr.24422) 913 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:10 vm04 bash[20742]: audit 2026-03-10T10:38:09.296084+0000 mgr.y (mgr.24422) 914 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:10 vm04 bash[20742]: audit 2026-03-10T10:38:09.296084+0000 mgr.y (mgr.24422) 914 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:10 vm07 bash[23367]: cluster 2026-03-10T10:38:08.689725+0000 mgr.y (mgr.24422) 913 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:10 vm07 bash[23367]: cluster 2026-03-10T10:38:08.689725+0000 mgr.y (mgr.24422) 913 : cluster [DBG] pgmap v1377: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:10 vm07 bash[23367]: audit 2026-03-10T10:38:09.296084+0000 mgr.y (mgr.24422) 914 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:10 vm07 bash[23367]: audit 2026-03-10T10:38:09.296084+0000 mgr.y (mgr.24422) 914 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:38:12.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:12 vm04 bash[28289]: cluster 2026-03-10T10:38:10.690269+0000 mgr.y (mgr.24422) 915 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:12.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:12 vm04 bash[28289]: cluster 2026-03-10T10:38:10.690269+0000 mgr.y (mgr.24422) 915 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:12.703 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:12 vm04 bash[20742]: cluster 2026-03-10T10:38:10.690269+0000 mgr.y (mgr.24422) 915 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:12.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:12 vm04 bash[20742]: cluster 2026-03-10T10:38:10.690269+0000 mgr.y (mgr.24422) 915 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:12.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:12 vm07 bash[23367]: cluster 2026-03-10T10:38:10.690269+0000 mgr.y (mgr.24422) 915 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:12.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:12 vm07 bash[23367]: cluster 2026-03-10T10:38:10.690269+0000 mgr.y (mgr.24422) 915 : cluster [DBG] pgmap v1378: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:38:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:38:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:38:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:38:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:14 vm04 bash[28289]: cluster 2026-03-10T10:38:12.690618+0000 mgr.y (mgr.24422) 916 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:14 vm04 bash[28289]: cluster 2026-03-10T10:38:12.690618+0000 mgr.y (mgr.24422) 916 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:14 vm04 bash[28289]: audit 2026-03-10T10:38:13.417348+0000 mon.a (mon.0) 3701 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:38:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:14 vm04 bash[28289]: audit 2026-03-10T10:38:13.417348+0000 mon.a (mon.0) 3701 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:38:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:14 vm04 bash[20742]: cluster 2026-03-10T10:38:12.690618+0000 mgr.y (mgr.24422) 916 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:14 vm04 bash[20742]: cluster 2026-03-10T10:38:12.690618+0000 mgr.y (mgr.24422) 916 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:38:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:14 vm04 bash[20742]: audit 2026-03-10T10:38:13.417348+0000 mon.a (mon.0) 3701 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:38:14.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:14 vm04 bash[20742]: audit 2026-03-10T10:38:13.417348+0000 mon.a (mon.0) 3701 : audit [DBG] from='mgr.24422 
192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:14 vm07 bash[23367]: cluster 2026-03-10T10:38:12.690618+0000 mgr.y (mgr.24422) 916 : cluster [DBG] pgmap v1379: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:14 vm07 bash[23367]: audit 2026-03-10T10:38:13.417348+0000 mon.a (mon.0) 3701 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:16.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:16 vm04 bash[28289]: cluster 2026-03-10T10:38:14.691406+0000 mgr.y (mgr.24422) 917 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:16 vm04 bash[20742]: cluster 2026-03-10T10:38:14.691406+0000 mgr.y (mgr.24422) 917 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:16.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:16 vm07 bash[23367]: cluster 2026-03-10T10:38:14.691406+0000 mgr.y (mgr.24422) 917 : cluster [DBG] pgmap v1380: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:18.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:18 vm04 bash[28289]: cluster 2026-03-10T10:38:16.691769+0000 mgr.y (mgr.24422) 918 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:18 vm04 bash[20742]: cluster 2026-03-10T10:38:16.691769+0000 mgr.y (mgr.24422) 918 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:18 vm07 bash[23367]: cluster 2026-03-10T10:38:16.691769+0000 mgr.y (mgr.24422) 918 : cluster [DBG] pgmap v1381: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:19.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:38:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:38:20.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:20 vm04 bash[28289]: cluster 2026-03-10T10:38:18.692386+0000 mgr.y (mgr.24422) 919 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:20 vm04 bash[28289]: audit 2026-03-10T10:38:19.298324+0000 mgr.y (mgr.24422) 920 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:20 vm04 bash[20742]: cluster 2026-03-10T10:38:18.692386+0000 mgr.y (mgr.24422) 919 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:20 vm04 bash[20742]: audit 2026-03-10T10:38:19.298324+0000 mgr.y (mgr.24422) 920 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:20 vm07 bash[23367]: cluster 2026-03-10T10:38:18.692386+0000 mgr.y (mgr.24422) 919 : cluster [DBG] pgmap v1382: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:20 vm07 bash[23367]: audit 2026-03-10T10:38:19.298324+0000 mgr.y (mgr.24422) 920 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:22.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:22 vm04 bash[28289]: cluster 2026-03-10T10:38:20.692964+0000 mgr.y (mgr.24422) 921 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:22 vm04 bash[20742]: cluster 2026-03-10T10:38:20.692964+0000 mgr.y (mgr.24422) 921 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:22 vm07 bash[23367]: cluster 2026-03-10T10:38:20.692964+0000 mgr.y (mgr.24422) 921 : cluster [DBG] pgmap v1383: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:38:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:38:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:38:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:24 vm04 bash[28289]: cluster 2026-03-10T10:38:22.693288+0000 mgr.y (mgr.24422) 922 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:24 vm04 bash[20742]: cluster 2026-03-10T10:38:22.693288+0000 mgr.y (mgr.24422) 922 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:24 vm07 bash[23367]: cluster 2026-03-10T10:38:22.693288+0000 mgr.y (mgr.24422) 922 : cluster [DBG] pgmap v1384: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:25.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:25 vm07 bash[23367]: cluster 2026-03-10T10:38:24.694015+0000 mgr.y (mgr.24422) 923 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:25.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:25 vm04 bash[28289]: cluster 2026-03-10T10:38:24.694015+0000 mgr.y (mgr.24422) 923 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:25.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:25 vm04 bash[20742]: cluster 2026-03-10T10:38:24.694015+0000 mgr.y (mgr.24422) 923 : cluster [DBG] pgmap v1385: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:28.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:27 vm07 bash[23367]: cluster 2026-03-10T10:38:26.694379+0000 mgr.y (mgr.24422) 924 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:28.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:27 vm04 bash[28289]: cluster 2026-03-10T10:38:26.694379+0000 mgr.y (mgr.24422) 924 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:27 vm04 bash[20742]: cluster 2026-03-10T10:38:26.694379+0000 mgr.y (mgr.24422) 924 : cluster [DBG] pgmap v1386: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:29.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:28 vm07 bash[23367]: audit 2026-03-10T10:38:28.424998+0000 mon.a (mon.0) 3702 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:29.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:28 vm04 bash[28289]: audit 2026-03-10T10:38:28.424998+0000 mon.a (mon.0) 3702 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:29.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:28 vm04 bash[20742]: audit 2026-03-10T10:38:28.424998+0000 mon.a (mon.0) 3702 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:29.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:38:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:38:30.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:29 vm07 bash[23367]: cluster 2026-03-10T10:38:28.695153+0000 mgr.y (mgr.24422) 925 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:30.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:29 vm07 bash[23367]: audit 2026-03-10T10:38:29.309082+0000 mgr.y (mgr.24422) 926 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:30.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:29 vm04 bash[28289]: cluster 2026-03-10T10:38:28.695153+0000 mgr.y (mgr.24422) 925 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:29 vm04 bash[28289]: audit 2026-03-10T10:38:29.309082+0000 mgr.y (mgr.24422) 926 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:29 vm04 bash[20742]: cluster 2026-03-10T10:38:28.695153+0000 mgr.y (mgr.24422) 925 : cluster [DBG] pgmap v1387: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:29 vm04 bash[20742]: audit 2026-03-10T10:38:29.309082+0000 mgr.y (mgr.24422) 926 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:32.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:31 vm04 bash[28289]: cluster 2026-03-10T10:38:30.695675+0000 mgr.y (mgr.24422) 927 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:31 vm04 bash[20742]: cluster 2026-03-10T10:38:30.695675+0000 mgr.y (mgr.24422) 927 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:31 vm07 bash[23367]: cluster 2026-03-10T10:38:30.695675+0000 mgr.y (mgr.24422) 927 : cluster [DBG] pgmap v1388: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:38:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:38:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:38:34.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:33 vm04 bash[28289]: cluster 2026-03-10T10:38:32.695970+0000 mgr.y (mgr.24422) 928 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:33 vm04 bash[20742]: cluster 2026-03-10T10:38:32.695970+0000 mgr.y (mgr.24422) 928 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:33 vm07 bash[23367]: cluster 2026-03-10T10:38:32.695970+0000 mgr.y (mgr.24422) 928 : cluster [DBG] pgmap v1389: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:36.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:35 vm04 bash[28289]: cluster 2026-03-10T10:38:34.696613+0000 mgr.y (mgr.24422) 929 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:35 vm04 bash[20742]: cluster 2026-03-10T10:38:34.696613+0000 mgr.y (mgr.24422) 929 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:35 vm07 bash[23367]: cluster 2026-03-10T10:38:34.696613+0000 mgr.y (mgr.24422) 929 : cluster [DBG] pgmap v1390: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:37 vm04 bash[28289]: cluster 2026-03-10T10:38:36.697013+0000 mgr.y (mgr.24422) 930 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:38.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:37 vm04 bash[20742]: cluster 2026-03-10T10:38:36.697013+0000 mgr.y (mgr.24422) 930 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:37 vm07 bash[23367]: cluster 2026-03-10T10:38:36.697013+0000 mgr.y (mgr.24422) 930 : cluster [DBG] pgmap v1391: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:39.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:38:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:38:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:39 vm04 bash[28289]: cluster 2026-03-10T10:38:38.697485+0000 mgr.y (mgr.24422) 931 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:40.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:39 vm04 bash[28289]: audit 2026-03-10T10:38:39.315857+0000 mgr.y (mgr.24422) 932 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:39 vm04 bash[20742]: cluster 2026-03-10T10:38:38.697485+0000 mgr.y (mgr.24422) 931 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:39 vm04 bash[20742]: audit 2026-03-10T10:38:39.315857+0000 mgr.y (mgr.24422) 932 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:39 vm07 bash[23367]: cluster 2026-03-10T10:38:38.697485+0000 mgr.y (mgr.24422) 931 : cluster [DBG] pgmap v1392: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:39 vm07 bash[23367]: audit 2026-03-10T10:38:39.315857+0000 mgr.y (mgr.24422) 932 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:42.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:41 vm04 bash[28289]: cluster 2026-03-10T10:38:40.698006+0000 mgr.y (mgr.24422) 933 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:41 vm04 bash[20742]: cluster 2026-03-10T10:38:40.698006+0000 mgr.y (mgr.24422) 933 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:41 vm07 bash[23367]: cluster 2026-03-10T10:38:40.698006+0000 mgr.y (mgr.24422) 933 : cluster [DBG] pgmap v1393: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:43.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:42 vm04 bash[28289]: audit 2026-03-10T10:38:42.782078+0000 mon.a (mon.0) 3703 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:38:43.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:42 vm04 bash[20742]: audit 2026-03-10T10:38:42.782078+0000 mon.a (mon.0) 3703 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:38:43.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:38:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:38:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:38:43.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:42 vm07 bash[23367]: audit 2026-03-10T10:38:42.782078+0000 mon.a (mon.0) 3703 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:38:44.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:43 vm04 bash[28289]: cluster 2026-03-10T10:38:42.698317+0000 mgr.y (mgr.24422) 934 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:43 vm04 bash[28289]: audit 2026-03-10T10:38:43.129292+0000 mon.a (mon.0) 3704 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:38:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:43 vm04 bash[28289]: audit 2026-03-10T10:38:43.129986+0000 mon.a (mon.0) 3705 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:38:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:43 vm04 bash[28289]: audit 2026-03-10T10:38:43.136295+0000 mon.a (mon.0) 3706 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:38:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:43 vm04 bash[28289]: audit 2026-03-10T10:38:43.431568+0000 mon.a (mon.0) 3707 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:43 vm04 bash[20742]: cluster 2026-03-10T10:38:42.698317+0000 mgr.y (mgr.24422) 934 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:43 vm04 bash[20742]: audit 2026-03-10T10:38:43.129292+0000 mon.a (mon.0) 3704 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:38:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:43 vm04 bash[20742]: audit 2026-03-10T10:38:43.129986+0000 mon.a (mon.0) 3705 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:38:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:43 vm04 bash[20742]: audit 2026-03-10T10:38:43.136295+0000 mon.a (mon.0) 3706 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:38:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:43 vm04 bash[20742]: audit 2026-03-10T10:38:43.431568+0000 mon.a (mon.0) 3707 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:43 vm07 bash[23367]: cluster 2026-03-10T10:38:42.698317+0000 mgr.y (mgr.24422) 934 : cluster [DBG] pgmap v1394: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:43 vm07 bash[23367]: audit 2026-03-10T10:38:43.129292+0000 mon.a (mon.0) 3704 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:38:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:43 vm07 bash[23367]: audit 2026-03-10T10:38:43.129986+0000 mon.a (mon.0) 3705 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:38:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:43 vm07 bash[23367]: audit 2026-03-10T10:38:43.136295+0000 mon.a (mon.0) 3706 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:38:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:43 vm07 bash[23367]: audit 2026-03-10T10:38:43.431568+0000 mon.a (mon.0) 3707 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:46.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:45 vm04 bash[28289]: cluster 2026-03-10T10:38:44.698915+0000 mgr.y (mgr.24422) 935 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:46.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:45 vm04 bash[20742]: cluster 2026-03-10T10:38:44.698915+0000 mgr.y (mgr.24422) 935 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:45 vm07 bash[23367]: cluster 2026-03-10T10:38:44.698915+0000 mgr.y (mgr.24422) 935 : cluster [DBG] pgmap v1395: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:47 vm04 bash[28289]: cluster 2026-03-10T10:38:46.699292+0000 mgr.y (mgr.24422) 936 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:47 vm04 bash[20742]: cluster 2026-03-10T10:38:46.699292+0000 mgr.y (mgr.24422) 936 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:48.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:47 vm07 bash[23367]: cluster 2026-03-10T10:38:46.699292+0000 mgr.y (mgr.24422) 936 : cluster [DBG] pgmap v1396: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:49.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:38:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:38:50.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:49 vm04 bash[28289]: cluster 2026-03-10T10:38:48.699779+0000 mgr.y (mgr.24422) 937 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:50.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:49 vm04 bash[28289]: audit 2026-03-10T10:38:49.321353+0000 mgr.y (mgr.24422) 938 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:49 vm04 bash[20742]: cluster 2026-03-10T10:38:48.699779+0000 mgr.y (mgr.24422) 937 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:49 vm04 bash[20742]: audit 2026-03-10T10:38:49.321353+0000 mgr.y (mgr.24422) 938 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:49 vm07 bash[23367]: cluster 2026-03-10T10:38:48.699779+0000 mgr.y (mgr.24422) 937 : cluster [DBG] pgmap v1397: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:49 vm07 bash[23367]: audit 2026-03-10T10:38:49.321353+0000 mgr.y (mgr.24422) 938 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:38:52.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:51 vm04 bash[28289]: cluster 2026-03-10T10:38:50.700285+0000 mgr.y (mgr.24422) 939 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:51 vm04 bash[20742]: cluster 2026-03-10T10:38:50.700285+0000 mgr.y (mgr.24422) 939 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:51 vm07 bash[23367]: cluster 2026-03-10T10:38:50.700285+0000 mgr.y (mgr.24422) 939 : cluster [DBG] pgmap v1398: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:38:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:38:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:38:54.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:53 vm04 bash[28289]: cluster 2026-03-10T10:38:52.700601+0000 mgr.y (mgr.24422) 940 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:53 vm04 bash[20742]: cluster 2026-03-10T10:38:52.700601+0000 mgr.y (mgr.24422) 940 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:53 vm07 bash[23367]: cluster 2026-03-10T10:38:52.700601+0000 mgr.y (mgr.24422) 940 : cluster [DBG] pgmap v1399: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:56.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:55 vm04 bash[28289]: cluster 2026-03-10T10:38:54.701253+0000 mgr.y (mgr.24422) 941 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:55 vm04 bash[20742]: cluster 2026-03-10T10:38:54.701253+0000 mgr.y (mgr.24422) 941 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:55 vm07 bash[23367]: cluster 2026-03-10T10:38:54.701253+0000 mgr.y (mgr.24422) 941 : cluster [DBG] pgmap v1400: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:38:58.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:57 vm04 bash[28289]: cluster 2026-03-10T10:38:56.701557+0000 mgr.y (mgr.24422) 942 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:57 vm04 bash[20742]: cluster 2026-03-10T10:38:56.701557+0000 mgr.y (mgr.24422) 942 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:57 vm07 bash[23367]: cluster 2026-03-10T10:38:56.701557+0000 mgr.y (mgr.24422) 942 : cluster [DBG] pgmap v1401: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:38:59.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:58 vm04 bash[28289]: audit 2026-03-10T10:38:58.438831+0000 mon.a (mon.0) 3708 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:58 vm04 bash[20742]: audit 2026-03-10T10:38:58.438831+0000 mon.a (mon.0) 3708 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:58 vm07 bash[23367]: audit 2026-03-10T10:38:58.438831+0000 mon.a (mon.0) 3708 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:38:59.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:38:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:39:00.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:59 vm04 bash[28289]: cluster 2026-03-10T10:38:58.701991+0000 mgr.y (mgr.24422) 943 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:00.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:38:59 vm04 bash[28289]: audit 2026-03-10T10:38:59.332025+0000 mgr.y (mgr.24422) 944 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:39:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:59 vm04 bash[20742]: cluster 2026-03-10T10:38:58.701991+0000 mgr.y (mgr.24422) 943 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:38:59 vm04 bash[20742]: audit 2026-03-10T10:38:59.332025+0000 mgr.y (mgr.24422) 944 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:39:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:59 vm07 bash[23367]: cluster 2026-03-10T10:38:58.701991+0000 mgr.y (mgr.24422) 943 : cluster [DBG] pgmap v1402: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:38:59 vm07 bash[23367]: audit 2026-03-10T10:38:59.332025+0000 mgr.y (mgr.24422) 944 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:39:02.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:01 vm04 bash[28289]: cluster 2026-03-10T10:39:00.702466+0000 mgr.y (mgr.24422) 945 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:01 vm04 bash[20742]: cluster 2026-03-10T10:39:00.702466+0000 mgr.y (mgr.24422) 945 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:01 vm07 bash[23367]: cluster 2026-03-10T10:39:00.702466+0000 mgr.y (mgr.24422) 945 : cluster [DBG] pgmap v1403: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:39:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:39:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:39:04.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:03 vm04 bash[28289]: cluster 2026-03-10T10:39:02.702738+0000 mgr.y (mgr.24422) 946 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:03 vm04 bash[20742]: cluster 2026-03-10T10:39:02.702738+0000 mgr.y (mgr.24422) 946 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:03 vm07 bash[23367]: cluster 2026-03-10T10:39:02.702738+0000 mgr.y (mgr.24422) 946 : cluster [DBG] pgmap v1404: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:06.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:05 vm04 bash[28289]: cluster 2026-03-10T10:39:04.703385+0000 mgr.y (mgr.24422) 947 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:05 vm04 bash[20742]: cluster 2026-03-10T10:39:04.703385+0000 mgr.y (mgr.24422) 947 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:05 vm07 bash[23367]: cluster 2026-03-10T10:39:04.703385+0000 mgr.y (mgr.24422) 947 : cluster [DBG] pgmap v1405: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:08.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:07 vm04 bash[28289]: cluster 2026-03-10T10:39:06.703648+0000 mgr.y (mgr.24422) 948 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:07 vm04 bash[20742]: cluster 2026-03-10T10:39:06.703648+0000 mgr.y (mgr.24422) 948 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:08.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:07 vm07 bash[23367]: cluster 2026-03-10T10:39:06.703648+0000 mgr.y (mgr.24422) 948 : cluster [DBG] pgmap v1406: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:09.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:39:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:39:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:10 vm04 bash[28289]: cluster 2026-03-10T10:39:08.704133+0000 mgr.y (mgr.24422) 949 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:10 vm04 bash[28289]: audit 2026-03-10T10:39:09.342783+0000 mgr.y (mgr.24422) 950 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:39:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:10 vm04 bash[20742]: cluster 2026-03-10T10:39:08.704133+0000 mgr.y (mgr.24422) 949 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:10 vm04 bash[20742]: audit 2026-03-10T10:39:09.342783+0000 mgr.y (mgr.24422) 950 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:39:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:10 vm07 bash[23367]: cluster 2026-03-10T10:39:08.704133+0000 mgr.y (mgr.24422) 949 : cluster [DBG] pgmap v1407: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:10 vm07 bash[23367]: audit 2026-03-10T10:39:09.342783+0000 mgr.y (mgr.24422) 950 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:39:12.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:12 vm04 bash[28289]: cluster 2026-03-10T10:39:10.704681+0000 mgr.y (mgr.24422) 951 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:12 vm04 bash[20742]: cluster 2026-03-10T10:39:10.704681+0000 mgr.y (mgr.24422) 951 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean;
455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:12 vm04 bash[20742]: cluster 2026-03-10T10:39:10.704681+0000 mgr.y (mgr.24422) 951 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:12 vm07 bash[23367]: cluster 2026-03-10T10:39:10.704681+0000 mgr.y (mgr.24422) 951 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:12 vm07 bash[23367]: cluster 2026-03-10T10:39:10.704681+0000 mgr.y (mgr.24422) 951 : cluster [DBG] pgmap v1408: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:39:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:39:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:39:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:14 vm04 bash[28289]: cluster 2026-03-10T10:39:12.704953+0000 mgr.y (mgr.24422) 952 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:14 vm04 bash[28289]: cluster 2026-03-10T10:39:12.704953+0000 mgr.y (mgr.24422) 952 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:14 vm04 bash[28289]: audit 2026-03-10T10:39:13.444751+0000 mon.a (mon.0) 3709 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:14.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:14 vm04 bash[28289]: audit 2026-03-10T10:39:13.444751+0000 mon.a (mon.0) 3709 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:14 vm04 bash[20742]: cluster 2026-03-10T10:39:12.704953+0000 mgr.y (mgr.24422) 952 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:14 vm04 bash[20742]: cluster 2026-03-10T10:39:12.704953+0000 mgr.y (mgr.24422) 952 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:14 vm04 bash[20742]: audit 2026-03-10T10:39:13.444751+0000 mon.a (mon.0) 3709 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:14 vm04 bash[20742]: audit 2026-03-10T10:39:13.444751+0000 mon.a (mon.0) 3709 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:14.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:14 vm07 
bash[23367]: cluster 2026-03-10T10:39:12.704953+0000 mgr.y (mgr.24422) 952 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:14.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:14 vm07 bash[23367]: cluster 2026-03-10T10:39:12.704953+0000 mgr.y (mgr.24422) 952 : cluster [DBG] pgmap v1409: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:14.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:14 vm07 bash[23367]: audit 2026-03-10T10:39:13.444751+0000 mon.a (mon.0) 3709 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:14.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:14 vm07 bash[23367]: audit 2026-03-10T10:39:13.444751+0000 mon.a (mon.0) 3709 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:16.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:16 vm04 bash[28289]: cluster 2026-03-10T10:39:14.705612+0000 mgr.y (mgr.24422) 953 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:16.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:16 vm04 bash[28289]: cluster 2026-03-10T10:39:14.705612+0000 mgr.y (mgr.24422) 953 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:16 vm04 bash[20742]: cluster 2026-03-10T10:39:14.705612+0000 mgr.y (mgr.24422) 953 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:16.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:16 vm04 bash[20742]: cluster 2026-03-10T10:39:14.705612+0000 mgr.y (mgr.24422) 953 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:16.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:16 vm07 bash[23367]: cluster 2026-03-10T10:39:14.705612+0000 mgr.y (mgr.24422) 953 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:16.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:16 vm07 bash[23367]: cluster 2026-03-10T10:39:14.705612+0000 mgr.y (mgr.24422) 953 : cluster [DBG] pgmap v1410: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:18.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:18 vm04 bash[28289]: cluster 2026-03-10T10:39:16.705895+0000 mgr.y (mgr.24422) 954 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:18.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:18 vm04 bash[28289]: cluster 2026-03-10T10:39:16.705895+0000 mgr.y (mgr.24422) 954 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:18 vm04 bash[20742]: cluster 2026-03-10T10:39:16.705895+0000 mgr.y (mgr.24422) 954 : cluster 
[DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:18 vm04 bash[20742]: cluster 2026-03-10T10:39:16.705895+0000 mgr.y (mgr.24422) 954 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:18 vm07 bash[23367]: cluster 2026-03-10T10:39:16.705895+0000 mgr.y (mgr.24422) 954 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:18 vm07 bash[23367]: cluster 2026-03-10T10:39:16.705895+0000 mgr.y (mgr.24422) 954 : cluster [DBG] pgmap v1411: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:19.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:39:19 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:39:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:20 vm04 bash[28289]: cluster 2026-03-10T10:39:18.706496+0000 mgr.y (mgr.24422) 955 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:20 vm04 bash[28289]: cluster 2026-03-10T10:39:18.706496+0000 mgr.y (mgr.24422) 955 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:20 vm04 bash[28289]: audit 2026-03-10T10:39:19.353435+0000 mgr.y (mgr.24422) 956 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:20 vm04 bash[28289]: audit 2026-03-10T10:39:19.353435+0000 mgr.y (mgr.24422) 956 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:20 vm04 bash[20742]: cluster 2026-03-10T10:39:18.706496+0000 mgr.y (mgr.24422) 955 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:20 vm04 bash[20742]: cluster 2026-03-10T10:39:18.706496+0000 mgr.y (mgr.24422) 955 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:20 vm04 bash[20742]: audit 2026-03-10T10:39:19.353435+0000 mgr.y (mgr.24422) 956 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:20 vm04 bash[20742]: audit 2026-03-10T10:39:19.353435+0000 mgr.y (mgr.24422) 956 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:20 vm07 bash[23367]: cluster 
2026-03-10T10:39:18.706496+0000 mgr.y (mgr.24422) 955 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:20 vm07 bash[23367]: cluster 2026-03-10T10:39:18.706496+0000 mgr.y (mgr.24422) 955 : cluster [DBG] pgmap v1412: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:20 vm07 bash[23367]: audit 2026-03-10T10:39:19.353435+0000 mgr.y (mgr.24422) 956 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:20 vm07 bash[23367]: audit 2026-03-10T10:39:19.353435+0000 mgr.y (mgr.24422) 956 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:22.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:22 vm04 bash[28289]: cluster 2026-03-10T10:39:20.707023+0000 mgr.y (mgr.24422) 957 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:22.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:22 vm04 bash[28289]: cluster 2026-03-10T10:39:20.707023+0000 mgr.y (mgr.24422) 957 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:22 vm04 bash[20742]: cluster 2026-03-10T10:39:20.707023+0000 mgr.y (mgr.24422) 957 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:22 vm04 bash[20742]: cluster 2026-03-10T10:39:20.707023+0000 mgr.y (mgr.24422) 957 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:22 vm07 bash[23367]: cluster 2026-03-10T10:39:20.707023+0000 mgr.y (mgr.24422) 957 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:22 vm07 bash[23367]: cluster 2026-03-10T10:39:20.707023+0000 mgr.y (mgr.24422) 957 : cluster [DBG] pgmap v1413: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:39:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:39:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:39:24.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:24 vm04 bash[28289]: cluster 2026-03-10T10:39:22.707301+0000 mgr.y (mgr.24422) 958 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:24 vm04 bash[28289]: cluster 2026-03-10T10:39:22.707301+0000 mgr.y (mgr.24422) 958 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:39:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:24 vm04 bash[20742]: cluster 2026-03-10T10:39:22.707301+0000 mgr.y (mgr.24422) 958 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:24 vm04 bash[20742]: cluster 2026-03-10T10:39:22.707301+0000 mgr.y (mgr.24422) 958 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:24 vm07 bash[23367]: cluster 2026-03-10T10:39:22.707301+0000 mgr.y (mgr.24422) 958 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:24 vm07 bash[23367]: cluster 2026-03-10T10:39:22.707301+0000 mgr.y (mgr.24422) 958 : cluster [DBG] pgmap v1414: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:26.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:26 vm04 bash[28289]: cluster 2026-03-10T10:39:24.707905+0000 mgr.y (mgr.24422) 959 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:26.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:26 vm04 bash[28289]: cluster 2026-03-10T10:39:24.707905+0000 mgr.y (mgr.24422) 959 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:26 vm04 bash[20742]: cluster 2026-03-10T10:39:24.707905+0000 mgr.y (mgr.24422) 959 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:26 vm04 bash[20742]: cluster 2026-03-10T10:39:24.707905+0000 mgr.y (mgr.24422) 959 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:26 vm07 bash[23367]: cluster 2026-03-10T10:39:24.707905+0000 mgr.y (mgr.24422) 959 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:26 vm07 bash[23367]: cluster 2026-03-10T10:39:24.707905+0000 mgr.y (mgr.24422) 959 : cluster [DBG] pgmap v1415: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:28.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:28 vm04 bash[28289]: cluster 2026-03-10T10:39:26.708208+0000 mgr.y (mgr.24422) 960 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:28.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:28 vm04 bash[28289]: cluster 2026-03-10T10:39:26.708208+0000 mgr.y (mgr.24422) 960 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:28 vm04 bash[20742]: 
cluster 2026-03-10T10:39:26.708208+0000 mgr.y (mgr.24422) 960 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:28 vm04 bash[20742]: cluster 2026-03-10T10:39:26.708208+0000 mgr.y (mgr.24422) 960 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:28 vm07 bash[23367]: cluster 2026-03-10T10:39:26.708208+0000 mgr.y (mgr.24422) 960 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:28 vm07 bash[23367]: cluster 2026-03-10T10:39:26.708208+0000 mgr.y (mgr.24422) 960 : cluster [DBG] pgmap v1416: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:29.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:39:29 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:39:29.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:29 vm07 bash[23367]: audit 2026-03-10T10:39:28.450364+0000 mon.a (mon.0) 3710 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:29.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:29 vm07 bash[23367]: audit 2026-03-10T10:39:28.450364+0000 mon.a (mon.0) 3710 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:29.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:29 vm04 bash[28289]: audit 2026-03-10T10:39:28.450364+0000 mon.a (mon.0) 3710 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:29.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:29 vm04 bash[28289]: audit 2026-03-10T10:39:28.450364+0000 mon.a (mon.0) 3710 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:29 vm04 bash[20742]: audit 2026-03-10T10:39:28.450364+0000 mon.a (mon.0) 3710 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:29.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:29 vm04 bash[20742]: audit 2026-03-10T10:39:28.450364+0000 mon.a (mon.0) 3710 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:30.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:30 vm04 bash[28289]: cluster 2026-03-10T10:39:28.708811+0000 mgr.y (mgr.24422) 961 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:30.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:30 vm04 bash[28289]: cluster 2026-03-10T10:39:28.708811+0000 mgr.y (mgr.24422) 961 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:39:30.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:30 vm04 bash[28289]: audit 2026-03-10T10:39:29.357564+0000 mgr.y (mgr.24422) 962 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:30.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:30 vm04 bash[28289]: audit 2026-03-10T10:39:29.357564+0000 mgr.y (mgr.24422) 962 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:30 vm04 bash[20742]: cluster 2026-03-10T10:39:28.708811+0000 mgr.y (mgr.24422) 961 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:30 vm04 bash[20742]: cluster 2026-03-10T10:39:28.708811+0000 mgr.y (mgr.24422) 961 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:30 vm04 bash[20742]: audit 2026-03-10T10:39:29.357564+0000 mgr.y (mgr.24422) 962 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:30.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:30 vm04 bash[20742]: audit 2026-03-10T10:39:29.357564+0000 mgr.y (mgr.24422) 962 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:30 vm07 bash[23367]: cluster 2026-03-10T10:39:28.708811+0000 mgr.y (mgr.24422) 961 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:30 vm07 bash[23367]: cluster 2026-03-10T10:39:28.708811+0000 mgr.y (mgr.24422) 961 : cluster [DBG] pgmap v1417: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:30 vm07 bash[23367]: audit 2026-03-10T10:39:29.357564+0000 mgr.y (mgr.24422) 962 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:31.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:30 vm07 bash[23367]: audit 2026-03-10T10:39:29.357564+0000 mgr.y (mgr.24422) 962 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:31 vm04 bash[28289]: cluster 2026-03-10T10:39:30.709515+0000 mgr.y (mgr.24422) 963 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:31.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:31 vm04 bash[28289]: cluster 2026-03-10T10:39:30.709515+0000 mgr.y (mgr.24422) 963 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:31 vm04 bash[20742]: cluster 
2026-03-10T10:39:30.709515+0000 mgr.y (mgr.24422) 963 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:31.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:31 vm04 bash[20742]: cluster 2026-03-10T10:39:30.709515+0000 mgr.y (mgr.24422) 963 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:31 vm07 bash[23367]: cluster 2026-03-10T10:39:30.709515+0000 mgr.y (mgr.24422) 963 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:32.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:31 vm07 bash[23367]: cluster 2026-03-10T10:39:30.709515+0000 mgr.y (mgr.24422) 963 : cluster [DBG] pgmap v1418: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:39:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:39:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:39:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:33 vm04 bash[28289]: cluster 2026-03-10T10:39:32.709922+0000 mgr.y (mgr.24422) 964 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:33 vm04 bash[28289]: cluster 2026-03-10T10:39:32.709922+0000 mgr.y (mgr.24422) 964 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:33 vm04 bash[20742]: cluster 2026-03-10T10:39:32.709922+0000 mgr.y (mgr.24422) 964 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:33 vm04 bash[20742]: cluster 2026-03-10T10:39:32.709922+0000 mgr.y (mgr.24422) 964 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:33 vm07 bash[23367]: cluster 2026-03-10T10:39:32.709922+0000 mgr.y (mgr.24422) 964 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:33 vm07 bash[23367]: cluster 2026-03-10T10:39:32.709922+0000 mgr.y (mgr.24422) 964 : cluster [DBG] pgmap v1419: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:36.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:35 vm04 bash[28289]: cluster 2026-03-10T10:39:34.710778+0000 mgr.y (mgr.24422) 965 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:35 vm04 bash[28289]: cluster 2026-03-10T10:39:34.710778+0000 mgr.y (mgr.24422) 965 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T10:39:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:35 vm04 bash[20742]: cluster 2026-03-10T10:39:34.710778+0000 mgr.y (mgr.24422) 965 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:35 vm04 bash[20742]: cluster 2026-03-10T10:39:34.710778+0000 mgr.y (mgr.24422) 965 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:35 vm07 bash[23367]: cluster 2026-03-10T10:39:34.710778+0000 mgr.y (mgr.24422) 965 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:35 vm07 bash[23367]: cluster 2026-03-10T10:39:34.710778+0000 mgr.y (mgr.24422) 965 : cluster [DBG] pgmap v1420: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:38.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:38 vm04 bash[28289]: cluster 2026-03-10T10:39:36.711158+0000 mgr.y (mgr.24422) 966 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:38 vm04 bash[28289]: cluster 2026-03-10T10:39:36.711158+0000 mgr.y (mgr.24422) 966 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:38.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:38 vm04 bash[20742]: cluster 2026-03-10T10:39:36.711158+0000 mgr.y (mgr.24422) 966 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:38.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:38 vm04 bash[20742]: cluster 2026-03-10T10:39:36.711158+0000 mgr.y (mgr.24422) 966 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:38.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:38 vm07 bash[23367]: cluster 2026-03-10T10:39:36.711158+0000 mgr.y (mgr.24422) 966 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:38.527 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:38 vm07 bash[23367]: cluster 2026-03-10T10:39:36.711158+0000 mgr.y (mgr.24422) 966 : cluster [DBG] pgmap v1421: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:39.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:39:39 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:39:40.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:40 vm04 bash[28289]: cluster 2026-03-10T10:39:38.711650+0000 mgr.y (mgr.24422) 967 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:40 vm04 bash[28289]: cluster 2026-03-10T10:39:38.711650+0000 mgr.y (mgr.24422) 967 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB 
used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:40 vm04 bash[28289]: audit 2026-03-10T10:39:39.368311+0000 mgr.y (mgr.24422) 968 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:40 vm04 bash[28289]: audit 2026-03-10T10:39:39.368311+0000 mgr.y (mgr.24422) 968 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:40 vm04 bash[20742]: cluster 2026-03-10T10:39:38.711650+0000 mgr.y (mgr.24422) 967 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:40 vm04 bash[20742]: cluster 2026-03-10T10:39:38.711650+0000 mgr.y (mgr.24422) 967 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:40 vm04 bash[20742]: audit 2026-03-10T10:39:39.368311+0000 mgr.y (mgr.24422) 968 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:40 vm04 bash[20742]: audit 2026-03-10T10:39:39.368311+0000 mgr.y (mgr.24422) 968 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:40 vm07 bash[23367]: cluster 2026-03-10T10:39:38.711650+0000 mgr.y (mgr.24422) 967 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:40 vm07 bash[23367]: cluster 2026-03-10T10:39:38.711650+0000 mgr.y (mgr.24422) 967 : cluster [DBG] pgmap v1422: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:40 vm07 bash[23367]: audit 2026-03-10T10:39:39.368311+0000 mgr.y (mgr.24422) 968 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:40 vm07 bash[23367]: audit 2026-03-10T10:39:39.368311+0000 mgr.y (mgr.24422) 968 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:39:42.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:42 vm04 bash[28289]: cluster 2026-03-10T10:39:40.712171+0000 mgr.y (mgr.24422) 969 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:42.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:42 vm04 bash[28289]: cluster 2026-03-10T10:39:40.712171+0000 mgr.y (mgr.24422) 969 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:42.453 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:42 vm04 bash[20742]: cluster 2026-03-10T10:39:40.712171+0000 mgr.y (mgr.24422) 969 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:42.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:42 vm04 bash[20742]: cluster 2026-03-10T10:39:40.712171+0000 mgr.y (mgr.24422) 969 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:42.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:42 vm07 bash[23367]: cluster 2026-03-10T10:39:40.712171+0000 mgr.y (mgr.24422) 969 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:42.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:42 vm07 bash[23367]: cluster 2026-03-10T10:39:40.712171+0000 mgr.y (mgr.24422) 969 : cluster [DBG] pgmap v1423: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:39:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:39:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:39:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: cluster 2026-03-10T10:39:42.712467+0000 mgr.y (mgr.24422) 970 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: cluster 2026-03-10T10:39:42.712467+0000 mgr.y (mgr.24422) 970 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.178008+0000 mon.a (mon.0) 3711 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.178008+0000 mon.a (mon.0) 3711 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.456898+0000 mon.a (mon.0) 3712 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.456898+0000 mon.a (mon.0) 3712 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.488123+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.488123+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.517 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.492296+0000 mon.a (mon.0) 3714 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.492296+0000 mon.a (mon.0) 3714 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.496972+0000 mon.a (mon.0) 3715 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.496972+0000 mon.a (mon.0) 3715 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.503152+0000 mon.a (mon.0) 3716 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.503152+0000 mon.a (mon.0) 3716 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.847639+0000 mon.a (mon.0) 3717 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:39:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.847639+0000 mon.a (mon.0) 3717 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:39:44.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.848262+0000 mon.a (mon.0) 3718 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:39:44.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.848262+0000 mon.a (mon.0) 3718 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:39:44.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.853478+0000 mon.a (mon.0) 3719 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.518 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:44 vm07 bash[23367]: audit 2026-03-10T10:39:43.853478+0000 mon.a (mon.0) 3719 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: cluster 2026-03-10T10:39:42.712467+0000 mgr.y (mgr.24422) 970 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: cluster 2026-03-10T10:39:42.712467+0000 mgr.y (mgr.24422) 970 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.178008+0000 mon.a (mon.0) 3711 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.178008+0000 mon.a (mon.0) 3711 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.456898+0000 mon.a (mon.0) 3712 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.456898+0000 mon.a (mon.0) 3712 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.488123+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.488123+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.492296+0000 mon.a (mon.0) 3714 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.492296+0000 mon.a (mon.0) 3714 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.496972+0000 mon.a (mon.0) 3715 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.496972+0000 mon.a (mon.0) 3715 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.503152+0000 mon.a (mon.0) 3716 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.503152+0000 mon.a (mon.0) 3716 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.847639+0000 mon.a (mon.0) 3717 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.847639+0000 mon.a (mon.0) 3717 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.848262+0000 mon.a (mon.0) 3718 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:44 vm04 bash[28289]: audit 2026-03-10T10:39:43.853478+0000 mon.a (mon.0) 3719 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:44 vm04 bash[20742]: cluster 2026-03-10T10:39:42.712467+0000 mgr.y (mgr.24422) 970 : cluster [DBG] pgmap v1424: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:44 vm04 bash[20742]: audit 2026-03-10T10:39:43.178008+0000 mon.a (mon.0) 3711 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:44 vm04 bash[20742]: audit 2026-03-10T10:39:43.456898+0000 mon.a (mon.0) 3712 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:44 vm04 bash[20742]: audit 2026-03-10T10:39:43.488123+0000 mon.a (mon.0) 3713 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:44 vm04 bash[20742]: audit 2026-03-10T10:39:43.492296+0000 mon.a (mon.0) 3714 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:44 vm04 bash[20742]: audit 2026-03-10T10:39:43.496972+0000 mon.a (mon.0) 3715 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:44 vm04 bash[20742]: audit 2026-03-10T10:39:43.503152+0000 mon.a (mon.0) 3716 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:44 vm04 bash[20742]: audit 2026-03-10T10:39:43.847639+0000 mon.a (mon.0) 3717 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:44 vm04 bash[20742]: audit 2026-03-10T10:39:43.848262+0000 mon.a (mon.0) 3718 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:39:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:44 vm04 bash[20742]: audit 2026-03-10T10:39:43.853478+0000 mon.a (mon.0) 3719 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:39:46.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:46 vm07 bash[23367]: cluster 2026-03-10T10:39:44.713083+0000 mgr.y (mgr.24422) 971 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:46.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:46 vm04 bash[28289]: cluster 2026-03-10T10:39:44.713083+0000 mgr.y (mgr.24422) 971 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:46 vm04 bash[20742]: cluster 2026-03-10T10:39:44.713083+0000 mgr.y (mgr.24422) 971 : cluster [DBG] pgmap v1425: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:48.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:48 vm07 bash[23367]: cluster 2026-03-10T10:39:46.713368+0000 mgr.y (mgr.24422) 972 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:48.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:48 vm04 bash[28289]: cluster 2026-03-10T10:39:46.713368+0000 mgr.y (mgr.24422) 972 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:48 vm04 bash[20742]: cluster 2026-03-10T10:39:46.713368+0000 mgr.y (mgr.24422) 972 : cluster [DBG] pgmap v1426: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:49.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:39:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:39:50.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:50 vm07 bash[23367]: cluster 2026-03-10T10:39:48.713799+0000 mgr.y (mgr.24422) 973 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:50.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:50 vm07 bash[23367]: audit 2026-03-10T10:39:49.379000+0000 mgr.y (mgr.24422) 974 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:39:50.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:50 vm04 bash[28289]: cluster 2026-03-10T10:39:48.713799+0000 mgr.y (mgr.24422) 973 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:50.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:50 vm04 bash[28289]: audit 2026-03-10T10:39:49.379000+0000 mgr.y (mgr.24422) 974 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:39:50.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:50 vm04 bash[20742]: cluster 2026-03-10T10:39:48.713799+0000 mgr.y (mgr.24422) 973 : cluster [DBG] pgmap v1427: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:50.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:50 vm04 bash[20742]: audit 2026-03-10T10:39:49.379000+0000 mgr.y (mgr.24422) 974 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:39:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:52 vm07 bash[23367]: cluster 2026-03-10T10:39:50.714273+0000 mgr.y (mgr.24422) 975 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:52.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:52 vm04 bash[28289]: cluster 2026-03-10T10:39:50.714273+0000 mgr.y (mgr.24422) 975 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:52.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:52 vm04 bash[20742]: cluster 2026-03-10T10:39:50.714273+0000 mgr.y (mgr.24422) 975 : cluster [DBG] pgmap v1428: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:53.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:39:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:39:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:39:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:54 vm07 bash[23367]: cluster 2026-03-10T10:39:52.714535+0000 mgr.y (mgr.24422) 976 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:54.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:54 vm04 bash[28289]: cluster 2026-03-10T10:39:52.714535+0000 mgr.y (mgr.24422) 976 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:54 vm04 bash[20742]: cluster 2026-03-10T10:39:52.714535+0000 mgr.y (mgr.24422) 976 : cluster [DBG] pgmap v1429: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:56 vm07 bash[23367]: cluster 2026-03-10T10:39:54.715094+0000 mgr.y (mgr.24422) 977 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:56.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:56 vm04 bash[28289]: cluster 2026-03-10T10:39:54.715094+0000 mgr.y (mgr.24422) 977 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:56.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:56 vm04 bash[20742]: cluster 2026-03-10T10:39:54.715094+0000 mgr.y (mgr.24422) 977 : cluster [DBG] pgmap v1430: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:39:58.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:58 vm04 bash[28289]: cluster 2026-03-10T10:39:56.715399+0000 mgr.y (mgr.24422) 978 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:58 vm04 bash[20742]: cluster 2026-03-10T10:39:56.715399+0000 mgr.y (mgr.24422) 978 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:58 vm07 bash[23367]: cluster 2026-03-10T10:39:56.715399+0000 mgr.y (mgr.24422) 978 : cluster [DBG] pgmap v1431: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:39:59.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:39:59 vm04 bash[28289]: audit 2026-03-10T10:39:58.464323+0000 mon.a (mon.0) 3720 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:39:59.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:39:59 vm04 bash[20742]: audit 2026-03-10T10:39:58.464323+0000 mon.a (mon.0) 3720 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:39:59.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:39:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:39:59.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:39:59 vm07 bash[23367]: audit 2026-03-10T10:39:58.464323+0000 mon.a (mon.0) 3720 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:40:00.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:00 vm04 bash[28289]: cluster 2026-03-10T10:39:58.715880+0000 mgr.y (mgr.24422) 979 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:00 vm04 bash[28289]: audit 2026-03-10T10:39:59.388873+0000 mgr.y (mgr.24422) 980 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:00 vm04 bash[28289]: cluster 2026-03-10T10:40:00.000124+0000 mon.a (mon.0) 3721 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:00 vm04 bash[28289]: cluster 2026-03-10T10:40:00.000154+0000 mon.a (mon.0) 3722 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:00 vm04 bash[28289]: cluster 2026-03-10T10:40:00.000166+0000 mon.a (mon.0) 3723 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:00 vm04 bash[28289]: cluster 2026-03-10T10:40:00.000175+0000 mon.a (mon.0) 3724 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1'
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:00 vm04 bash[28289]: cluster 2026-03-10T10:40:00.000183+0000 mon.a (mon.0) 3725 : cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1'
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:00 vm04 bash[28289]: cluster 2026-03-10T10:40:00.000193+0000 mon.a (mon.0) 3726 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:00 vm04 bash[20742]: cluster 2026-03-10T10:39:58.715880+0000 mgr.y (mgr.24422) 979 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:00 vm04 bash[20742]: audit 2026-03-10T10:39:59.388873+0000 mgr.y (mgr.24422) 980 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:00 vm04 bash[20742]: cluster 2026-03-10T10:40:00.000124+0000 mon.a (mon.0) 3721 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:00 vm04 bash[20742]: cluster 2026-03-10T10:40:00.000154+0000 mon.a (mon.0) 3722 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:00 vm04 bash[20742]: cluster 2026-03-10T10:40:00.000166+0000 mon.a (mon.0) 3723 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:00 vm04 bash[20742]: cluster 2026-03-10T10:40:00.000175+0000 mon.a (mon.0) 3724 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1'
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:00 vm04 bash[20742]: cluster 2026-03-10T10:40:00.000183+0000 mon.a (mon.0) 3725 : cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1'
2026-03-10T10:40:00.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:00 vm04 bash[20742]: cluster 2026-03-10T10:40:00.000193+0000 mon.a (mon.0) 3726 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-10T10:40:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:00 vm07 bash[23367]: cluster 2026-03-10T10:39:58.715880+0000 mgr.y (mgr.24422) 979 : cluster [DBG] pgmap v1432: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:00 vm07 bash[23367]: audit 2026-03-10T10:39:59.388873+0000 mgr.y (mgr.24422) 980 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:00 vm07 bash[23367]: cluster 2026-03-10T10:40:00.000124+0000 mon.a (mon.0) 3721 : cluster [WRN] Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
2026-03-10T10:40:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:00 vm07 bash[23367]: cluster 2026-03-10T10:40:00.000154+0000 mon.a (mon.0) 3722 : cluster [WRN] [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
2026-03-10T10:40:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:00 vm07 bash[23367]: cluster 2026-03-10T10:40:00.000166+0000 mon.a (mon.0) 3723 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'
2026-03-10T10:40:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:00 vm07 bash[23367]: cluster 2026-03-10T10:40:00.000175+0000 mon.a (mon.0) 3724 : cluster [WRN] application not enabled on pool 'WatchNotifyvm04-60261-1'
2026-03-10T10:40:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:00 vm07 bash[23367]: cluster 2026-03-10T10:40:00.000183+0000 mon.a (mon.0) 3725 : cluster [WRN] application not enabled on pool 'AssertExistsvm04-60281-1'
2026-03-10T10:40:00.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:00 vm07 bash[23367]: cluster 2026-03-10T10:40:00.000193+0000 mon.a (mon.0) 3726 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
2026-03-10T10:40:02.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:02 vm04 bash[28289]: cluster 2026-03-10T10:40:00.716427+0000 mgr.y (mgr.24422) 981 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:02.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:02 vm04 bash[20742]: cluster 2026-03-10T10:40:00.716427+0000 mgr.y (mgr.24422) 981 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:02.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:02 vm07 bash[23367]: cluster 2026-03-10T10:40:00.716427+0000 mgr.y (mgr.24422) 981 : cluster [DBG] pgmap v1433: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:03.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:40:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:40:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:40:04.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:04 vm04 bash[28289]: cluster 2026-03-10T10:40:02.716695+0000 mgr.y (mgr.24422) 982 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:04.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:04 vm04 bash[20742]: cluster 2026-03-10T10:40:02.716695+0000 mgr.y (mgr.24422) 982 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:04.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:04 vm07 bash[23367]: cluster 2026-03-10T10:40:02.716695+0000 mgr.y (mgr.24422) 982 : cluster [DBG] pgmap v1434: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:06.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:06 vm04 bash[28289]: cluster 2026-03-10T10:40:04.717247+0000 mgr.y (mgr.24422) 983 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:06 vm04 bash[20742]: cluster 2026-03-10T10:40:04.717247+0000 mgr.y (mgr.24422) 983 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:06.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:06 vm07 bash[23367]: cluster 2026-03-10T10:40:04.717247+0000 mgr.y (mgr.24422) 983 : cluster [DBG] pgmap v1435: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:08.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:08 vm04 bash[28289]: cluster 2026-03-10T10:40:06.717493+0000 mgr.y (mgr.24422) 984 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:08.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:08 vm04 bash[20742]: cluster 2026-03-10T10:40:06.717493+0000 mgr.y (mgr.24422) 984 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:08.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:08 vm07 bash[23367]: cluster 2026-03-10T10:40:06.717493+0000 mgr.y (mgr.24422) 984 : cluster [DBG] pgmap v1436: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:09.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:40:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:40:10.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:10 vm04 bash[28289]: cluster 2026-03-10T10:40:08.718036+0000 mgr.y (mgr.24422) 985 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:10.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:10 vm04 bash[28289]: audit 2026-03-10T10:40:09.393050+0000 mgr.y (mgr.24422) 986 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:10.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:10 vm04 bash[20742]: cluster 2026-03-10T10:40:08.718036+0000 mgr.y (mgr.24422) 985 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:10 vm04 bash[20742]: audit 2026-03-10T10:40:09.393050+0000 mgr.y (mgr.24422) 986 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:10 vm07 bash[23367]: cluster 2026-03-10T10:40:08.718036+0000 mgr.y (mgr.24422) 985 : cluster [DBG] pgmap v1437: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:10 vm07 bash[23367]: audit 2026-03-10T10:40:09.393050+0000 mgr.y (mgr.24422) 986 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:12.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:12 vm04 bash[28289]: cluster 2026-03-10T10:40:10.718568+0000 mgr.y (mgr.24422) 987 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:12.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:12 vm04 bash[20742]: cluster 2026-03-10T10:40:10.718568+0000 mgr.y (mgr.24422) 987 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:12.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:12 vm07 bash[23367]: cluster 2026-03-10T10:40:10.718568+0000 mgr.y (mgr.24422) 987 : cluster [DBG] pgmap v1438: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:40:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:40:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:40:13.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:13 vm04 bash[28289]: cluster 2026-03-10T10:40:12.718908+0000 mgr.y (mgr.24422) 988 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:13.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:13 vm04 bash[28289]: audit 2026-03-10T10:40:13.469784+0000 mon.a (mon.0) 3727 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:40:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:13 vm04 bash[20742]: cluster 2026-03-10T10:40:12.718908+0000 mgr.y (mgr.24422) 988 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:13.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:13 vm04 bash[20742]: audit 2026-03-10T10:40:13.469784+0000 mon.a (mon.0) 3727 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:40:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:13 vm07 bash[23367]: cluster 2026-03-10T10:40:12.718908+0000 mgr.y (mgr.24422) 988 : cluster [DBG] pgmap v1439: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:13 vm07 bash[23367]: audit 2026-03-10T10:40:13.469784+0000 mon.a (mon.0) 3727 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:40:16.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:15 vm04 bash[28289]: cluster 2026-03-10T10:40:14.719508+0000 mgr.y (mgr.24422) 989 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:15 vm04 bash[20742]: cluster 2026-03-10T10:40:14.719508+0000 mgr.y (mgr.24422) 989 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:15 vm07 bash[23367]: cluster 2026-03-10T10:40:14.719508+0000 mgr.y (mgr.24422) 989 : cluster [DBG] pgmap v1440: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:18.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:17 vm04 bash[28289]: cluster 2026-03-10T10:40:16.719842+0000 mgr.y (mgr.24422) 990 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:17 vm04 bash[20742]: cluster 2026-03-10T10:40:16.719842+0000 mgr.y (mgr.24422) 990 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:17 vm07 bash[23367]: cluster 2026-03-10T10:40:16.719842+0000 mgr.y (mgr.24422) 990 : cluster [DBG] pgmap v1441: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:19.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:40:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:40:20.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:19 vm04 bash[28289]: cluster 2026-03-10T10:40:18.720461+0000 mgr.y (mgr.24422) 991 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:20.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:19 vm04 bash[28289]: audit 2026-03-10T10:40:19.401532+0000 mgr.y (mgr.24422) 992 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:20.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:19 vm04 bash[20742]: cluster 2026-03-10T10:40:18.720461+0000 mgr.y (mgr.24422) 991 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:20.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:19 vm04 bash[20742]: audit 2026-03-10T10:40:19.401532+0000 mgr.y (mgr.24422) 992 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:20.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:19 vm07 bash[23367]: cluster 2026-03-10T10:40:18.720461+0000 mgr.y (mgr.24422) 991 : cluster [DBG] pgmap v1442: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:20.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:19 vm07 bash[23367]: audit 2026-03-10T10:40:19.401532+0000 mgr.y (mgr.24422) 992 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:22.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:21 vm04 bash[28289]: cluster 2026-03-10T10:40:20.720959+0000 mgr.y (mgr.24422) 993 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:22.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:21 vm04 bash[20742]: cluster 2026-03-10T10:40:20.720959+0000 mgr.y (mgr.24422) 993 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:22.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:21 vm07 bash[23367]: cluster 2026-03-10T10:40:20.720959+0000 mgr.y (mgr.24422) 993 : cluster [DBG] pgmap v1443: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:23.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:40:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:40:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:40:24.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:23 vm04 bash[28289]: cluster 2026-03-10T10:40:22.721303+0000 mgr.y (mgr.24422) 994 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:23 vm04 bash[20742]: cluster 2026-03-10T10:40:22.721303+0000 mgr.y (mgr.24422) 994 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:23 vm07 bash[23367]: cluster 2026-03-10T10:40:22.721303+0000 mgr.y (mgr.24422) 994 : cluster [DBG] pgmap v1444: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:26.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:25 vm04 bash[28289]: cluster 2026-03-10T10:40:24.722032+0000 mgr.y (mgr.24422) 995 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:26.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:25 vm04 bash[20742]: cluster 2026-03-10T10:40:24.722032+0000 mgr.y (mgr.24422) 995 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:25 vm07 bash[23367]: cluster 2026-03-10T10:40:24.722032+0000 mgr.y (mgr.24422) 995 : cluster [DBG] pgmap v1445: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:28.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:27 vm04 bash[28289]: cluster 2026-03-10T10:40:26.722340+0000 mgr.y (mgr.24422) 996 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:27 vm04 bash[20742]: cluster 2026-03-10T10:40:26.722340+0000 mgr.y (mgr.24422) 996 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:28.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:27 vm07 bash[23367]: cluster 2026-03-10T10:40:26.722340+0000 mgr.y (mgr.24422) 996 : cluster [DBG] pgmap v1446: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:29.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:28 vm04 bash[28289]: audit 2026-03-10T10:40:28.476788+0000 mon.a (mon.0) 3728 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:40:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:28 vm04 bash[20742]: audit 2026-03-10T10:40:28.476788+0000 mon.a (mon.0) 3728 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:40:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:28 vm07 bash[23367]: audit 2026-03-10T10:40:28.476788+0000 mon.a (mon.0) 3728 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:40:29.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:40:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:40:30.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:29 vm04 bash[28289]: cluster 2026-03-10T10:40:28.722849+0000 mgr.y (mgr.24422) 997 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:29 vm04 bash[28289]: audit 2026-03-10T10:40:29.412230+0000 mgr.y (mgr.24422) 998 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:29 vm04 bash[20742]: cluster 2026-03-10T10:40:28.722849+0000 mgr.y (mgr.24422) 997 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:29 vm04 bash[20742]: audit 2026-03-10T10:40:29.412230+0000 mgr.y (mgr.24422) 998 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:29 vm07 bash[23367]: cluster 2026-03-10T10:40:28.722849+0000 mgr.y (mgr.24422) 997 : cluster [DBG] pgmap v1447: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:29 vm07 bash[23367]: audit 2026-03-10T10:40:29.412230+0000 mgr.y (mgr.24422) 998 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:40:32.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:31 vm04 bash[28289]: cluster 2026-03-10T10:40:30.723388+0000 mgr.y (mgr.24422) 999 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:31 vm04 bash[20742]: cluster 2026-03-10T10:40:30.723388+0000 mgr.y (mgr.24422) 999 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:31 vm07 bash[23367]: cluster 2026-03-10T10:40:30.723388+0000 mgr.y (mgr.24422) 999 : cluster [DBG] pgmap v1448: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:40:33.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:40:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:40:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:40:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:33 vm04 bash[28289]: cluster 2026-03-10T10:40:32.723730+0000 mgr.y (mgr.24422) 1000 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:33 vm04 bash[20742]: cluster 2026-03-10T10:40:32.723730+0000 mgr.y (mgr.24422) 1000 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:40:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:33 vm04
bash[20742]: cluster 2026-03-10T10:40:32.723730+0000 mgr.y (mgr.24422) 1000 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:33 vm07 bash[23367]: cluster 2026-03-10T10:40:32.723730+0000 mgr.y (mgr.24422) 1000 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:33 vm07 bash[23367]: cluster 2026-03-10T10:40:32.723730+0000 mgr.y (mgr.24422) 1000 : cluster [DBG] pgmap v1449: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:36.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:35 vm04 bash[28289]: cluster 2026-03-10T10:40:34.724488+0000 mgr.y (mgr.24422) 1001 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:35 vm04 bash[28289]: cluster 2026-03-10T10:40:34.724488+0000 mgr.y (mgr.24422) 1001 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:35 vm04 bash[20742]: cluster 2026-03-10T10:40:34.724488+0000 mgr.y (mgr.24422) 1001 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:35 vm04 bash[20742]: cluster 2026-03-10T10:40:34.724488+0000 mgr.y (mgr.24422) 1001 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:35 vm07 bash[23367]: cluster 2026-03-10T10:40:34.724488+0000 mgr.y (mgr.24422) 1001 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:35 vm07 bash[23367]: cluster 2026-03-10T10:40:34.724488+0000 mgr.y (mgr.24422) 1001 : cluster [DBG] pgmap v1450: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:37 vm07 bash[23367]: cluster 2026-03-10T10:40:36.724799+0000 mgr.y (mgr.24422) 1002 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:37 vm07 bash[23367]: cluster 2026-03-10T10:40:36.724799+0000 mgr.y (mgr.24422) 1002 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:37 vm04 bash[28289]: cluster 2026-03-10T10:40:36.724799+0000 mgr.y (mgr.24422) 1002 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:37 vm04 bash[28289]: cluster 2026-03-10T10:40:36.724799+0000 mgr.y (mgr.24422) 1002 : cluster 
[DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:38.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:37 vm04 bash[20742]: cluster 2026-03-10T10:40:36.724799+0000 mgr.y (mgr.24422) 1002 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:38.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:37 vm04 bash[20742]: cluster 2026-03-10T10:40:36.724799+0000 mgr.y (mgr.24422) 1002 : cluster [DBG] pgmap v1451: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:39.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:40:39 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:40:40.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:40 vm04 bash[28289]: cluster 2026-03-10T10:40:38.725323+0000 mgr.y (mgr.24422) 1003 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:40 vm04 bash[28289]: cluster 2026-03-10T10:40:38.725323+0000 mgr.y (mgr.24422) 1003 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:40 vm04 bash[28289]: audit 2026-03-10T10:40:39.418936+0000 mgr.y (mgr.24422) 1004 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:40.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:40 vm04 bash[28289]: audit 2026-03-10T10:40:39.418936+0000 mgr.y (mgr.24422) 1004 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:40 vm04 bash[20742]: cluster 2026-03-10T10:40:38.725323+0000 mgr.y (mgr.24422) 1003 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:40 vm04 bash[20742]: cluster 2026-03-10T10:40:38.725323+0000 mgr.y (mgr.24422) 1003 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:40 vm04 bash[20742]: audit 2026-03-10T10:40:39.418936+0000 mgr.y (mgr.24422) 1004 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:40.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:40 vm04 bash[20742]: audit 2026-03-10T10:40:39.418936+0000 mgr.y (mgr.24422) 1004 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:40 vm07 bash[23367]: cluster 2026-03-10T10:40:38.725323+0000 mgr.y (mgr.24422) 1003 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:40 vm07 bash[23367]: cluster 
2026-03-10T10:40:38.725323+0000 mgr.y (mgr.24422) 1003 : cluster [DBG] pgmap v1452: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:40 vm07 bash[23367]: audit 2026-03-10T10:40:39.418936+0000 mgr.y (mgr.24422) 1004 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:40 vm07 bash[23367]: audit 2026-03-10T10:40:39.418936+0000 mgr.y (mgr.24422) 1004 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:42 vm04 bash[28289]: cluster 2026-03-10T10:40:40.725809+0000 mgr.y (mgr.24422) 1005 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:42 vm04 bash[28289]: cluster 2026-03-10T10:40:40.725809+0000 mgr.y (mgr.24422) 1005 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:42 vm04 bash[20742]: cluster 2026-03-10T10:40:40.725809+0000 mgr.y (mgr.24422) 1005 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:42 vm04 bash[20742]: cluster 2026-03-10T10:40:40.725809+0000 mgr.y (mgr.24422) 1005 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:42 vm07 bash[23367]: cluster 2026-03-10T10:40:40.725809+0000 mgr.y (mgr.24422) 1005 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:42 vm07 bash[23367]: cluster 2026-03-10T10:40:40.725809+0000 mgr.y (mgr.24422) 1005 : cluster [DBG] pgmap v1453: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:40:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:40:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:40:44.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: cluster 2026-03-10T10:40:42.726117+0000 mgr.y (mgr.24422) 1006 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:44.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: cluster 2026-03-10T10:40:42.726117+0000 mgr.y (mgr.24422) 1006 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: audit 2026-03-10T10:40:43.483149+0000 mon.a (mon.0) 3729 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: audit 2026-03-10T10:40:43.483149+0000 mon.a (mon.0) 3729 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: audit 2026-03-10T10:40:43.895213+0000 mon.a (mon.0) 3730 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: audit 2026-03-10T10:40:43.895213+0000 mon.a (mon.0) 3730 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: audit 2026-03-10T10:40:44.212640+0000 mon.a (mon.0) 3731 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: audit 2026-03-10T10:40:44.212640+0000 mon.a (mon.0) 3731 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: audit 2026-03-10T10:40:44.213205+0000 mon.a (mon.0) 3732 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: audit 2026-03-10T10:40:44.213205+0000 mon.a (mon.0) 3732 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: audit 2026-03-10T10:40:44.218355+0000 mon.a (mon.0) 3733 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:44 vm04 bash[28289]: audit 2026-03-10T10:40:44.218355+0000 mon.a (mon.0) 3733 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: cluster 2026-03-10T10:40:42.726117+0000 mgr.y (mgr.24422) 1006 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: cluster 2026-03-10T10:40:42.726117+0000 mgr.y (mgr.24422) 1006 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: audit 2026-03-10T10:40:43.483149+0000 mon.a (mon.0) 3729 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: audit 2026-03-10T10:40:43.483149+0000 mon.a (mon.0) 3729 : audit [DBG] 
from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: audit 2026-03-10T10:40:43.895213+0000 mon.a (mon.0) 3730 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: audit 2026-03-10T10:40:43.895213+0000 mon.a (mon.0) 3730 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: audit 2026-03-10T10:40:44.212640+0000 mon.a (mon.0) 3731 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: audit 2026-03-10T10:40:44.212640+0000 mon.a (mon.0) 3731 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: audit 2026-03-10T10:40:44.213205+0000 mon.a (mon.0) 3732 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: audit 2026-03-10T10:40:44.213205+0000 mon.a (mon.0) 3732 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: audit 2026-03-10T10:40:44.218355+0000 mon.a (mon.0) 3733 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:40:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:44 vm04 bash[20742]: audit 2026-03-10T10:40:44.218355+0000 mon.a (mon.0) 3733 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: cluster 2026-03-10T10:40:42.726117+0000 mgr.y (mgr.24422) 1006 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: cluster 2026-03-10T10:40:42.726117+0000 mgr.y (mgr.24422) 1006 : cluster [DBG] pgmap v1454: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: audit 2026-03-10T10:40:43.483149+0000 mon.a (mon.0) 3729 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: audit 2026-03-10T10:40:43.483149+0000 mon.a (mon.0) 3729 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
10:40:44 vm07 bash[23367]: audit 2026-03-10T10:40:43.895213+0000 mon.a (mon.0) 3730 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: audit 2026-03-10T10:40:43.895213+0000 mon.a (mon.0) 3730 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: audit 2026-03-10T10:40:44.212640+0000 mon.a (mon.0) 3731 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: audit 2026-03-10T10:40:44.212640+0000 mon.a (mon.0) 3731 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: audit 2026-03-10T10:40:44.213205+0000 mon.a (mon.0) 3732 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: audit 2026-03-10T10:40:44.213205+0000 mon.a (mon.0) 3732 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: audit 2026-03-10T10:40:44.218355+0000 mon.a (mon.0) 3733 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:40:45.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:44 vm07 bash[23367]: audit 2026-03-10T10:40:44.218355+0000 mon.a (mon.0) 3733 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:40:46.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:46 vm04 bash[28289]: cluster 2026-03-10T10:40:44.726753+0000 mgr.y (mgr.24422) 1007 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:46.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:46 vm04 bash[28289]: cluster 2026-03-10T10:40:44.726753+0000 mgr.y (mgr.24422) 1007 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:46 vm04 bash[20742]: cluster 2026-03-10T10:40:44.726753+0000 mgr.y (mgr.24422) 1007 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:46 vm04 bash[20742]: cluster 2026-03-10T10:40:44.726753+0000 mgr.y (mgr.24422) 1007 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:46.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:46 vm07 bash[23367]: cluster 2026-03-10T10:40:44.726753+0000 mgr.y (mgr.24422) 1007 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-10T10:40:46.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:46 vm07 bash[23367]: cluster 2026-03-10T10:40:44.726753+0000 mgr.y (mgr.24422) 1007 : cluster [DBG] pgmap v1455: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:48.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:48 vm04 bash[28289]: cluster 2026-03-10T10:40:46.727018+0000 mgr.y (mgr.24422) 1008 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:48.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:48 vm04 bash[28289]: cluster 2026-03-10T10:40:46.727018+0000 mgr.y (mgr.24422) 1008 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:48 vm04 bash[20742]: cluster 2026-03-10T10:40:46.727018+0000 mgr.y (mgr.24422) 1008 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:48 vm04 bash[20742]: cluster 2026-03-10T10:40:46.727018+0000 mgr.y (mgr.24422) 1008 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:48.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:48 vm07 bash[23367]: cluster 2026-03-10T10:40:46.727018+0000 mgr.y (mgr.24422) 1008 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:48.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:48 vm07 bash[23367]: cluster 2026-03-10T10:40:46.727018+0000 mgr.y (mgr.24422) 1008 : cluster [DBG] pgmap v1456: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:49.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:40:49 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:40:50.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:50 vm04 bash[28289]: cluster 2026-03-10T10:40:48.727520+0000 mgr.y (mgr.24422) 1009 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:50 vm04 bash[28289]: cluster 2026-03-10T10:40:48.727520+0000 mgr.y (mgr.24422) 1009 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:50 vm04 bash[28289]: audit 2026-03-10T10:40:49.421490+0000 mgr.y (mgr.24422) 1010 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:50 vm04 bash[28289]: audit 2026-03-10T10:40:49.421490+0000 mgr.y (mgr.24422) 1010 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:50 vm04 bash[20742]: cluster 2026-03-10T10:40:48.727520+0000 mgr.y (mgr.24422) 1009 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 
KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:50 vm04 bash[20742]: cluster 2026-03-10T10:40:48.727520+0000 mgr.y (mgr.24422) 1009 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:50 vm04 bash[20742]: audit 2026-03-10T10:40:49.421490+0000 mgr.y (mgr.24422) 1010 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:50 vm04 bash[20742]: audit 2026-03-10T10:40:49.421490+0000 mgr.y (mgr.24422) 1010 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:50.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:50 vm07 bash[23367]: cluster 2026-03-10T10:40:48.727520+0000 mgr.y (mgr.24422) 1009 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:50.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:50 vm07 bash[23367]: cluster 2026-03-10T10:40:48.727520+0000 mgr.y (mgr.24422) 1009 : cluster [DBG] pgmap v1457: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:50.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:50 vm07 bash[23367]: audit 2026-03-10T10:40:49.421490+0000 mgr.y (mgr.24422) 1010 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:50.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:50 vm07 bash[23367]: audit 2026-03-10T10:40:49.421490+0000 mgr.y (mgr.24422) 1010 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:40:52.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:52 vm04 bash[28289]: cluster 2026-03-10T10:40:50.728076+0000 mgr.y (mgr.24422) 1011 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:52.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:52 vm04 bash[28289]: cluster 2026-03-10T10:40:50.728076+0000 mgr.y (mgr.24422) 1011 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:52.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:52 vm04 bash[20742]: cluster 2026-03-10T10:40:50.728076+0000 mgr.y (mgr.24422) 1011 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:52.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:52 vm04 bash[20742]: cluster 2026-03-10T10:40:50.728076+0000 mgr.y (mgr.24422) 1011 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:52 vm07 bash[23367]: cluster 2026-03-10T10:40:50.728076+0000 mgr.y (mgr.24422) 1011 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:52.517 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:52 vm07 bash[23367]: cluster 2026-03-10T10:40:50.728076+0000 mgr.y (mgr.24422) 1011 : cluster [DBG] pgmap v1458: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:53.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:40:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:40:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:40:54.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:54 vm04 bash[28289]: cluster 2026-03-10T10:40:52.728342+0000 mgr.y (mgr.24422) 1012 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:54.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:54 vm04 bash[28289]: cluster 2026-03-10T10:40:52.728342+0000 mgr.y (mgr.24422) 1012 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:54.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:54 vm04 bash[20742]: cluster 2026-03-10T10:40:52.728342+0000 mgr.y (mgr.24422) 1012 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:54.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:54 vm04 bash[20742]: cluster 2026-03-10T10:40:52.728342+0000 mgr.y (mgr.24422) 1012 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:54 vm07 bash[23367]: cluster 2026-03-10T10:40:52.728342+0000 mgr.y (mgr.24422) 1012 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:54 vm07 bash[23367]: cluster 2026-03-10T10:40:52.728342+0000 mgr.y (mgr.24422) 1012 : cluster [DBG] pgmap v1459: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:56.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:56 vm04 bash[28289]: cluster 2026-03-10T10:40:54.729010+0000 mgr.y (mgr.24422) 1013 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:56.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:56 vm04 bash[28289]: cluster 2026-03-10T10:40:54.729010+0000 mgr.y (mgr.24422) 1013 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:56 vm04 bash[20742]: cluster 2026-03-10T10:40:54.729010+0000 mgr.y (mgr.24422) 1013 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:56.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:56 vm04 bash[20742]: cluster 2026-03-10T10:40:54.729010+0000 mgr.y (mgr.24422) 1013 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:56 vm07 bash[23367]: cluster 2026-03-10T10:40:54.729010+0000 mgr.y (mgr.24422) 1013 : cluster [DBG] pgmap v1460: 228 pgs: 
228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:56 vm07 bash[23367]: cluster 2026-03-10T10:40:54.729010+0000 mgr.y (mgr.24422) 1013 : cluster [DBG] pgmap v1460: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:40:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:58 vm04 bash[28289]: cluster 2026-03-10T10:40:56.729350+0000 mgr.y (mgr.24422) 1014 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:58 vm04 bash[28289]: cluster 2026-03-10T10:40:56.729350+0000 mgr.y (mgr.24422) 1014 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:58 vm04 bash[20742]: cluster 2026-03-10T10:40:56.729350+0000 mgr.y (mgr.24422) 1014 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:58 vm04 bash[20742]: cluster 2026-03-10T10:40:56.729350+0000 mgr.y (mgr.24422) 1014 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:58 vm07 bash[23367]: cluster 2026-03-10T10:40:56.729350+0000 mgr.y (mgr.24422) 1014 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:58 vm07 bash[23367]: cluster 2026-03-10T10:40:56.729350+0000 mgr.y (mgr.24422) 1014 : cluster [DBG] pgmap v1461: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:40:59.431 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:59 vm07 bash[23367]: audit 2026-03-10T10:40:58.494905+0000 mon.a (mon.0) 3734 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:40:59.431 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:40:59 vm07 bash[23367]: audit 2026-03-10T10:40:58.494905+0000 mon.a (mon.0) 3734 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:40:59.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:59 vm04 bash[28289]: audit 2026-03-10T10:40:58.494905+0000 mon.a (mon.0) 3734 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:40:59.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:40:59 vm04 bash[28289]: audit 2026-03-10T10:40:58.494905+0000 mon.a (mon.0) 3734 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:40:59.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:59 vm04 bash[20742]: audit 2026-03-10T10:40:58.494905+0000 mon.a (mon.0) 3734 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-10T10:40:59.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:40:59 vm04 bash[20742]: audit 2026-03-10T10:40:58.494905+0000 mon.a (mon.0) 3734 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:40:59.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:40:59 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:41:00.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:00 vm04 bash[28289]: cluster 2026-03-10T10:40:58.729804+0000 mgr.y (mgr.24422) 1015 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:00.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:00 vm04 bash[28289]: cluster 2026-03-10T10:40:58.729804+0000 mgr.y (mgr.24422) 1015 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:00.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:00 vm04 bash[28289]: audit 2026-03-10T10:40:59.432206+0000 mgr.y (mgr.24422) 1016 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:00.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:00 vm04 bash[28289]: audit 2026-03-10T10:40:59.432206+0000 mgr.y (mgr.24422) 1016 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:00 vm04 bash[20742]: cluster 2026-03-10T10:40:58.729804+0000 mgr.y (mgr.24422) 1015 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:00 vm04 bash[20742]: cluster 2026-03-10T10:40:58.729804+0000 mgr.y (mgr.24422) 1015 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:00 vm04 bash[20742]: audit 2026-03-10T10:40:59.432206+0000 mgr.y (mgr.24422) 1016 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:00.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:00 vm04 bash[20742]: audit 2026-03-10T10:40:59.432206+0000 mgr.y (mgr.24422) 1016 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:00 vm07 bash[23367]: cluster 2026-03-10T10:40:58.729804+0000 mgr.y (mgr.24422) 1015 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:00 vm07 bash[23367]: cluster 2026-03-10T10:40:58.729804+0000 mgr.y (mgr.24422) 1015 : cluster [DBG] pgmap v1462: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:00 vm07 bash[23367]: audit 2026-03-10T10:40:59.432206+0000 mgr.y (mgr.24422) 1016 : audit [DBG] from='client.24406 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:00 vm07 bash[23367]: audit 2026-03-10T10:40:59.432206+0000 mgr.y (mgr.24422) 1016 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:02.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:02 vm04 bash[28289]: cluster 2026-03-10T10:41:00.730343+0000 mgr.y (mgr.24422) 1017 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:02.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:02 vm04 bash[28289]: cluster 2026-03-10T10:41:00.730343+0000 mgr.y (mgr.24422) 1017 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:02 vm04 bash[20742]: cluster 2026-03-10T10:41:00.730343+0000 mgr.y (mgr.24422) 1017 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:02.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:02 vm04 bash[20742]: cluster 2026-03-10T10:41:00.730343+0000 mgr.y (mgr.24422) 1017 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:02.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:02 vm07 bash[23367]: cluster 2026-03-10T10:41:00.730343+0000 mgr.y (mgr.24422) 1017 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:02.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:02 vm07 bash[23367]: cluster 2026-03-10T10:41:00.730343+0000 mgr.y (mgr.24422) 1017 : cluster [DBG] pgmap v1463: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:41:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:41:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:41:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:04 vm04 bash[28289]: cluster 2026-03-10T10:41:02.730654+0000 mgr.y (mgr.24422) 1018 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:04.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:04 vm04 bash[28289]: cluster 2026-03-10T10:41:02.730654+0000 mgr.y (mgr.24422) 1018 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:04 vm04 bash[20742]: cluster 2026-03-10T10:41:02.730654+0000 mgr.y (mgr.24422) 1018 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:04 vm04 bash[20742]: cluster 2026-03-10T10:41:02.730654+0000 mgr.y (mgr.24422) 1018 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:04.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:04 
vm07 bash[23367]: cluster 2026-03-10T10:41:02.730654+0000 mgr.y (mgr.24422) 1018 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:04.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:04 vm07 bash[23367]: cluster 2026-03-10T10:41:02.730654+0000 mgr.y (mgr.24422) 1018 : cluster [DBG] pgmap v1464: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:06 vm04 bash[28289]: cluster 2026-03-10T10:41:04.731327+0000 mgr.y (mgr.24422) 1019 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:06 vm04 bash[28289]: cluster 2026-03-10T10:41:04.731327+0000 mgr.y (mgr.24422) 1019 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:06 vm04 bash[20742]: cluster 2026-03-10T10:41:04.731327+0000 mgr.y (mgr.24422) 1019 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:06 vm04 bash[20742]: cluster 2026-03-10T10:41:04.731327+0000 mgr.y (mgr.24422) 1019 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:06 vm07 bash[23367]: cluster 2026-03-10T10:41:04.731327+0000 mgr.y (mgr.24422) 1019 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:06 vm07 bash[23367]: cluster 2026-03-10T10:41:04.731327+0000 mgr.y (mgr.24422) 1019 : cluster [DBG] pgmap v1465: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:08.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:08 vm04 bash[28289]: cluster 2026-03-10T10:41:06.731725+0000 mgr.y (mgr.24422) 1020 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:08.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:08 vm04 bash[28289]: cluster 2026-03-10T10:41:06.731725+0000 mgr.y (mgr.24422) 1020 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:08 vm04 bash[20742]: cluster 2026-03-10T10:41:06.731725+0000 mgr.y (mgr.24422) 1020 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:08 vm04 bash[20742]: cluster 2026-03-10T10:41:06.731725+0000 mgr.y (mgr.24422) 1020 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:08 vm07 bash[23367]: cluster 2026-03-10T10:41:06.731725+0000 mgr.y (mgr.24422) 1020 : 
cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:08 vm07 bash[23367]: cluster 2026-03-10T10:41:06.731725+0000 mgr.y (mgr.24422) 1020 : cluster [DBG] pgmap v1466: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:09.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:41:09 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:41:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:10 vm04 bash[28289]: cluster 2026-03-10T10:41:08.732247+0000 mgr.y (mgr.24422) 1021 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:10 vm04 bash[28289]: cluster 2026-03-10T10:41:08.732247+0000 mgr.y (mgr.24422) 1021 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:10 vm04 bash[28289]: audit 2026-03-10T10:41:09.443026+0000 mgr.y (mgr.24422) 1022 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:10.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:10 vm04 bash[28289]: audit 2026-03-10T10:41:09.443026+0000 mgr.y (mgr.24422) 1022 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:10 vm04 bash[20742]: cluster 2026-03-10T10:41:08.732247+0000 mgr.y (mgr.24422) 1021 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:10 vm04 bash[20742]: cluster 2026-03-10T10:41:08.732247+0000 mgr.y (mgr.24422) 1021 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:10 vm04 bash[20742]: audit 2026-03-10T10:41:09.443026+0000 mgr.y (mgr.24422) 1022 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:10 vm04 bash[20742]: audit 2026-03-10T10:41:09.443026+0000 mgr.y (mgr.24422) 1022 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:10 vm07 bash[23367]: cluster 2026-03-10T10:41:08.732247+0000 mgr.y (mgr.24422) 1021 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:10 vm07 bash[23367]: cluster 2026-03-10T10:41:08.732247+0000 mgr.y (mgr.24422) 1021 : cluster [DBG] pgmap v1467: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:10 vm07 bash[23367]: audit 
2026-03-10T10:41:09.443026+0000 mgr.y (mgr.24422) 1022 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:10 vm07 bash[23367]: audit 2026-03-10T10:41:09.443026+0000 mgr.y (mgr.24422) 1022 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:12.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:12 vm04 bash[28289]: cluster 2026-03-10T10:41:10.732807+0000 mgr.y (mgr.24422) 1023 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:12.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:12 vm04 bash[28289]: cluster 2026-03-10T10:41:10.732807+0000 mgr.y (mgr.24422) 1023 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:12 vm04 bash[20742]: cluster 2026-03-10T10:41:10.732807+0000 mgr.y (mgr.24422) 1023 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:12 vm04 bash[20742]: cluster 2026-03-10T10:41:10.732807+0000 mgr.y (mgr.24422) 1023 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:12 vm07 bash[23367]: cluster 2026-03-10T10:41:10.732807+0000 mgr.y (mgr.24422) 1023 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:12 vm07 bash[23367]: cluster 2026-03-10T10:41:10.732807+0000 mgr.y (mgr.24422) 1023 : cluster [DBG] pgmap v1468: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:41:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:41:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:41:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:13 vm07 bash[23367]: cluster 2026-03-10T10:41:12.733134+0000 mgr.y (mgr.24422) 1024 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:13 vm07 bash[23367]: cluster 2026-03-10T10:41:12.733134+0000 mgr.y (mgr.24422) 1024 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:13 vm07 bash[23367]: audit 2026-03-10T10:41:13.501676+0000 mon.a (mon.0) 3735 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:13 vm07 bash[23367]: audit 2026-03-10T10:41:13.501676+0000 mon.a (mon.0) 3735 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-10T10:41:14.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:13 vm04 bash[28289]: cluster 2026-03-10T10:41:12.733134+0000 mgr.y (mgr.24422) 1024 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:13 vm04 bash[28289]: cluster 2026-03-10T10:41:12.733134+0000 mgr.y (mgr.24422) 1024 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:13 vm04 bash[28289]: audit 2026-03-10T10:41:13.501676+0000 mon.a (mon.0) 3735 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:13 vm04 bash[28289]: audit 2026-03-10T10:41:13.501676+0000 mon.a (mon.0) 3735 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:13 vm04 bash[20742]: cluster 2026-03-10T10:41:12.733134+0000 mgr.y (mgr.24422) 1024 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:13 vm04 bash[20742]: cluster 2026-03-10T10:41:12.733134+0000 mgr.y (mgr.24422) 1024 : cluster [DBG] pgmap v1469: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:13 vm04 bash[20742]: audit 2026-03-10T10:41:13.501676+0000 mon.a (mon.0) 3735 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:13 vm04 bash[20742]: audit 2026-03-10T10:41:13.501676+0000 mon.a (mon.0) 3735 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:16.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:15 vm04 bash[28289]: cluster 2026-03-10T10:41:14.733932+0000 mgr.y (mgr.24422) 1025 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:16.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:15 vm04 bash[28289]: cluster 2026-03-10T10:41:14.733932+0000 mgr.y (mgr.24422) 1025 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:15 vm04 bash[20742]: cluster 2026-03-10T10:41:14.733932+0000 mgr.y (mgr.24422) 1025 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:15 vm04 bash[20742]: cluster 2026-03-10T10:41:14.733932+0000 mgr.y (mgr.24422) 1025 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:16.267 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:15 vm07 bash[23367]: cluster 2026-03-10T10:41:14.733932+0000 mgr.y (mgr.24422) 1025 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:15 vm07 bash[23367]: cluster 2026-03-10T10:41:14.733932+0000 mgr.y (mgr.24422) 1025 : cluster [DBG] pgmap v1470: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:18.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:17 vm04 bash[28289]: cluster 2026-03-10T10:41:16.734255+0000 mgr.y (mgr.24422) 1026 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:18.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:17 vm04 bash[28289]: cluster 2026-03-10T10:41:16.734255+0000 mgr.y (mgr.24422) 1026 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:17 vm04 bash[20742]: cluster 2026-03-10T10:41:16.734255+0000 mgr.y (mgr.24422) 1026 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:18.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:17 vm04 bash[20742]: cluster 2026-03-10T10:41:16.734255+0000 mgr.y (mgr.24422) 1026 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:17 vm07 bash[23367]: cluster 2026-03-10T10:41:16.734255+0000 mgr.y (mgr.24422) 1026 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:17 vm07 bash[23367]: cluster 2026-03-10T10:41:16.734255+0000 mgr.y (mgr.24422) 1026 : cluster [DBG] pgmap v1471: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:19.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:41:19 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:41:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:19 vm04 bash[28289]: cluster 2026-03-10T10:41:18.734750+0000 mgr.y (mgr.24422) 1027 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:19 vm04 bash[28289]: cluster 2026-03-10T10:41:18.734750+0000 mgr.y (mgr.24422) 1027 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:19 vm04 bash[28289]: audit 2026-03-10T10:41:19.453793+0000 mgr.y (mgr.24422) 1028 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:20.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:19 vm04 bash[28289]: audit 2026-03-10T10:41:19.453793+0000 mgr.y (mgr.24422) 1028 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-10T10:41:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:19 vm04 bash[20742]: cluster 2026-03-10T10:41:18.734750+0000 mgr.y (mgr.24422) 1027 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:19 vm04 bash[20742]: cluster 2026-03-10T10:41:18.734750+0000 mgr.y (mgr.24422) 1027 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:19 vm04 bash[20742]: audit 2026-03-10T10:41:19.453793+0000 mgr.y (mgr.24422) 1028 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:19 vm04 bash[20742]: audit 2026-03-10T10:41:19.453793+0000 mgr.y (mgr.24422) 1028 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:20.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:19 vm07 bash[23367]: cluster 2026-03-10T10:41:18.734750+0000 mgr.y (mgr.24422) 1027 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:20.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:19 vm07 bash[23367]: cluster 2026-03-10T10:41:18.734750+0000 mgr.y (mgr.24422) 1027 : cluster [DBG] pgmap v1472: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:20.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:19 vm07 bash[23367]: audit 2026-03-10T10:41:19.453793+0000 mgr.y (mgr.24422) 1028 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:20.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:19 vm07 bash[23367]: audit 2026-03-10T10:41:19.453793+0000 mgr.y (mgr.24422) 1028 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:22.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:21 vm04 bash[28289]: cluster 2026-03-10T10:41:20.735318+0000 mgr.y (mgr.24422) 1029 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:22.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:21 vm04 bash[28289]: cluster 2026-03-10T10:41:20.735318+0000 mgr.y (mgr.24422) 1029 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:21 vm04 bash[20742]: cluster 2026-03-10T10:41:20.735318+0000 mgr.y (mgr.24422) 1029 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:21 vm04 bash[20742]: cluster 2026-03-10T10:41:20.735318+0000 mgr.y (mgr.24422) 1029 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:22.267 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:21 vm07 bash[23367]: cluster 2026-03-10T10:41:20.735318+0000 mgr.y (mgr.24422) 1029 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:22.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:21 vm07 bash[23367]: cluster 2026-03-10T10:41:20.735318+0000 mgr.y (mgr.24422) 1029 : cluster [DBG] pgmap v1473: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:41:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:41:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:41:24.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:23 vm04 bash[28289]: cluster 2026-03-10T10:41:22.735617+0000 mgr.y (mgr.24422) 1030 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:24.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:23 vm04 bash[28289]: cluster 2026-03-10T10:41:22.735617+0000 mgr.y (mgr.24422) 1030 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:23 vm04 bash[20742]: cluster 2026-03-10T10:41:22.735617+0000 mgr.y (mgr.24422) 1030 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:24.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:23 vm04 bash[20742]: cluster 2026-03-10T10:41:22.735617+0000 mgr.y (mgr.24422) 1030 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:23 vm07 bash[23367]: cluster 2026-03-10T10:41:22.735617+0000 mgr.y (mgr.24422) 1030 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:23 vm07 bash[23367]: cluster 2026-03-10T10:41:22.735617+0000 mgr.y (mgr.24422) 1030 : cluster [DBG] pgmap v1474: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:26.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:25 vm04 bash[28289]: cluster 2026-03-10T10:41:24.736251+0000 mgr.y (mgr.24422) 1031 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:26.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:25 vm04 bash[28289]: cluster 2026-03-10T10:41:24.736251+0000 mgr.y (mgr.24422) 1031 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:25 vm04 bash[20742]: cluster 2026-03-10T10:41:24.736251+0000 mgr.y (mgr.24422) 1031 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:25 vm04 bash[20742]: cluster 2026-03-10T10:41:24.736251+0000 mgr.y (mgr.24422) 1031 : cluster [DBG] pgmap v1475: 228 pgs: 
228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:25 vm07 bash[23367]: cluster 2026-03-10T10:41:24.736251+0000 mgr.y (mgr.24422) 1031 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:25 vm07 bash[23367]: cluster 2026-03-10T10:41:24.736251+0000 mgr.y (mgr.24422) 1031 : cluster [DBG] pgmap v1475: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:28.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:27 vm04 bash[28289]: cluster 2026-03-10T10:41:26.736601+0000 mgr.y (mgr.24422) 1032 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:27 vm04 bash[28289]: cluster 2026-03-10T10:41:26.736601+0000 mgr.y (mgr.24422) 1032 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:27 vm04 bash[20742]: cluster 2026-03-10T10:41:26.736601+0000 mgr.y (mgr.24422) 1032 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:27 vm04 bash[20742]: cluster 2026-03-10T10:41:26.736601+0000 mgr.y (mgr.24422) 1032 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:28.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:27 vm07 bash[23367]: cluster 2026-03-10T10:41:26.736601+0000 mgr.y (mgr.24422) 1032 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:28.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:27 vm07 bash[23367]: cluster 2026-03-10T10:41:26.736601+0000 mgr.y (mgr.24422) 1032 : cluster [DBG] pgmap v1476: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:29.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:28 vm04 bash[28289]: audit 2026-03-10T10:41:28.508758+0000 mon.a (mon.0) 3736 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:29.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:28 vm04 bash[28289]: audit 2026-03-10T10:41:28.508758+0000 mon.a (mon.0) 3736 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:28 vm04 bash[20742]: audit 2026-03-10T10:41:28.508758+0000 mon.a (mon.0) 3736 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:29.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:28 vm04 bash[20742]: audit 2026-03-10T10:41:28.508758+0000 mon.a (mon.0) 3736 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-10T10:41:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:28 vm07 bash[23367]: audit 2026-03-10T10:41:28.508758+0000 mon.a (mon.0) 3736 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:28 vm07 bash[23367]: audit 2026-03-10T10:41:28.508758+0000 mon.a (mon.0) 3736 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:29.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:41:29 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:41:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:29 vm04 bash[28289]: cluster 2026-03-10T10:41:28.737177+0000 mgr.y (mgr.24422) 1033 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:29 vm04 bash[28289]: cluster 2026-03-10T10:41:28.737177+0000 mgr.y (mgr.24422) 1033 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:29 vm04 bash[28289]: audit 2026-03-10T10:41:29.461862+0000 mgr.y (mgr.24422) 1034 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:30.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:29 vm04 bash[28289]: audit 2026-03-10T10:41:29.461862+0000 mgr.y (mgr.24422) 1034 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:29 vm04 bash[20742]: cluster 2026-03-10T10:41:28.737177+0000 mgr.y (mgr.24422) 1033 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:29 vm04 bash[20742]: cluster 2026-03-10T10:41:28.737177+0000 mgr.y (mgr.24422) 1033 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:29 vm04 bash[20742]: audit 2026-03-10T10:41:29.461862+0000 mgr.y (mgr.24422) 1034 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:30.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:29 vm04 bash[20742]: audit 2026-03-10T10:41:29.461862+0000 mgr.y (mgr.24422) 1034 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:29 vm07 bash[23367]: cluster 2026-03-10T10:41:28.737177+0000 mgr.y (mgr.24422) 1033 : cluster [DBG] pgmap v1477: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:29 vm07 bash[23367]: cluster 2026-03-10T10:41:28.737177+0000 mgr.y (mgr.24422) 1033 : cluster [DBG] pgmap v1477: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:29 vm07 bash[23367]: audit 2026-03-10T10:41:29.461862+0000 mgr.y (mgr.24422) 1034 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:29 vm07 bash[23367]: audit 2026-03-10T10:41:29.461862+0000 mgr.y (mgr.24422) 1034 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:32.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:31 vm04 bash[28289]: cluster 2026-03-10T10:41:30.737658+0000 mgr.y (mgr.24422) 1035 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:32.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:31 vm04 bash[28289]: cluster 2026-03-10T10:41:30.737658+0000 mgr.y (mgr.24422) 1035 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:31 vm04 bash[20742]: cluster 2026-03-10T10:41:30.737658+0000 mgr.y (mgr.24422) 1035 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:32.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:31 vm04 bash[20742]: cluster 2026-03-10T10:41:30.737658+0000 mgr.y (mgr.24422) 1035 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:31 vm07 bash[23367]: cluster 2026-03-10T10:41:30.737658+0000 mgr.y (mgr.24422) 1035 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:31 vm07 bash[23367]: cluster 2026-03-10T10:41:30.737658+0000 mgr.y (mgr.24422) 1035 : cluster [DBG] pgmap v1478: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:33.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:41:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:41:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:41:34.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:33 vm04 bash[28289]: cluster 2026-03-10T10:41:32.738007+0000 mgr.y (mgr.24422) 1036 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:34.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:33 vm04 bash[28289]: cluster 2026-03-10T10:41:32.738007+0000 mgr.y (mgr.24422) 1036 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:33 vm04 bash[20742]: cluster 2026-03-10T10:41:32.738007+0000 mgr.y (mgr.24422) 1036 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:34.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:33 vm04 
bash[20742]: cluster 2026-03-10T10:41:32.738007+0000 mgr.y (mgr.24422) 1036 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:33 vm07 bash[23367]: cluster 2026-03-10T10:41:32.738007+0000 mgr.y (mgr.24422) 1036 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:33 vm07 bash[23367]: cluster 2026-03-10T10:41:32.738007+0000 mgr.y (mgr.24422) 1036 : cluster [DBG] pgmap v1479: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:35 vm04 bash[28289]: cluster 2026-03-10T10:41:34.738758+0000 mgr.y (mgr.24422) 1037 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:36.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:35 vm04 bash[28289]: cluster 2026-03-10T10:41:34.738758+0000 mgr.y (mgr.24422) 1037 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:35 vm04 bash[20742]: cluster 2026-03-10T10:41:34.738758+0000 mgr.y (mgr.24422) 1037 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:35 vm04 bash[20742]: cluster 2026-03-10T10:41:34.738758+0000 mgr.y (mgr.24422) 1037 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:35 vm07 bash[23367]: cluster 2026-03-10T10:41:34.738758+0000 mgr.y (mgr.24422) 1037 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:35 vm07 bash[23367]: cluster 2026-03-10T10:41:34.738758+0000 mgr.y (mgr.24422) 1037 : cluster [DBG] pgmap v1480: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:37 vm04 bash[28289]: cluster 2026-03-10T10:41:36.739089+0000 mgr.y (mgr.24422) 1038 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:38.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:37 vm04 bash[28289]: cluster 2026-03-10T10:41:36.739089+0000 mgr.y (mgr.24422) 1038 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:38.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:37 vm04 bash[20742]: cluster 2026-03-10T10:41:36.739089+0000 mgr.y (mgr.24422) 1038 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:38.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:37 vm04 bash[20742]: cluster 2026-03-10T10:41:36.739089+0000 mgr.y (mgr.24422) 1038 : cluster 
[DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:37 vm07 bash[23367]: cluster 2026-03-10T10:41:36.739089+0000 mgr.y (mgr.24422) 1038 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:37 vm07 bash[23367]: cluster 2026-03-10T10:41:36.739089+0000 mgr.y (mgr.24422) 1038 : cluster [DBG] pgmap v1481: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:39.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:41:39 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:41:40.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:39 vm04 bash[28289]: cluster 2026-03-10T10:41:38.739738+0000 mgr.y (mgr.24422) 1039 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:40.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:39 vm04 bash[28289]: cluster 2026-03-10T10:41:38.739738+0000 mgr.y (mgr.24422) 1039 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:40.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:39 vm04 bash[28289]: audit 2026-03-10T10:41:39.470825+0000 mgr.y (mgr.24422) 1040 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:40.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:39 vm04 bash[28289]: audit 2026-03-10T10:41:39.470825+0000 mgr.y (mgr.24422) 1040 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:39 vm04 bash[20742]: cluster 2026-03-10T10:41:38.739738+0000 mgr.y (mgr.24422) 1039 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:39 vm04 bash[20742]: cluster 2026-03-10T10:41:38.739738+0000 mgr.y (mgr.24422) 1039 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:39 vm04 bash[20742]: audit 2026-03-10T10:41:39.470825+0000 mgr.y (mgr.24422) 1040 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:40.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:39 vm04 bash[20742]: audit 2026-03-10T10:41:39.470825+0000 mgr.y (mgr.24422) 1040 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:39 vm07 bash[23367]: cluster 2026-03-10T10:41:38.739738+0000 mgr.y (mgr.24422) 1039 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:39 vm07 bash[23367]: cluster 
2026-03-10T10:41:38.739738+0000 mgr.y (mgr.24422) 1039 : cluster [DBG] pgmap v1482: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:39 vm07 bash[23367]: audit 2026-03-10T10:41:39.470825+0000 mgr.y (mgr.24422) 1040 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:39 vm07 bash[23367]: audit 2026-03-10T10:41:39.470825+0000 mgr.y (mgr.24422) 1040 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:42.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:41 vm04 bash[28289]: cluster 2026-03-10T10:41:40.740257+0000 mgr.y (mgr.24422) 1041 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:42.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:41 vm04 bash[28289]: cluster 2026-03-10T10:41:40.740257+0000 mgr.y (mgr.24422) 1041 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:41 vm04 bash[20742]: cluster 2026-03-10T10:41:40.740257+0000 mgr.y (mgr.24422) 1041 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:42.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:41 vm04 bash[20742]: cluster 2026-03-10T10:41:40.740257+0000 mgr.y (mgr.24422) 1041 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:41 vm07 bash[23367]: cluster 2026-03-10T10:41:40.740257+0000 mgr.y (mgr.24422) 1041 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:41 vm07 bash[23367]: cluster 2026-03-10T10:41:40.740257+0000 mgr.y (mgr.24422) 1041 : cluster [DBG] pgmap v1483: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:41:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:41:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:41:44.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:43 vm04 bash[28289]: cluster 2026-03-10T10:41:42.740604+0000 mgr.y (mgr.24422) 1042 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:44.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:43 vm04 bash[28289]: cluster 2026-03-10T10:41:42.740604+0000 mgr.y (mgr.24422) 1042 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:44.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:43 vm04 bash[28289]: audit 2026-03-10T10:41:43.514254+0000 mon.a (mon.0) 3737 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-10T10:41:44.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:43 vm04 bash[28289]: audit 2026-03-10T10:41:43.514254+0000 mon.a (mon.0) 3737 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:43 vm04 bash[20742]: cluster 2026-03-10T10:41:42.740604+0000 mgr.y (mgr.24422) 1042 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:43 vm04 bash[20742]: cluster 2026-03-10T10:41:42.740604+0000 mgr.y (mgr.24422) 1042 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:43 vm04 bash[20742]: audit 2026-03-10T10:41:43.514254+0000 mon.a (mon.0) 3737 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:43 vm04 bash[20742]: audit 2026-03-10T10:41:43.514254+0000 mon.a (mon.0) 3737 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:43 vm07 bash[23367]: cluster 2026-03-10T10:41:42.740604+0000 mgr.y (mgr.24422) 1042 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:44.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:43 vm07 bash[23367]: cluster 2026-03-10T10:41:42.740604+0000 mgr.y (mgr.24422) 1042 : cluster [DBG] pgmap v1484: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:44.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:43 vm07 bash[23367]: audit 2026-03-10T10:41:43.514254+0000 mon.a (mon.0) 3737 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:44.268 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:43 vm07 bash[23367]: audit 2026-03-10T10:41:43.514254+0000 mon.a (mon.0) 3737 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:41:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:44 vm07 bash[23367]: audit 2026-03-10T10:41:44.258746+0000 mon.a (mon.0) 3738 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:41:45.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:44 vm07 bash[23367]: audit 2026-03-10T10:41:44.258746+0000 mon.a (mon.0) 3738 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:41:45.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:44 vm04 bash[28289]: audit 2026-03-10T10:41:44.258746+0000 mon.a (mon.0) 3738 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:41:45.452 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:44 vm04 bash[28289]: audit 2026-03-10T10:41:44.258746+0000 mon.a (mon.0) 3738 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:41:45.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:44 vm04 bash[20742]: audit 2026-03-10T10:41:44.258746+0000 mon.a (mon.0) 3738 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:41:45.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:44 vm04 bash[20742]: audit 2026-03-10T10:41:44.258746+0000 mon.a (mon.0) 3738 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:41:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:45 vm07 bash[23367]: cluster 2026-03-10T10:41:44.741310+0000 mgr.y (mgr.24422) 1043 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:45 vm07 bash[23367]: cluster 2026-03-10T10:41:44.741310+0000 mgr.y (mgr.24422) 1043 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:46.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:45 vm04 bash[28289]: cluster 2026-03-10T10:41:44.741310+0000 mgr.y (mgr.24422) 1043 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:46.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:45 vm04 bash[28289]: cluster 2026-03-10T10:41:44.741310+0000 mgr.y (mgr.24422) 1043 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:45 vm04 bash[20742]: cluster 2026-03-10T10:41:44.741310+0000 mgr.y (mgr.24422) 1043 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:45 vm04 bash[20742]: cluster 2026-03-10T10:41:44.741310+0000 mgr.y (mgr.24422) 1043 : cluster [DBG] pgmap v1485: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:48.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:47 vm07 bash[23367]: cluster 2026-03-10T10:41:46.741596+0000 mgr.y (mgr.24422) 1044 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:48.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:47 vm07 bash[23367]: cluster 2026-03-10T10:41:46.741596+0000 mgr.y (mgr.24422) 1044 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:48.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:47 vm04 bash[28289]: cluster 2026-03-10T10:41:46.741596+0000 mgr.y (mgr.24422) 1044 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:48.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:47 vm04 bash[28289]: cluster 
2026-03-10T10:41:46.741596+0000 mgr.y (mgr.24422) 1044 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:48.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:47 vm04 bash[20742]: cluster 2026-03-10T10:41:46.741596+0000 mgr.y (mgr.24422) 1044 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:48.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:47 vm04 bash[20742]: cluster 2026-03-10T10:41:46.741596+0000 mgr.y (mgr.24422) 1044 : cluster [DBG] pgmap v1486: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:49.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:41:49 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:41:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:49 vm07 bash[23367]: cluster 2026-03-10T10:41:48.742073+0000 mgr.y (mgr.24422) 1045 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:49 vm07 bash[23367]: cluster 2026-03-10T10:41:48.742073+0000 mgr.y (mgr.24422) 1045 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:50.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:49 vm04 bash[28289]: cluster 2026-03-10T10:41:48.742073+0000 mgr.y (mgr.24422) 1045 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:50.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:49 vm04 bash[28289]: cluster 2026-03-10T10:41:48.742073+0000 mgr.y (mgr.24422) 1045 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:49 vm04 bash[20742]: cluster 2026-03-10T10:41:48.742073+0000 mgr.y (mgr.24422) 1045 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:50.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:49 vm04 bash[20742]: cluster 2026-03-10T10:41:48.742073+0000 mgr.y (mgr.24422) 1045 : cluster [DBG] pgmap v1487: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:41:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:49.482269+0000 mgr.y (mgr.24422) 1046 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:49.482269+0000 mgr.y (mgr.24422) 1046 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:50.149014+0000 mon.a (mon.0) 3739 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 
2026-03-10T10:41:50.149014+0000 mon.a (mon.0) 3739 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:50.154113+0000 mon.a (mon.0) 3740 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:50.154113+0000 mon.a (mon.0) 3740 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:50.155510+0000 mon.a (mon.0) 3741 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:41:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:50.155510+0000 mon.a (mon.0) 3741 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:41:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:50.156008+0000 mon.a (mon.0) 3742 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:41:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:50.156008+0000 mon.a (mon.0) 3742 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:50.160460+0000 mon.a (mon.0) 3743 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:51 vm04 bash[28289]: audit 2026-03-10T10:41:50.160460+0000 mon.a (mon.0) 3743 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:49.482269+0000 mgr.y (mgr.24422) 1046 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:49.482269+0000 mgr.y (mgr.24422) 1046 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:50.149014+0000 mon.a (mon.0) 3739 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:50.149014+0000 mon.a (mon.0) 3739 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:50.154113+0000 mon.a (mon.0) 3740 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: 
audit 2026-03-10T10:41:50.154113+0000 mon.a (mon.0) 3740 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:50.155510+0000 mon.a (mon.0) 3741 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:50.155510+0000 mon.a (mon.0) 3741 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:50.156008+0000 mon.a (mon.0) 3742 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:50.156008+0000 mon.a (mon.0) 3742 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:50.160460+0000 mon.a (mon.0) 3743 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:51 vm04 bash[20742]: audit 2026-03-10T10:41:50.160460+0000 mon.a (mon.0) 3743 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:49.482269+0000 mgr.y (mgr.24422) 1046 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:49.482269+0000 mgr.y (mgr.24422) 1046 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:50.149014+0000 mon.a (mon.0) 3739 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:50.149014+0000 mon.a (mon.0) 3739 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:50.154113+0000 mon.a (mon.0) 3740 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:50.154113+0000 mon.a (mon.0) 3740 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:50.155510+0000 mon.a (mon.0) 3741 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:41:51.517 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:50.155510+0000 mon.a (mon.0) 3741 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:50.156008+0000 mon.a (mon.0) 3742 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:50.156008+0000 mon.a (mon.0) 3742 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:50.160460+0000 mon.a (mon.0) 3743 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:51 vm07 bash[23367]: audit 2026-03-10T10:41:50.160460+0000 mon.a (mon.0) 3743 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:41:52.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:52 vm04 bash[28289]: cluster 2026-03-10T10:41:50.742569+0000 mgr.y (mgr.24422) 1047 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:52.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:52 vm04 bash[28289]: cluster 2026-03-10T10:41:50.742569+0000 mgr.y (mgr.24422) 1047 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:52.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:52 vm04 bash[20742]: cluster 2026-03-10T10:41:50.742569+0000 mgr.y (mgr.24422) 1047 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:52.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:52 vm04 bash[20742]: cluster 2026-03-10T10:41:50.742569+0000 mgr.y (mgr.24422) 1047 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:52 vm07 bash[23367]: cluster 2026-03-10T10:41:50.742569+0000 mgr.y (mgr.24422) 1047 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:52 vm07 bash[23367]: cluster 2026-03-10T10:41:50.742569+0000 mgr.y (mgr.24422) 1047 : cluster [DBG] pgmap v1488: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:41:53.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:41:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:41:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:41:54.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:54 vm04 bash[28289]: cluster 2026-03-10T10:41:52.742885+0000 mgr.y (mgr.24422) 1048 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:41:54.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:54 vm04 bash[20742]: cluster 2026-03-10T10:41:52.742885+0000 mgr.y (mgr.24422) 1048 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:41:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:54 vm07 bash[23367]: cluster 2026-03-10T10:41:52.742885+0000 mgr.y (mgr.24422) 1048 : cluster [DBG] pgmap v1489: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:41:56.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:56 vm04 bash[28289]: cluster 2026-03-10T10:41:54.743580+0000 mgr.y (mgr.24422) 1049 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:41:56.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:56 vm04 bash[20742]: cluster 2026-03-10T10:41:54.743580+0000 mgr.y (mgr.24422) 1049 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:41:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:56 vm07 bash[23367]: cluster 2026-03-10T10:41:54.743580+0000 mgr.y (mgr.24422) 1049 : cluster [DBG] pgmap v1490: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:41:58.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:58 vm04 bash[20742]: cluster 2026-03-10T10:41:56.743903+0000 mgr.y (mgr.24422) 1050 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:41:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:58 vm04 bash[28289]: cluster 2026-03-10T10:41:56.743903+0000 mgr.y (mgr.24422) 1050 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:41:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:58 vm07 bash[23367]: cluster 2026-03-10T10:41:56.743903+0000 mgr.y (mgr.24422) 1050 : cluster [DBG] pgmap v1491: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:41:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:41:59 vm04 bash[20742]: audit 2026-03-10T10:41:58.519976+0000 mon.a (mon.0) 3744 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:41:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:41:59 vm04 bash[28289]: audit 2026-03-10T10:41:58.519976+0000 mon.a (mon.0) 3744 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:41:59.488 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:41:59 vm07 bash[23367]: audit 2026-03-10T10:41:58.519976+0000 mon.a (mon.0) 3744 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:41:59.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:41:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:42:00.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:00 vm04 bash[20742]: cluster 2026-03-10T10:41:58.744371+0000 mgr.y (mgr.24422) 1051 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:00.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:00 vm04 bash[28289]: cluster 2026-03-10T10:41:58.744371+0000 mgr.y (mgr.24422) 1051 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:00 vm07 bash[23367]: cluster 2026-03-10T10:41:58.744371+0000 mgr.y (mgr.24422) 1051 : cluster [DBG] pgmap v1492: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:01.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:01 vm07 bash[23367]: audit 2026-03-10T10:41:59.489610+0000 mgr.y (mgr.24422) 1052 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:01.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:01 vm04 bash[20742]: audit 2026-03-10T10:41:59.489610+0000 mgr.y (mgr.24422) 1052 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:01.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:01 vm04 bash[28289]: audit 2026-03-10T10:41:59.489610+0000 mgr.y (mgr.24422) 1052 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:02.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:02 vm07 bash[23367]: cluster 2026-03-10T10:42:00.744933+0000 mgr.y (mgr.24422) 1053 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:02.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:02 vm04 bash[20742]: cluster 2026-03-10T10:42:00.744933+0000 mgr.y (mgr.24422) 1053 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:02.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:02 vm04 bash[28289]: cluster 2026-03-10T10:42:00.744933+0000 mgr.y (mgr.24422) 1053 : cluster [DBG] pgmap v1493: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:03.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:42:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:42:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:42:04.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:04 vm07 bash[23367]: cluster 2026-03-10T10:42:02.745210+0000 mgr.y (mgr.24422) 1054 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:04.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:04 vm04 bash[20742]: cluster 2026-03-10T10:42:02.745210+0000 mgr.y (mgr.24422) 1054 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:04.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:04 vm04 bash[28289]: cluster 2026-03-10T10:42:02.745210+0000 mgr.y (mgr.24422) 1054 : cluster [DBG] pgmap v1494: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:06.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:06 vm04 bash[28289]: cluster 2026-03-10T10:42:04.745838+0000 mgr.y (mgr.24422) 1055 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:06 vm04 bash[20742]: cluster 2026-03-10T10:42:04.745838+0000 mgr.y (mgr.24422) 1055 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:06.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:06 vm07 bash[23367]: cluster 2026-03-10T10:42:04.745838+0000 mgr.y (mgr.24422) 1055 : cluster [DBG] pgmap v1495: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:08.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:08 vm04 bash[20742]: cluster 2026-03-10T10:42:06.746108+0000 mgr.y (mgr.24422) 1056 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:08.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:08 vm04 bash[28289]: cluster 2026-03-10T10:42:06.746108+0000 mgr.y (mgr.24422) 1056 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:08.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:08 vm07 bash[23367]: cluster 2026-03-10T10:42:06.746108+0000 mgr.y (mgr.24422) 1056 : cluster [DBG] pgmap v1496: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:09.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:42:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:42:10.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:10 vm04 bash[20742]: cluster 2026-03-10T10:42:08.746662+0000 mgr.y (mgr.24422) 1057 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:10.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:10 vm04 bash[28289]: cluster 2026-03-10T10:42:08.746662+0000 mgr.y (mgr.24422) 1057 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:10 vm07 bash[23367]: cluster 2026-03-10T10:42:08.746662+0000 mgr.y (mgr.24422) 1057 : cluster [DBG] pgmap v1497: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:11.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:11 vm04 bash[20742]: audit 2026-03-10T10:42:09.493802+0000 mgr.y (mgr.24422) 1058 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:11.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:11 vm04 bash[28289]: audit 2026-03-10T10:42:09.493802+0000 mgr.y (mgr.24422) 1058 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:11 vm07 bash[23367]: audit 2026-03-10T10:42:09.493802+0000 mgr.y (mgr.24422) 1058 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:12.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:12 vm04 bash[28289]: cluster 2026-03-10T10:42:10.747170+0000 mgr.y (mgr.24422) 1059 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:12.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:12 vm04 bash[20742]: cluster 2026-03-10T10:42:10.747170+0000 mgr.y (mgr.24422) 1059 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:12.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:12 vm07 bash[23367]: cluster 2026-03-10T10:42:10.747170+0000 mgr.y (mgr.24422) 1059 : cluster [DBG] pgmap v1498: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:13.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:42:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:42:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:42:14.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:14 vm04 bash[20742]: cluster 2026-03-10T10:42:12.747468+0000 mgr.y (mgr.24422) 1060 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:14.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:14 vm04 bash[20742]: audit 2026-03-10T10:42:13.526610+0000 mon.a (mon.0) 3745 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:42:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:14 vm04 bash[28289]: cluster 2026-03-10T10:42:12.747468+0000 mgr.y (mgr.24422) 1060 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:14 vm04 bash[28289]: audit 2026-03-10T10:42:13.526610+0000 mon.a (mon.0) 3745 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:42:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:14 vm07 bash[23367]: cluster 2026-03-10T10:42:12.747468+0000 mgr.y (mgr.24422) 1060 : cluster [DBG] pgmap v1499: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:14 vm07 bash[23367]: audit 2026-03-10T10:42:13.526610+0000 mon.a (mon.0) 3745 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:42:16.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:16 vm04 bash[20742]: cluster 2026-03-10T10:42:14.748257+0000 mgr.y (mgr.24422) 1061 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:16.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:16 vm04 bash[28289]: cluster 2026-03-10T10:42:14.748257+0000 mgr.y (mgr.24422) 1061 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:16.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:16 vm07 bash[23367]: cluster 2026-03-10T10:42:14.748257+0000 mgr.y (mgr.24422) 1061 : cluster [DBG] pgmap v1500: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:18.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:18 vm04 bash[28289]: cluster 2026-03-10T10:42:16.748544+0000 mgr.y (mgr.24422) 1062 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:18 vm04 bash[20742]: cluster 2026-03-10T10:42:16.748544+0000 mgr.y (mgr.24422) 1062 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:18 vm07 bash[23367]: cluster 2026-03-10T10:42:16.748544+0000 mgr.y (mgr.24422) 1062 : cluster [DBG] pgmap v1501: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:19.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:42:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:42:20.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:20 vm04 bash[20742]: cluster 2026-03-10T10:42:18.749042+0000 mgr.y (mgr.24422) 1063 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:20.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:20 vm04 bash[28289]: cluster 2026-03-10T10:42:18.749042+0000 mgr.y (mgr.24422) 1063 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:20 vm07 bash[23367]: cluster 2026-03-10T10:42:18.749042+0000 mgr.y (mgr.24422) 1063 : cluster [DBG] pgmap v1502: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:21.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:21 vm04 bash[20742]: audit 2026-03-10T10:42:19.495865+0000 mgr.y (mgr.24422) 1064 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:21 vm04 bash[28289]: audit 2026-03-10T10:42:19.495865+0000 mgr.y (mgr.24422) 1064 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:21 vm07 bash[23367]: audit 2026-03-10T10:42:19.495865+0000 mgr.y (mgr.24422) 1064 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:22 vm04 bash[20742]: cluster 2026-03-10T10:42:20.749642+0000 mgr.y (mgr.24422) 1065 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:22 vm04 bash[28289]: cluster 2026-03-10T10:42:20.749642+0000 mgr.y (mgr.24422) 1065 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:22 vm07 bash[23367]: cluster 2026-03-10T10:42:20.749642+0000 mgr.y (mgr.24422) 1065 : cluster [DBG] pgmap v1503: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:23.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:42:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:42:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:42:24.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:24 vm04 bash[28289]: cluster 2026-03-10T10:42:22.750184+0000 mgr.y (mgr.24422) 1066 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:24.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:24 vm04 bash[20742]: cluster 2026-03-10T10:42:22.750184+0000 mgr.y (mgr.24422) 1066 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:24 vm07 bash[23367]: cluster 2026-03-10T10:42:22.750184+0000 mgr.y (mgr.24422) 1066 : cluster [DBG] pgmap v1504: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:26.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:26 vm04 bash[28289]: cluster 2026-03-10T10:42:24.750982+0000 mgr.y (mgr.24422) 1067 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:26 vm04 bash[20742]: cluster 2026-03-10T10:42:24.750982+0000 mgr.y (mgr.24422) 1067 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:26 vm07 bash[23367]: cluster 2026-03-10T10:42:24.750982+0000 mgr.y (mgr.24422) 1067 : cluster [DBG] pgmap v1505: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:28.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:28 vm04 bash[28289]: cluster 2026-03-10T10:42:26.751308+0000 mgr.y (mgr.24422) 1068 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:28.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:28 vm04 bash[20742]: cluster 2026-03-10T10:42:26.751308+0000 mgr.y (mgr.24422) 1068 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:28.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:28 vm07 bash[23367]: cluster 2026-03-10T10:42:26.751308+0000 mgr.y (mgr.24422) 1068 : cluster [DBG] pgmap v1506: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:29.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:29 vm04 bash[28289]: audit 2026-03-10T10:42:28.531968+0000 mon.a (mon.0) 3746 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:42:29.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:29 vm04 bash[20742]: audit 2026-03-10T10:42:28.531968+0000 mon.a (mon.0) 3746 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:42:29.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:42:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:42:29.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:29 vm07 bash[23367]: audit 2026-03-10T10:42:28.531968+0000 mon.a (mon.0) 3746 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:42:30.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:30 vm04 bash[28289]: cluster 2026-03-10T10:42:28.751980+0000 mgr.y (mgr.24422) 1069 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:30 vm04 bash[20742]: cluster 2026-03-10T10:42:28.751980+0000 mgr.y (mgr.24422) 1069 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:30.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:30 vm07 bash[23367]: cluster 2026-03-10T10:42:28.751980+0000 mgr.y (mgr.24422) 1069 : cluster [DBG] pgmap v1507: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:31.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:31 vm04 bash[28289]: audit 2026-03-10T10:42:29.506673+0000 mgr.y (mgr.24422) 1070 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:31.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:31 vm04 bash[20742]: audit 2026-03-10T10:42:29.506673+0000 mgr.y (mgr.24422) 1070 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:31.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:31 vm07 bash[23367]: audit 2026-03-10T10:42:29.506673+0000 mgr.y (mgr.24422) 1070 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:32.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:32 vm04 bash[28289]: cluster 2026-03-10T10:42:30.752434+0000 mgr.y (mgr.24422) 1071 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:32 vm04 bash[20742]: cluster 2026-03-10T10:42:30.752434+0000 mgr.y (mgr.24422) 1071 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:32.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:32 vm07 bash[23367]: cluster 2026-03-10T10:42:30.752434+0000 mgr.y (mgr.24422) 1071 : cluster [DBG] pgmap v1508: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:33.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:42:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:42:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:42:34.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:33 vm07 bash[23367]: cluster 2026-03-10T10:42:32.752739+0000 mgr.y (mgr.24422) 1072 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:34.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:33 vm04 bash[28289]: cluster 2026-03-10T10:42:32.752739+0000 mgr.y (mgr.24422) 1072 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:34.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:33 vm04 bash[20742]: cluster 2026-03-10T10:42:32.752739+0000 mgr.y (mgr.24422) 1072 : cluster [DBG] pgmap v1509: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:36.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:35 vm04 bash[28289]: cluster 2026-03-10T10:42:34.753378+0000 mgr.y (mgr.24422) 1073 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:36.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:35 vm04 bash[20742]: cluster 2026-03-10T10:42:34.753378+0000 mgr.y (mgr.24422) 1073 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:36.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:35 vm07 bash[23367]: cluster 2026-03-10T10:42:34.753378+0000 mgr.y (mgr.24422) 1073 : cluster [DBG] pgmap v1510: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:38.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:37 vm04 bash[28289]: cluster 2026-03-10T10:42:36.753796+0000 mgr.y (mgr.24422) 1074 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:38.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:37 vm04 bash[20742]: cluster 2026-03-10T10:42:36.753796+0000 mgr.y (mgr.24422) 1074 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:38.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:37 vm07 bash[23367]: cluster 2026-03-10T10:42:36.753796+0000 mgr.y (mgr.24422) 1074 : cluster [DBG] pgmap v1511: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:39.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:42:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:42:40.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:39 vm04 bash[28289]: cluster 2026-03-10T10:42:38.754246+0000 mgr.y (mgr.24422) 1075 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:40.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:39 vm04 bash[20742]: cluster 2026-03-10T10:42:38.754246+0000 mgr.y (mgr.24422) 1075 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:40.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:39 vm07 bash[23367]: cluster 2026-03-10T10:42:38.754246+0000 mgr.y (mgr.24422) 1075 : cluster [DBG] pgmap v1512: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:41.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:40 vm04 bash[28289]: audit 2026-03-10T10:42:39.516839+0000 mgr.y (mgr.24422) 1076 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:41.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:40 vm04 bash[20742]: audit 2026-03-10T10:42:39.516839+0000 mgr.y (mgr.24422) 1076 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:41.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:40 vm07 bash[23367]: audit 2026-03-10T10:42:39.516839+0000 mgr.y (mgr.24422) 1076 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:42.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:41 vm04 bash[28289]: cluster 2026-03-10T10:42:40.754751+0000 mgr.y (mgr.24422) 1077 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:42.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:41 vm04 bash[20742]: cluster 2026-03-10T10:42:40.754751+0000 mgr.y (mgr.24422) 1077 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:42.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:41 vm07 bash[23367]: cluster 2026-03-10T10:42:40.754751+0000 mgr.y (mgr.24422) 1077 : cluster [DBG] pgmap v1513: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:42:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:42:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:42:44.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:43 vm04 bash[28289]: cluster 2026-03-10T10:42:42.755080+0000 mgr.y (mgr.24422) 1078 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:44.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:43 vm04 bash[28289]: audit 2026-03-10T10:42:43.537540+0000 mon.a (mon.0) 3747 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:42:44.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:43 vm04 bash[20742]: cluster 2026-03-10T10:42:42.755080+0000 mgr.y (mgr.24422) 1078 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:44.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:43 vm04 bash[20742]: audit 2026-03-10T10:42:43.537540+0000 mon.a (mon.0) 3747 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:42:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:43 vm07 bash[23367]: cluster 2026-03-10T10:42:42.755080+0000 mgr.y (mgr.24422) 1078 : cluster [DBG] pgmap v1514: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:44.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:43 vm07 bash[23367]: audit 2026-03-10T10:42:43.537540+0000 mon.a (mon.0) 3747 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:42:46.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:45 vm04 bash[28289]: cluster 2026-03-10T10:42:44.755734+0000 mgr.y (mgr.24422) 1079 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:46.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:45 vm04 bash[20742]: cluster 2026-03-10T10:42:44.755734+0000 mgr.y (mgr.24422) 1079 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:46.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:45 vm07 bash[23367]: cluster 2026-03-10T10:42:44.755734+0000 mgr.y (mgr.24422) 1079 : cluster [DBG] pgmap v1515: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:42:48.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:47 vm04 bash[28289]: cluster 2026-03-10T10:42:46.756075+0000 mgr.y (mgr.24422) 1080 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:47 vm04 bash[20742]: cluster 2026-03-10T10:42:46.756075+0000 mgr.y (mgr.24422) 1080 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:48.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:47 vm07 bash[23367]: cluster 2026-03-10T10:42:46.756075+0000 mgr.y (mgr.24422) 1080 : cluster [DBG] pgmap v1516: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:49.876 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:42:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:42:50.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:49 vm04 bash[28289]: cluster 2026-03-10T10:42:48.756530+0000 mgr.y (mgr.24422) 1081 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:50.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:49 vm04 bash[20742]: cluster 2026-03-10T10:42:48.756530+0000 mgr.y (mgr.24422) 1081 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:50.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:49 vm07 bash[23367]: cluster 2026-03-10T10:42:48.756530+0000 mgr.y (mgr.24422) 1081 : cluster [DBG] pgmap v1517: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:42:51.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:50 vm04 bash[28289]: audit 2026-03-10T10:42:49.519756+0000 mgr.y (mgr.24422) 1082 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:50 vm04 bash[28289]: audit 2026-03-10T10:42:50.199684+0000 mon.a (mon.0) 3748 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:50 vm04 bash[28289]: audit 2026-03-10T10:42:50.518798+0000 mon.a (mon.0) 3749 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:50 vm04 bash[28289]: audit 2026-03-10T10:42:50.519482+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:50 vm04 bash[28289]: audit 2026-03-10T10:42:50.524779+0000 mon.a (mon.0) 3751 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:50 vm04 bash[20742]: audit 2026-03-10T10:42:49.519756+0000 mgr.y (mgr.24422) 1082 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:50 vm04 bash[20742]: audit
2026-03-10T10:42:50.199684+0000 mon.a (mon.0) 3748 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:50 vm04 bash[20742]: audit 2026-03-10T10:42:50.199684+0000 mon.a (mon.0) 3748 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:50 vm04 bash[20742]: audit 2026-03-10T10:42:50.518798+0000 mon.a (mon.0) 3749 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:50 vm04 bash[20742]: audit 2026-03-10T10:42:50.518798+0000 mon.a (mon.0) 3749 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:50 vm04 bash[20742]: audit 2026-03-10T10:42:50.519482+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:50 vm04 bash[20742]: audit 2026-03-10T10:42:50.519482+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:50 vm04 bash[20742]: audit 2026-03-10T10:42:50.524779+0000 mon.a (mon.0) 3751 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:42:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:50 vm04 bash[20742]: audit 2026-03-10T10:42:50.524779+0000 mon.a (mon.0) 3751 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:42:51.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:50 vm07 bash[23367]: audit 2026-03-10T10:42:49.519756+0000 mgr.y (mgr.24422) 1082 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:42:51.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:50 vm07 bash[23367]: audit 2026-03-10T10:42:49.519756+0000 mgr.y (mgr.24422) 1082 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:42:51.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:50 vm07 bash[23367]: audit 2026-03-10T10:42:50.199684+0000 mon.a (mon.0) 3748 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:42:51.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:50 vm07 bash[23367]: audit 2026-03-10T10:42:50.199684+0000 mon.a (mon.0) 3748 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:42:51.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:50 vm07 bash[23367]: audit 2026-03-10T10:42:50.518798+0000 mon.a (mon.0) 3749 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:42:51.267 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:50 vm07 bash[23367]: audit 2026-03-10T10:42:50.518798+0000 mon.a (mon.0) 3749 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:42:51.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:50 vm07 bash[23367]: audit 2026-03-10T10:42:50.519482+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:42:51.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:50 vm07 bash[23367]: audit 2026-03-10T10:42:50.519482+0000 mon.a (mon.0) 3750 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:42:51.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:50 vm07 bash[23367]: audit 2026-03-10T10:42:50.524779+0000 mon.a (mon.0) 3751 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:42:51.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:50 vm07 bash[23367]: audit 2026-03-10T10:42:50.524779+0000 mon.a (mon.0) 3751 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:42:52.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:51 vm04 bash[28289]: cluster 2026-03-10T10:42:50.757041+0000 mgr.y (mgr.24422) 1083 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:52.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:51 vm04 bash[28289]: cluster 2026-03-10T10:42:50.757041+0000 mgr.y (mgr.24422) 1083 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:51 vm04 bash[20742]: cluster 2026-03-10T10:42:50.757041+0000 mgr.y (mgr.24422) 1083 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:51 vm04 bash[20742]: cluster 2026-03-10T10:42:50.757041+0000 mgr.y (mgr.24422) 1083 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:51 vm07 bash[23367]: cluster 2026-03-10T10:42:50.757041+0000 mgr.y (mgr.24422) 1083 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:51 vm07 bash[23367]: cluster 2026-03-10T10:42:50.757041+0000 mgr.y (mgr.24422) 1083 : cluster [DBG] pgmap v1518: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:53.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:42:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:42:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:42:54.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:53 vm04 bash[28289]: cluster 2026-03-10T10:42:52.757293+0000 mgr.y (mgr.24422) 1084 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:42:54.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:53 vm04 bash[28289]: cluster 2026-03-10T10:42:52.757293+0000 mgr.y (mgr.24422) 1084 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:54.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:53 vm04 bash[20742]: cluster 2026-03-10T10:42:52.757293+0000 mgr.y (mgr.24422) 1084 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:54.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:53 vm04 bash[20742]: cluster 2026-03-10T10:42:52.757293+0000 mgr.y (mgr.24422) 1084 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:53 vm07 bash[23367]: cluster 2026-03-10T10:42:52.757293+0000 mgr.y (mgr.24422) 1084 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:53 vm07 bash[23367]: cluster 2026-03-10T10:42:52.757293+0000 mgr.y (mgr.24422) 1084 : cluster [DBG] pgmap v1519: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:56.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:55 vm04 bash[28289]: cluster 2026-03-10T10:42:54.758086+0000 mgr.y (mgr.24422) 1085 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:56.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:55 vm04 bash[28289]: cluster 2026-03-10T10:42:54.758086+0000 mgr.y (mgr.24422) 1085 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:56.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:55 vm04 bash[20742]: cluster 2026-03-10T10:42:54.758086+0000 mgr.y (mgr.24422) 1085 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:56.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:55 vm04 bash[20742]: cluster 2026-03-10T10:42:54.758086+0000 mgr.y (mgr.24422) 1085 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:55 vm07 bash[23367]: cluster 2026-03-10T10:42:54.758086+0000 mgr.y (mgr.24422) 1085 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:55 vm07 bash[23367]: cluster 2026-03-10T10:42:54.758086+0000 mgr.y (mgr.24422) 1085 : cluster [DBG] pgmap v1520: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:42:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:58 vm04 bash[28289]: cluster 2026-03-10T10:42:56.758405+0000 mgr.y (mgr.24422) 1086 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:58 vm04 
bash[28289]: cluster 2026-03-10T10:42:56.758405+0000 mgr.y (mgr.24422) 1086 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:58 vm04 bash[20742]: cluster 2026-03-10T10:42:56.758405+0000 mgr.y (mgr.24422) 1086 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:58 vm04 bash[20742]: cluster 2026-03-10T10:42:56.758405+0000 mgr.y (mgr.24422) 1086 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:58 vm07 bash[23367]: cluster 2026-03-10T10:42:56.758405+0000 mgr.y (mgr.24422) 1086 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:58 vm07 bash[23367]: cluster 2026-03-10T10:42:56.758405+0000 mgr.y (mgr.24422) 1086 : cluster [DBG] pgmap v1521: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:42:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:59 vm04 bash[28289]: audit 2026-03-10T10:42:58.543198+0000 mon.a (mon.0) 3752 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:42:59.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:42:59 vm04 bash[28289]: audit 2026-03-10T10:42:58.543198+0000 mon.a (mon.0) 3752 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:42:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:59 vm04 bash[20742]: audit 2026-03-10T10:42:58.543198+0000 mon.a (mon.0) 3752 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:42:59.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:42:59 vm04 bash[20742]: audit 2026-03-10T10:42:58.543198+0000 mon.a (mon.0) 3752 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:42:59.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:59 vm07 bash[23367]: audit 2026-03-10T10:42:58.543198+0000 mon.a (mon.0) 3752 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:42:59.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:42:59 vm07 bash[23367]: audit 2026-03-10T10:42:58.543198+0000 mon.a (mon.0) 3752 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:00.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:42:59 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:43:00.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:00 vm04 bash[20742]: cluster 2026-03-10T10:42:58.758971+0000 mgr.y (mgr.24422) 1087 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:43:00.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:00 vm04 bash[20742]: cluster 2026-03-10T10:42:58.758971+0000 mgr.y (mgr.24422) 1087 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:00 vm04 bash[28289]: cluster 2026-03-10T10:42:58.758971+0000 mgr.y (mgr.24422) 1087 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:00.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:00 vm04 bash[28289]: cluster 2026-03-10T10:42:58.758971+0000 mgr.y (mgr.24422) 1087 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:00 vm07 bash[23367]: cluster 2026-03-10T10:42:58.758971+0000 mgr.y (mgr.24422) 1087 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:00 vm07 bash[23367]: cluster 2026-03-10T10:42:58.758971+0000 mgr.y (mgr.24422) 1087 : cluster [DBG] pgmap v1522: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:01.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:01 vm04 bash[20742]: audit 2026-03-10T10:42:59.529591+0000 mgr.y (mgr.24422) 1088 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:01.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:01 vm04 bash[20742]: audit 2026-03-10T10:42:59.529591+0000 mgr.y (mgr.24422) 1088 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:01 vm04 bash[28289]: audit 2026-03-10T10:42:59.529591+0000 mgr.y (mgr.24422) 1088 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:01.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:01 vm04 bash[28289]: audit 2026-03-10T10:42:59.529591+0000 mgr.y (mgr.24422) 1088 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:01.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:01 vm07 bash[23367]: audit 2026-03-10T10:42:59.529591+0000 mgr.y (mgr.24422) 1088 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:01.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:01 vm07 bash[23367]: audit 2026-03-10T10:42:59.529591+0000 mgr.y (mgr.24422) 1088 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:02.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:02 vm04 bash[28289]: cluster 2026-03-10T10:43:00.759399+0000 mgr.y (mgr.24422) 1089 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:02.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:02 vm04 bash[28289]: 
cluster 2026-03-10T10:43:00.759399+0000 mgr.y (mgr.24422) 1089 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:02.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:02 vm04 bash[20742]: cluster 2026-03-10T10:43:00.759399+0000 mgr.y (mgr.24422) 1089 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:02.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:02 vm04 bash[20742]: cluster 2026-03-10T10:43:00.759399+0000 mgr.y (mgr.24422) 1089 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:02.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:02 vm07 bash[23367]: cluster 2026-03-10T10:43:00.759399+0000 mgr.y (mgr.24422) 1089 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:02.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:02 vm07 bash[23367]: cluster 2026-03-10T10:43:00.759399+0000 mgr.y (mgr.24422) 1089 : cluster [DBG] pgmap v1523: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:03.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:43:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:43:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:43:04.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:04 vm04 bash[28289]: cluster 2026-03-10T10:43:02.759755+0000 mgr.y (mgr.24422) 1090 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:04.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:04 vm04 bash[28289]: cluster 2026-03-10T10:43:02.759755+0000 mgr.y (mgr.24422) 1090 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:04.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:04 vm04 bash[20742]: cluster 2026-03-10T10:43:02.759755+0000 mgr.y (mgr.24422) 1090 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:04.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:04 vm04 bash[20742]: cluster 2026-03-10T10:43:02.759755+0000 mgr.y (mgr.24422) 1090 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:04.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:04 vm07 bash[23367]: cluster 2026-03-10T10:43:02.759755+0000 mgr.y (mgr.24422) 1090 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:04.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:04 vm07 bash[23367]: cluster 2026-03-10T10:43:02.759755+0000 mgr.y (mgr.24422) 1090 : cluster [DBG] pgmap v1524: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:06 vm07 bash[23367]: cluster 2026-03-10T10:43:04.760461+0000 mgr.y (mgr.24422) 1091 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:06 vm07 bash[23367]: cluster 2026-03-10T10:43:04.760461+0000 mgr.y (mgr.24422) 1091 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:06.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:06 vm04 bash[28289]: cluster 2026-03-10T10:43:04.760461+0000 mgr.y (mgr.24422) 1091 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:06.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:06 vm04 bash[28289]: cluster 2026-03-10T10:43:04.760461+0000 mgr.y (mgr.24422) 1091 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:06 vm04 bash[20742]: cluster 2026-03-10T10:43:04.760461+0000 mgr.y (mgr.24422) 1091 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:06.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:06 vm04 bash[20742]: cluster 2026-03-10T10:43:04.760461+0000 mgr.y (mgr.24422) 1091 : cluster [DBG] pgmap v1525: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:08.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:08 vm04 bash[28289]: cluster 2026-03-10T10:43:06.761224+0000 mgr.y (mgr.24422) 1092 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:08.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:08 vm04 bash[28289]: cluster 2026-03-10T10:43:06.761224+0000 mgr.y (mgr.24422) 1092 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:08.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:08 vm04 bash[20742]: cluster 2026-03-10T10:43:06.761224+0000 mgr.y (mgr.24422) 1092 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:08.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:08 vm04 bash[20742]: cluster 2026-03-10T10:43:06.761224+0000 mgr.y (mgr.24422) 1092 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:08.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:08 vm07 bash[23367]: cluster 2026-03-10T10:43:06.761224+0000 mgr.y (mgr.24422) 1092 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:08.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:08 vm07 bash[23367]: cluster 2026-03-10T10:43:06.761224+0000 mgr.y (mgr.24422) 1092 : cluster [DBG] pgmap v1526: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:10.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:43:09 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:43:10.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:10 vm04 bash[28289]: cluster 2026-03-10T10:43:08.761821+0000 mgr.y (mgr.24422) 1093 : cluster [DBG] pgmap v1527: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:10.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:10 vm04 bash[28289]: cluster 2026-03-10T10:43:08.761821+0000 mgr.y (mgr.24422) 1093 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:10 vm04 bash[20742]: cluster 2026-03-10T10:43:08.761821+0000 mgr.y (mgr.24422) 1093 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:10.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:10 vm04 bash[20742]: cluster 2026-03-10T10:43:08.761821+0000 mgr.y (mgr.24422) 1093 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:10 vm07 bash[23367]: cluster 2026-03-10T10:43:08.761821+0000 mgr.y (mgr.24422) 1093 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:10.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:10 vm07 bash[23367]: cluster 2026-03-10T10:43:08.761821+0000 mgr.y (mgr.24422) 1093 : cluster [DBG] pgmap v1527: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:11.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:11 vm04 bash[28289]: audit 2026-03-10T10:43:09.537560+0000 mgr.y (mgr.24422) 1094 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:11.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:11 vm04 bash[28289]: audit 2026-03-10T10:43:09.537560+0000 mgr.y (mgr.24422) 1094 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:11.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:11 vm04 bash[20742]: audit 2026-03-10T10:43:09.537560+0000 mgr.y (mgr.24422) 1094 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:11.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:11 vm04 bash[20742]: audit 2026-03-10T10:43:09.537560+0000 mgr.y (mgr.24422) 1094 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:11 vm07 bash[23367]: audit 2026-03-10T10:43:09.537560+0000 mgr.y (mgr.24422) 1094 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:11.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:11 vm07 bash[23367]: audit 2026-03-10T10:43:09.537560+0000 mgr.y (mgr.24422) 1094 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:12.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:12 vm04 bash[28289]: cluster 2026-03-10T10:43:10.762303+0000 mgr.y (mgr.24422) 1095 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T10:43:12.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:12 vm04 bash[28289]: cluster 2026-03-10T10:43:10.762303+0000 mgr.y (mgr.24422) 1095 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:12.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:12 vm04 bash[20742]: cluster 2026-03-10T10:43:10.762303+0000 mgr.y (mgr.24422) 1095 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:12.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:12 vm04 bash[20742]: cluster 2026-03-10T10:43:10.762303+0000 mgr.y (mgr.24422) 1095 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:12.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:12 vm07 bash[23367]: cluster 2026-03-10T10:43:10.762303+0000 mgr.y (mgr.24422) 1095 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:12.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:12 vm07 bash[23367]: cluster 2026-03-10T10:43:10.762303+0000 mgr.y (mgr.24422) 1095 : cluster [DBG] pgmap v1528: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:13.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:43:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:43:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:43:14.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:14 vm04 bash[20742]: cluster 2026-03-10T10:43:12.762595+0000 mgr.y (mgr.24422) 1096 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:14.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:14 vm04 bash[20742]: cluster 2026-03-10T10:43:12.762595+0000 mgr.y (mgr.24422) 1096 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:14.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:14 vm04 bash[20742]: audit 2026-03-10T10:43:13.548723+0000 mon.a (mon.0) 3753 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:14.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:14 vm04 bash[20742]: audit 2026-03-10T10:43:13.548723+0000 mon.a (mon.0) 3753 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:14 vm04 bash[28289]: cluster 2026-03-10T10:43:12.762595+0000 mgr.y (mgr.24422) 1096 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:14 vm04 bash[28289]: cluster 2026-03-10T10:43:12.762595+0000 mgr.y (mgr.24422) 1096 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:14 vm04 bash[28289]: audit 2026-03-10T10:43:13.548723+0000 mon.a (mon.0) 3753 : audit 
[DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:14.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:14 vm04 bash[28289]: audit 2026-03-10T10:43:13.548723+0000 mon.a (mon.0) 3753 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:14 vm07 bash[23367]: cluster 2026-03-10T10:43:12.762595+0000 mgr.y (mgr.24422) 1096 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:14 vm07 bash[23367]: cluster 2026-03-10T10:43:12.762595+0000 mgr.y (mgr.24422) 1096 : cluster [DBG] pgmap v1529: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:14 vm07 bash[23367]: audit 2026-03-10T10:43:13.548723+0000 mon.a (mon.0) 3753 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:14.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:14 vm07 bash[23367]: audit 2026-03-10T10:43:13.548723+0000 mon.a (mon.0) 3753 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:16.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:16 vm04 bash[28289]: cluster 2026-03-10T10:43:14.763255+0000 mgr.y (mgr.24422) 1097 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:16.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:16 vm04 bash[28289]: cluster 2026-03-10T10:43:14.763255+0000 mgr.y (mgr.24422) 1097 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:16 vm04 bash[20742]: cluster 2026-03-10T10:43:14.763255+0000 mgr.y (mgr.24422) 1097 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:16.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:16 vm04 bash[20742]: cluster 2026-03-10T10:43:14.763255+0000 mgr.y (mgr.24422) 1097 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:16.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:16 vm07 bash[23367]: cluster 2026-03-10T10:43:14.763255+0000 mgr.y (mgr.24422) 1097 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:16.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:16 vm07 bash[23367]: cluster 2026-03-10T10:43:14.763255+0000 mgr.y (mgr.24422) 1097 : cluster [DBG] pgmap v1530: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:18.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:18 vm04 bash[28289]: cluster 2026-03-10T10:43:16.763613+0000 mgr.y (mgr.24422) 1098 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB 
data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:18.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:18 vm04 bash[28289]: cluster 2026-03-10T10:43:16.763613+0000 mgr.y (mgr.24422) 1098 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:18 vm04 bash[20742]: cluster 2026-03-10T10:43:16.763613+0000 mgr.y (mgr.24422) 1098 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:18.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:18 vm04 bash[20742]: cluster 2026-03-10T10:43:16.763613+0000 mgr.y (mgr.24422) 1098 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:18 vm07 bash[23367]: cluster 2026-03-10T10:43:16.763613+0000 mgr.y (mgr.24422) 1098 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:18.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:18 vm07 bash[23367]: cluster 2026-03-10T10:43:16.763613+0000 mgr.y (mgr.24422) 1098 : cluster [DBG] pgmap v1531: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:20.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:43:19 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:43:20.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:20 vm04 bash[28289]: cluster 2026-03-10T10:43:18.764194+0000 mgr.y (mgr.24422) 1099 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:20.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:20 vm04 bash[28289]: cluster 2026-03-10T10:43:18.764194+0000 mgr.y (mgr.24422) 1099 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:20 vm04 bash[20742]: cluster 2026-03-10T10:43:18.764194+0000 mgr.y (mgr.24422) 1099 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:20.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:20 vm04 bash[20742]: cluster 2026-03-10T10:43:18.764194+0000 mgr.y (mgr.24422) 1099 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:20 vm07 bash[23367]: cluster 2026-03-10T10:43:18.764194+0000 mgr.y (mgr.24422) 1099 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:20.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:20 vm07 bash[23367]: cluster 2026-03-10T10:43:18.764194+0000 mgr.y (mgr.24422) 1099 : cluster [DBG] pgmap v1532: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:21.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:21 vm04 bash[20742]: audit 2026-03-10T10:43:19.547417+0000 mgr.y (mgr.24422) 1100 : audit [DBG] 
from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:21.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:21 vm04 bash[20742]: audit 2026-03-10T10:43:19.547417+0000 mgr.y (mgr.24422) 1100 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:21 vm04 bash[28289]: audit 2026-03-10T10:43:19.547417+0000 mgr.y (mgr.24422) 1100 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:21.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:21 vm04 bash[28289]: audit 2026-03-10T10:43:19.547417+0000 mgr.y (mgr.24422) 1100 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:21 vm07 bash[23367]: audit 2026-03-10T10:43:19.547417+0000 mgr.y (mgr.24422) 1100 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:21.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:21 vm07 bash[23367]: audit 2026-03-10T10:43:19.547417+0000 mgr.y (mgr.24422) 1100 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:43:22.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:22 vm04 bash[20742]: cluster 2026-03-10T10:43:20.764699+0000 mgr.y (mgr.24422) 1101 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:22.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:22 vm04 bash[20742]: cluster 2026-03-10T10:43:20.764699+0000 mgr.y (mgr.24422) 1101 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:22 vm04 bash[28289]: cluster 2026-03-10T10:43:20.764699+0000 mgr.y (mgr.24422) 1101 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:22.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:22 vm04 bash[28289]: cluster 2026-03-10T10:43:20.764699+0000 mgr.y (mgr.24422) 1101 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:22 vm07 bash[23367]: cluster 2026-03-10T10:43:20.764699+0000 mgr.y (mgr.24422) 1101 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:22.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:22 vm07 bash[23367]: cluster 2026-03-10T10:43:20.764699+0000 mgr.y (mgr.24422) 1101 : cluster [DBG] pgmap v1533: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:23.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:43:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:43:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:43:24.702 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:24 vm04 bash[20742]: cluster 2026-03-10T10:43:22.765000+0000 mgr.y (mgr.24422) 1102 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:24.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:24 vm04 bash[20742]: cluster 2026-03-10T10:43:22.765000+0000 mgr.y (mgr.24422) 1102 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:24.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:24 vm04 bash[28289]: cluster 2026-03-10T10:43:22.765000+0000 mgr.y (mgr.24422) 1102 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:24.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:24 vm04 bash[28289]: cluster 2026-03-10T10:43:22.765000+0000 mgr.y (mgr.24422) 1102 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:24 vm07 bash[23367]: cluster 2026-03-10T10:43:22.765000+0000 mgr.y (mgr.24422) 1102 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:24.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:24 vm07 bash[23367]: cluster 2026-03-10T10:43:22.765000+0000 mgr.y (mgr.24422) 1102 : cluster [DBG] pgmap v1534: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:26.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:26 vm04 bash[20742]: cluster 2026-03-10T10:43:24.765538+0000 mgr.y (mgr.24422) 1103 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:26.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:26 vm04 bash[20742]: cluster 2026-03-10T10:43:24.765538+0000 mgr.y (mgr.24422) 1103 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:26 vm04 bash[28289]: cluster 2026-03-10T10:43:24.765538+0000 mgr.y (mgr.24422) 1103 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:26.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:26 vm04 bash[28289]: cluster 2026-03-10T10:43:24.765538+0000 mgr.y (mgr.24422) 1103 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:26 vm07 bash[23367]: cluster 2026-03-10T10:43:24.765538+0000 mgr.y (mgr.24422) 1103 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:26.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:26 vm07 bash[23367]: cluster 2026-03-10T10:43:24.765538+0000 mgr.y (mgr.24422) 1103 : cluster [DBG] pgmap v1535: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:28.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:28 vm04 bash[20742]: cluster 
2026-03-10T10:43:26.765799+0000 mgr.y (mgr.24422) 1104 : cluster [DBG] pgmap v1536: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
[duplicate journal relays collapsed: each cluster/audit record in this window was echoed twice by each of the mon.a (vm04), mon.b (vm07), and mon.c (vm04) journal streams; only the first relay of each record is kept below]
2026-03-10T10:43:29.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:29 vm04 bash[20742]: audit 2026-03-10T10:43:28.554743+0000 mon.a (mon.0) 3754 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:43:29.767 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:43:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:43:30.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:30 vm04 bash[20742]: cluster 2026-03-10T10:43:28.766307+0000 mgr.y (mgr.24422) 1105 : cluster [DBG] pgmap v1537: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:43:31.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:31 vm04 bash[20742]: audit 2026-03-10T10:43:29.558021+0000 mgr.y (mgr.24422) 1106 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:43:32.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:32 vm04 bash[20742]: cluster 2026-03-10T10:43:30.766814+0000 mgr.y (mgr.24422) 1107 : cluster [DBG] pgmap v1538: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:43:33.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:43:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:43:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:43:34.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:34 vm04 bash[20742]: cluster 2026-03-10T10:43:32.767139+0000 mgr.y (mgr.24422) 1108 : cluster [DBG] pgmap v1539: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:43:36.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:36 vm07 bash[23367]: cluster 2026-03-10T10:43:34.767705+0000 mgr.y (mgr.24422) 1109 : cluster [DBG] pgmap v1540: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:43:38.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:38 vm07 bash[23367]: cluster 2026-03-10T10:43:36.768057+0000 mgr.y (mgr.24422) 1110 : cluster [DBG] pgmap v1541: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:43:40.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:43:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:43:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:40 vm07 bash[23367]: cluster 2026-03-10T10:43:38.768615+0000 mgr.y (mgr.24422) 1111 : cluster [DBG] pgmap v1542: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:43:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:41 vm07 bash[23367]: audit 2026-03-10T10:43:39.558961+0000 mgr.y (mgr.24422) 1112 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:43:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:42 vm07 bash[23367]: cluster 2026-03-10T10:43:40.769089+0000 mgr.y (mgr.24422) 1113 : cluster [DBG] pgmap v1543: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:43:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:43:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:43:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:43:43.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:43 vm04 bash[20742]: cluster 2026-03-10T10:43:42.769409+0000 mgr.y (mgr.24422) 1114 : cluster [DBG] pgmap v1544: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:43:44.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:44 vm04 bash[20742]: audit 2026-03-10T10:43:43.560183+0000 mon.a (mon.0) 3755 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:43:45.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:45 vm04 bash[20742]: cluster 2026-03-10T10:43:44.770050+0000 mgr.y (mgr.24422) 1115 : cluster [DBG] pgmap v1545: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:43:48.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:47 vm04 bash[28289]: cluster 2026-03-10T10:43:46.770398+0000 mgr.y (mgr.24422) 1116 : cluster [DBG] pgmap v1546: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:43:49.864 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:43:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:43:50.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:49 vm04 bash[20742]: cluster 2026-03-10T10:43:48.770965+0000 mgr.y (mgr.24422) 1117 : cluster [DBG] pgmap v1547: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:43:51.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:50 vm04 bash[20742]: audit 2026-03-10T10:43:49.569655+0000 mgr.y (mgr.24422) 1118 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:43:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:50 vm04 bash[20742]: audit 2026-03-10T10:43:50.564563+0000 mon.a (mon.0) 3756 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:43:52.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:51 vm04 bash[20742]: cluster 2026-03-10T10:43:50.771449+0000 mgr.y (mgr.24422) 1119 : cluster [DBG] pgmap v1548: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:43:52.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:51 vm04 bash[20742]: audit 2026-03-10T10:43:50.878260+0000 mon.a (mon.0) 3757 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:43:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:51 vm04 bash[20742]: audit 2026-03-10T10:43:50.878738+0000 mon.a (mon.0) 3758 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:43:52.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:51 vm04 bash[20742]: audit 2026-03-10T10:43:50.882792+0000 mon.a (mon.0) 3759 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:43:53.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:43:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:43:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:43:54.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:53 vm04 bash[28289]: cluster 2026-03-10T10:43:52.771736+0000 mgr.y (mgr.24422) 1120 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:43:54.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:53 vm04 bash[28289]: cluster 2026-03-10T10:43:52.771736+0000 mgr.y (mgr.24422) 1120 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:53 vm04 bash[20742]: cluster 2026-03-10T10:43:52.771736+0000 mgr.y (mgr.24422) 1120 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:53 vm04 bash[20742]: cluster 2026-03-10T10:43:52.771736+0000 mgr.y (mgr.24422) 1120 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:53 vm07 bash[23367]: cluster 2026-03-10T10:43:52.771736+0000 mgr.y (mgr.24422) 1120 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:53 vm07 bash[23367]: cluster 2026-03-10T10:43:52.771736+0000 mgr.y (mgr.24422) 1120 : cluster [DBG] pgmap v1549: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:56.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:55 vm04 bash[28289]: cluster 2026-03-10T10:43:54.772344+0000 mgr.y (mgr.24422) 1121 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:56.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:55 vm04 bash[28289]: cluster 2026-03-10T10:43:54.772344+0000 mgr.y (mgr.24422) 1121 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:55 vm04 bash[20742]: cluster 2026-03-10T10:43:54.772344+0000 mgr.y (mgr.24422) 1121 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:55 vm04 bash[20742]: cluster 2026-03-10T10:43:54.772344+0000 mgr.y (mgr.24422) 1121 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:55 vm07 bash[23367]: cluster 2026-03-10T10:43:54.772344+0000 mgr.y (mgr.24422) 1121 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:55 vm07 bash[23367]: cluster 2026-03-10T10:43:54.772344+0000 mgr.y (mgr.24422) 1121 : cluster [DBG] pgmap v1550: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:43:58.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:57 vm04 bash[28289]: cluster 2026-03-10T10:43:56.772617+0000 mgr.y (mgr.24422) 1122 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:58.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:57 vm04 
bash[28289]: cluster 2026-03-10T10:43:56.772617+0000 mgr.y (mgr.24422) 1122 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:57 vm04 bash[20742]: cluster 2026-03-10T10:43:56.772617+0000 mgr.y (mgr.24422) 1122 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:57 vm04 bash[20742]: cluster 2026-03-10T10:43:56.772617+0000 mgr.y (mgr.24422) 1122 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:57 vm07 bash[23367]: cluster 2026-03-10T10:43:56.772617+0000 mgr.y (mgr.24422) 1122 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:57 vm07 bash[23367]: cluster 2026-03-10T10:43:56.772617+0000 mgr.y (mgr.24422) 1122 : cluster [DBG] pgmap v1551: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:43:59.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:58 vm04 bash[28289]: audit 2026-03-10T10:43:58.565810+0000 mon.a (mon.0) 3760 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:59.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:58 vm04 bash[28289]: audit 2026-03-10T10:43:58.565810+0000 mon.a (mon.0) 3760 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:58 vm04 bash[20742]: audit 2026-03-10T10:43:58.565810+0000 mon.a (mon.0) 3760 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:59.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:58 vm04 bash[20742]: audit 2026-03-10T10:43:58.565810+0000 mon.a (mon.0) 3760 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:58 vm07 bash[23367]: audit 2026-03-10T10:43:58.565810+0000 mon.a (mon.0) 3760 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:59.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:58 vm07 bash[23367]: audit 2026-03-10T10:43:58.565810+0000 mon.a (mon.0) 3760 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:43:59.944 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:43:59 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:44:00.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:59 vm04 bash[28289]: cluster 2026-03-10T10:43:58.773179+0000 mgr.y (mgr.24422) 1123 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:44:00.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:43:59 vm04 bash[28289]: cluster 2026-03-10T10:43:58.773179+0000 mgr.y (mgr.24422) 1123 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:00.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:59 vm04 bash[20742]: cluster 2026-03-10T10:43:58.773179+0000 mgr.y (mgr.24422) 1123 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:00.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:43:59 vm04 bash[20742]: cluster 2026-03-10T10:43:58.773179+0000 mgr.y (mgr.24422) 1123 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:59 vm07 bash[23367]: cluster 2026-03-10T10:43:58.773179+0000 mgr.y (mgr.24422) 1123 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:00.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:43:59 vm07 bash[23367]: cluster 2026-03-10T10:43:58.773179+0000 mgr.y (mgr.24422) 1123 : cluster [DBG] pgmap v1552: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:01.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:00 vm04 bash[28289]: audit 2026-03-10T10:43:59.580136+0000 mgr.y (mgr.24422) 1124 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:01.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:00 vm04 bash[28289]: audit 2026-03-10T10:43:59.580136+0000 mgr.y (mgr.24422) 1124 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:01.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:00 vm04 bash[20742]: audit 2026-03-10T10:43:59.580136+0000 mgr.y (mgr.24422) 1124 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:01.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:00 vm04 bash[20742]: audit 2026-03-10T10:43:59.580136+0000 mgr.y (mgr.24422) 1124 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:00 vm07 bash[23367]: audit 2026-03-10T10:43:59.580136+0000 mgr.y (mgr.24422) 1124 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:00 vm07 bash[23367]: audit 2026-03-10T10:43:59.580136+0000 mgr.y (mgr.24422) 1124 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:01 vm07 bash[23367]: cluster 2026-03-10T10:44:00.773609+0000 mgr.y (mgr.24422) 1125 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:01 vm07 bash[23367]: 
cluster 2026-03-10T10:44:00.773609+0000 mgr.y (mgr.24422) 1125 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:02.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:01 vm04 bash[28289]: cluster 2026-03-10T10:44:00.773609+0000 mgr.y (mgr.24422) 1125 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:02.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:01 vm04 bash[28289]: cluster 2026-03-10T10:44:00.773609+0000 mgr.y (mgr.24422) 1125 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:02.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:01 vm04 bash[20742]: cluster 2026-03-10T10:44:00.773609+0000 mgr.y (mgr.24422) 1125 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:02.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:01 vm04 bash[20742]: cluster 2026-03-10T10:44:00.773609+0000 mgr.y (mgr.24422) 1125 : cluster [DBG] pgmap v1553: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:03.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:44:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:44:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:44:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:03 vm07 bash[23367]: cluster 2026-03-10T10:44:02.773858+0000 mgr.y (mgr.24422) 1126 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:03 vm07 bash[23367]: cluster 2026-03-10T10:44:02.773858+0000 mgr.y (mgr.24422) 1126 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:04.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:03 vm04 bash[28289]: cluster 2026-03-10T10:44:02.773858+0000 mgr.y (mgr.24422) 1126 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:04.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:03 vm04 bash[28289]: cluster 2026-03-10T10:44:02.773858+0000 mgr.y (mgr.24422) 1126 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:04.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:03 vm04 bash[20742]: cluster 2026-03-10T10:44:02.773858+0000 mgr.y (mgr.24422) 1126 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:04.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:03 vm04 bash[20742]: cluster 2026-03-10T10:44:02.773858+0000 mgr.y (mgr.24422) 1126 : cluster [DBG] pgmap v1554: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:05 vm07 bash[23367]: cluster 2026-03-10T10:44:04.774444+0000 mgr.y (mgr.24422) 1127 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:05 vm07 bash[23367]: cluster 2026-03-10T10:44:04.774444+0000 mgr.y (mgr.24422) 1127 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:05 vm04 bash[28289]: cluster 2026-03-10T10:44:04.774444+0000 mgr.y (mgr.24422) 1127 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:05 vm04 bash[28289]: cluster 2026-03-10T10:44:04.774444+0000 mgr.y (mgr.24422) 1127 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:06.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:05 vm04 bash[20742]: cluster 2026-03-10T10:44:04.774444+0000 mgr.y (mgr.24422) 1127 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:05 vm04 bash[20742]: cluster 2026-03-10T10:44:04.774444+0000 mgr.y (mgr.24422) 1127 : cluster [DBG] pgmap v1555: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:08.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:07 vm07 bash[23367]: cluster 2026-03-10T10:44:06.774758+0000 mgr.y (mgr.24422) 1128 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:08.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:07 vm07 bash[23367]: cluster 2026-03-10T10:44:06.774758+0000 mgr.y (mgr.24422) 1128 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:08.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:07 vm04 bash[28289]: cluster 2026-03-10T10:44:06.774758+0000 mgr.y (mgr.24422) 1128 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:08.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:07 vm04 bash[28289]: cluster 2026-03-10T10:44:06.774758+0000 mgr.y (mgr.24422) 1128 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:08.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:07 vm04 bash[20742]: cluster 2026-03-10T10:44:06.774758+0000 mgr.y (mgr.24422) 1128 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:08.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:07 vm04 bash[20742]: cluster 2026-03-10T10:44:06.774758+0000 mgr.y (mgr.24422) 1128 : cluster [DBG] pgmap v1556: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:09.984 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:44:09 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:44:10.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:09 vm07 bash[23367]: cluster 2026-03-10T10:44:08.775326+0000 mgr.y (mgr.24422) 1129 : cluster [DBG] pgmap v1557: 228 pgs: 228 
active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:10.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:09 vm07 bash[23367]: cluster 2026-03-10T10:44:08.775326+0000 mgr.y (mgr.24422) 1129 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:09 vm04 bash[28289]: cluster 2026-03-10T10:44:08.775326+0000 mgr.y (mgr.24422) 1129 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:09 vm04 bash[28289]: cluster 2026-03-10T10:44:08.775326+0000 mgr.y (mgr.24422) 1129 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:09 vm04 bash[20742]: cluster 2026-03-10T10:44:08.775326+0000 mgr.y (mgr.24422) 1129 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:10.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:09 vm04 bash[20742]: cluster 2026-03-10T10:44:08.775326+0000 mgr.y (mgr.24422) 1129 : cluster [DBG] pgmap v1557: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:11.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:10 vm07 bash[23367]: audit 2026-03-10T10:44:09.591027+0000 mgr.y (mgr.24422) 1130 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:11.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:10 vm07 bash[23367]: audit 2026-03-10T10:44:09.591027+0000 mgr.y (mgr.24422) 1130 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:11.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:10 vm04 bash[28289]: audit 2026-03-10T10:44:09.591027+0000 mgr.y (mgr.24422) 1130 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:11.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:10 vm04 bash[28289]: audit 2026-03-10T10:44:09.591027+0000 mgr.y (mgr.24422) 1130 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:11.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:10 vm04 bash[20742]: audit 2026-03-10T10:44:09.591027+0000 mgr.y (mgr.24422) 1130 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:11.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:10 vm04 bash[20742]: audit 2026-03-10T10:44:09.591027+0000 mgr.y (mgr.24422) 1130 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:44:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:11 vm07 bash[23367]: cluster 2026-03-10T10:44:10.775911+0000 mgr.y (mgr.24422) 1131 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T10:44:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:11 vm07 bash[23367]: cluster 2026-03-10T10:44:10.775911+0000 mgr.y (mgr.24422) 1131 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:12.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:11 vm04 bash[28289]: cluster 2026-03-10T10:44:10.775911+0000 mgr.y (mgr.24422) 1131 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:12.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:11 vm04 bash[28289]: cluster 2026-03-10T10:44:10.775911+0000 mgr.y (mgr.24422) 1131 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:11 vm04 bash[20742]: cluster 2026-03-10T10:44:10.775911+0000 mgr.y (mgr.24422) 1131 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:12.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:11 vm04 bash[20742]: cluster 2026-03-10T10:44:10.775911+0000 mgr.y (mgr.24422) 1131 : cluster [DBG] pgmap v1558: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:44:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:44:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:44:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:13 vm07 bash[23367]: cluster 2026-03-10T10:44:12.776289+0000 mgr.y (mgr.24422) 1132 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:13 vm07 bash[23367]: cluster 2026-03-10T10:44:12.776289+0000 mgr.y (mgr.24422) 1132 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:13 vm07 bash[23367]: audit 2026-03-10T10:44:13.571625+0000 mon.a (mon.0) 3761 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:44:14.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:13 vm07 bash[23367]: audit 2026-03-10T10:44:13.571625+0000 mon.a (mon.0) 3761 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:44:14.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:13 vm04 bash[28289]: cluster 2026-03-10T10:44:12.776289+0000 mgr.y (mgr.24422) 1132 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:14.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:13 vm04 bash[28289]: cluster 2026-03-10T10:44:12.776289+0000 mgr.y (mgr.24422) 1132 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:14.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:13 vm04 bash[28289]: audit 2026-03-10T10:44:13.571625+0000 mon.a (mon.0) 3761 : audit 
[DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:44:14.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:13 vm04 bash[28289]: audit 2026-03-10T10:44:13.571625+0000 mon.a (mon.0) 3761 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:44:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:13 vm04 bash[20742]: cluster 2026-03-10T10:44:12.776289+0000 mgr.y (mgr.24422) 1132 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:13 vm04 bash[20742]: cluster 2026-03-10T10:44:12.776289+0000 mgr.y (mgr.24422) 1132 : cluster [DBG] pgmap v1559: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:13 vm04 bash[20742]: audit 2026-03-10T10:44:13.571625+0000 mon.a (mon.0) 3761 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:44:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:13 vm04 bash[20742]: audit 2026-03-10T10:44:13.571625+0000 mon.a (mon.0) 3761 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:44:16.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:15 vm04 bash[28289]: cluster 2026-03-10T10:44:14.777045+0000 mgr.y (mgr.24422) 1133 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:16.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:15 vm04 bash[28289]: cluster 2026-03-10T10:44:14.777045+0000 mgr.y (mgr.24422) 1133 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:15 vm04 bash[20742]: cluster 2026-03-10T10:44:14.777045+0000 mgr.y (mgr.24422) 1133 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:16.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:15 vm04 bash[20742]: cluster 2026-03-10T10:44:14.777045+0000 mgr.y (mgr.24422) 1133 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:15 vm07 bash[23367]: cluster 2026-03-10T10:44:14.777045+0000 mgr.y (mgr.24422) 1133 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:15 vm07 bash[23367]: cluster 2026-03-10T10:44:14.777045+0000 mgr.y (mgr.24422) 1133 : cluster [DBG] pgmap v1560: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:44:18.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:17 vm04 bash[28289]: cluster 2026-03-10T10:44:16.777333+0000 mgr.y (mgr.24422) 1134 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB 
2026-03-10T10:44:18.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:17 vm04 bash[20742]: cluster 2026-03-10T10:44:16.777333+0000 mgr.y (mgr.24422) 1134 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:17 vm07 bash[23367]: cluster 2026-03-10T10:44:16.777333+0000 mgr.y (mgr.24422) 1134 : cluster [DBG] pgmap v1561: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:19.868 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:44:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:44:20.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:19 vm04 bash[28289]: cluster 2026-03-10T10:44:18.777755+0000 mgr.y (mgr.24422) 1135 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:20.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:19 vm04 bash[20742]: cluster 2026-03-10T10:44:18.777755+0000 mgr.y (mgr.24422) 1135 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:20.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:19 vm07 bash[23367]: cluster 2026-03-10T10:44:18.777755+0000 mgr.y (mgr.24422) 1135 : cluster [DBG] pgmap v1562: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:21.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:20 vm04 bash[28289]: audit 2026-03-10T10:44:19.593581+0000 mgr.y (mgr.24422) 1136 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:21.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:20 vm04 bash[20742]: audit 2026-03-10T10:44:19.593581+0000 mgr.y (mgr.24422) 1136 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:21.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:20 vm07 bash[23367]: audit 2026-03-10T10:44:19.593581+0000 mgr.y (mgr.24422) 1136 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:22.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:21 vm04 bash[28289]: cluster 2026-03-10T10:44:20.778191+0000 mgr.y (mgr.24422) 1137 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:22.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:21 vm04 bash[20742]: cluster 2026-03-10T10:44:20.778191+0000 mgr.y (mgr.24422) 1137 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:22.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:21 vm07 bash[23367]: cluster 2026-03-10T10:44:20.778191+0000 mgr.y (mgr.24422) 1137 : cluster [DBG] pgmap v1563: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:23.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:44:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:44:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:44:24.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:23 vm04 bash[28289]: cluster 2026-03-10T10:44:22.778501+0000 mgr.y (mgr.24422) 1138 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:24.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:23 vm04 bash[20742]: cluster 2026-03-10T10:44:22.778501+0000 mgr.y (mgr.24422) 1138 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:23 vm07 bash[23367]: cluster 2026-03-10T10:44:22.778501+0000 mgr.y (mgr.24422) 1138 : cluster [DBG] pgmap v1564: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:26.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:25 vm04 bash[28289]: cluster 2026-03-10T10:44:24.779203+0000 mgr.y (mgr.24422) 1139 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:26.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:25 vm04 bash[20742]: cluster 2026-03-10T10:44:24.779203+0000 mgr.y (mgr.24422) 1139 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:25 vm07 bash[23367]: cluster 2026-03-10T10:44:24.779203+0000 mgr.y (mgr.24422) 1139 : cluster [DBG] pgmap v1565: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:28.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:28 vm04 bash[28289]: cluster 2026-03-10T10:44:26.779491+0000 mgr.y (mgr.24422) 1140 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:27 vm04 bash[20742]: cluster 2026-03-10T10:44:26.779491+0000 mgr.y (mgr.24422) 1140 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:28 vm07 bash[23367]: cluster 2026-03-10T10:44:26.779491+0000 mgr.y (mgr.24422) 1140 : cluster [DBG] pgmap v1566: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:28 vm07 bash[23367]: audit 2026-03-10T10:44:28.577856+0000 mon.a (mon.0) 3762 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:44:29.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:28 vm04 bash[28289]: audit 2026-03-10T10:44:28.577856+0000 mon.a (mon.0) 3762 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:44:29.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:28 vm04 bash[20742]: audit 2026-03-10T10:44:28.577856+0000 mon.a (mon.0) 3762 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:44:29.995 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:44:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:44:30.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:29 vm07 bash[23367]: cluster 2026-03-10T10:44:28.780288+0000 mgr.y (mgr.24422) 1141 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:30.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:29 vm04 bash[28289]: cluster 2026-03-10T10:44:28.780288+0000 mgr.y (mgr.24422) 1141 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:30.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:29 vm04 bash[20742]: cluster 2026-03-10T10:44:28.780288+0000 mgr.y (mgr.24422) 1141 : cluster [DBG] pgmap v1567: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:31.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:31 vm07 bash[23367]: audit 2026-03-10T10:44:29.604197+0000 mgr.y (mgr.24422) 1142 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:31.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:31 vm04 bash[28289]: audit 2026-03-10T10:44:29.604197+0000 mgr.y (mgr.24422) 1142 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:31.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:31 vm04 bash[20742]: audit 2026-03-10T10:44:29.604197+0000 mgr.y (mgr.24422) 1142 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:32 vm07 bash[23367]: cluster 2026-03-10T10:44:30.780777+0000 mgr.y (mgr.24422) 1143 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:32.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:32 vm04 bash[28289]: cluster 2026-03-10T10:44:30.780777+0000 mgr.y (mgr.24422) 1143 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:32.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:32 vm04 bash[20742]: cluster 2026-03-10T10:44:30.780777+0000 mgr.y (mgr.24422) 1143 : cluster [DBG] pgmap v1568: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:33.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:44:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:44:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:44:34.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:34 vm04 bash[28289]: cluster 2026-03-10T10:44:32.781067+0000 mgr.y (mgr.24422) 1144 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:34.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:34 vm04 bash[20742]: cluster 2026-03-10T10:44:32.781067+0000 mgr.y (mgr.24422) 1144 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:34.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:34 vm07 bash[23367]: cluster 2026-03-10T10:44:32.781067+0000 mgr.y (mgr.24422) 1144 : cluster [DBG] pgmap v1569: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:36.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:36 vm04 bash[28289]: cluster 2026-03-10T10:44:34.781724+0000 mgr.y (mgr.24422) 1145 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:36.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:36 vm04 bash[20742]: cluster 2026-03-10T10:44:34.781724+0000 mgr.y (mgr.24422) 1145 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:36.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:36 vm07 bash[23367]: cluster 2026-03-10T10:44:34.781724+0000 mgr.y (mgr.24422) 1145 : cluster [DBG] pgmap v1570: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
2026-03-10T10:44:38.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:38 vm04 bash[28289]: cluster 2026-03-10T10:44:36.782102+0000 mgr.y (mgr.24422) 1146 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:38.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:38 vm04 bash[20742]: cluster 2026-03-10T10:44:36.782102+0000 mgr.y (mgr.24422) 1146 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:38.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:38 vm07 bash[23367]: cluster 2026-03-10T10:44:36.782102+0000 mgr.y (mgr.24422) 1146 : cluster [DBG] pgmap v1571: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:40.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:44:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:44:40.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:40 vm04 bash[28289]: cluster 2026-03-10T10:44:38.782560+0000 mgr.y (mgr.24422) 1147 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:40.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:40 vm04 bash[20742]: cluster 2026-03-10T10:44:38.782560+0000 mgr.y (mgr.24422) 1147 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:40.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:40 vm07 bash[23367]: cluster 2026-03-10T10:44:38.782560+0000 mgr.y (mgr.24422) 1147 : cluster [DBG] pgmap v1572: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:41.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:41 vm04 bash[28289]: audit 2026-03-10T10:44:39.614793+0000 mgr.y (mgr.24422) 1148 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:41.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:41 vm04 bash[20742]: audit 2026-03-10T10:44:39.614793+0000 mgr.y (mgr.24422) 1148 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:41.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:41 vm07 bash[23367]: audit 2026-03-10T10:44:39.614793+0000 mgr.y (mgr.24422) 1148 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:42.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:42 vm04 bash[28289]: cluster 2026-03-10T10:44:40.783208+0000 mgr.y (mgr.24422) 1149 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:42.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:42 vm04 bash[20742]: cluster 2026-03-10T10:44:40.783208+0000 mgr.y (mgr.24422) 1149 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:42.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:42 vm07 bash[23367]: cluster 2026-03-10T10:44:40.783208+0000 mgr.y (mgr.24422) 1149 : cluster [DBG] pgmap v1573: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:44:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:44:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:44:44.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:44 vm04 bash[28289]: cluster 2026-03-10T10:44:42.783541+0000 mgr.y (mgr.24422) 1150 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:44.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:44 vm04 bash[28289]: audit 2026-03-10T10:44:43.586271+0000 mon.a (mon.0) 3763 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:44:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:44 vm04 bash[20742]: cluster 2026-03-10T10:44:42.783541+0000 mgr.y (mgr.24422) 1150 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:44.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:44 vm04 bash[20742]: audit 2026-03-10T10:44:43.586271+0000 mon.a (mon.0) 3763 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:44:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:44 vm07 bash[23367]: cluster 2026-03-10T10:44:42.783541+0000 mgr.y (mgr.24422) 1150 : cluster [DBG] pgmap v1574: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:44 vm07 bash[23367]: audit 2026-03-10T10:44:43.586271+0000 mon.a (mon.0) 3763 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:44:46.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:46 vm04 bash[28289]: cluster 2026-03-10T10:44:44.784122+0000 mgr.y (mgr.24422) 1151 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:46.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:46 vm04 bash[20742]: cluster 2026-03-10T10:44:44.784122+0000 mgr.y (mgr.24422) 1151 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:46.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:46 vm07 bash[23367]: cluster 2026-03-10T10:44:44.784122+0000 mgr.y (mgr.24422) 1151 : cluster [DBG] pgmap v1575: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:48.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:48 vm04 bash[28289]: cluster 2026-03-10T10:44:46.784520+0000 mgr.y (mgr.24422) 1152 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:44:48.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:48 vm04 bash[20742]: cluster 2026-03-10T10:44:46.784520+0000 mgr.y (mgr.24422) 1152 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:44:48.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:48 vm07 bash[23367]: cluster 2026-03-10T10:44:46.784520+0000 mgr.y (mgr.24422) 1152 : cluster [DBG] pgmap v1576: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:44:50.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:44:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:44:50.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:50 vm04 bash[28289]: cluster 2026-03-10T10:44:48.785130+0000 mgr.y (mgr.24422) 1153 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:50.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:50 vm04 bash[20742]: cluster 2026-03-10T10:44:48.785130+0000 mgr.y (mgr.24422) 1153 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:50.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:50 vm07 bash[23367]: cluster 2026-03-10T10:44:48.785130+0000 mgr.y (mgr.24422) 1153 : cluster [DBG] pgmap v1577: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:51 vm04 bash[28289]: audit 2026-03-10T10:44:49.625444+0000 mgr.y (mgr.24422) 1154 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:51 vm04 bash[28289]: audit 2026-03-10T10:44:50.921228+0000 mon.a (mon.0) 3764 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:44:51.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:51 vm04 bash[20742]: audit 2026-03-10T10:44:49.625444+0000 mgr.y (mgr.24422) 1154 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:51.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:51 vm04 bash[20742]: audit 2026-03-10T10:44:50.921228+0000 mon.a (mon.0) 3764 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:44:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:51 vm07 bash[23367]: audit 2026-03-10T10:44:49.625444+0000 mgr.y (mgr.24422) 1154 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:44:51.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:51 vm07 bash[23367]: audit 2026-03-10T10:44:50.921228+0000 mon.a (mon.0) 3764 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:44:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:52 vm07 bash[23367]: cluster 2026-03-10T10:44:50.785394+0000 mgr.y (mgr.24422) 1155 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:52 vm07 bash[23367]: audit 2026-03-10T10:44:51.491624+0000 mon.a (mon.0) 3765 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:52 vm07 bash[23367]: audit 2026-03-10T10:44:51.497721+0000 mon.a (mon.0) 3766 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:52 vm07 bash[23367]: audit 2026-03-10T10:44:51.515977+0000 mon.a (mon.0) 3767 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:52 vm07 bash[23367]: audit 2026-03-10T10:44:51.521649+0000 mon.a (mon.0) 3768 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:52 vm04 bash[28289]: cluster 2026-03-10T10:44:50.785394+0000 mgr.y (mgr.24422) 1155 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:52.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:52 vm04 bash[28289]: audit 2026-03-10T10:44:51.491624+0000 mon.a (mon.0) 3765 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:52 vm04 bash[28289]: audit 2026-03-10T10:44:51.497721+0000 mon.a (mon.0) 3766 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:52 vm04 bash[28289]: audit 2026-03-10T10:44:51.515977+0000 mon.a (mon.0) 3767 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:52 vm04 bash[28289]: audit 2026-03-10T10:44:51.521649+0000 mon.a (mon.0) 3768 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:52 vm04 bash[20742]: cluster 2026-03-10T10:44:50.785394+0000 mgr.y (mgr.24422) 1155 : cluster [DBG] pgmap v1578: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:52 vm04 bash[20742]: audit 2026-03-10T10:44:51.491624+0000 mon.a (mon.0) 3765 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:52 vm04 bash[20742]: audit 2026-03-10T10:44:51.497721+0000 mon.a (mon.0) 3766 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:52 vm04 bash[20742]: audit 2026-03-10T10:44:51.515977+0000 mon.a (mon.0) 3767 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:52.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:52 vm04 bash[20742]: audit 2026-03-10T10:44:51.521649+0000 mon.a (mon.0) 3768 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:53.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:44:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:44:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:44:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:54 vm07 bash[23367]: cluster 2026-03-10T10:44:52.785638+0000 mgr.y (mgr.24422) 1156 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:44:54.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:54 vm04 bash[28289]: cluster 2026-03-10T10:44:52.785638+0000 mgr.y (mgr.24422) 1156 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:44:54.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:54 vm04 bash[20742]: cluster 2026-03-10T10:44:52.785638+0000 mgr.y (mgr.24422) 1156 : cluster [DBG] pgmap v1579: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 767 B/s rd, 0 op/s
2026-03-10T10:44:55.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:55 vm07 bash[23367]: cluster 2026-03-10T10:44:54.786293+0000 mgr.y (mgr.24422) 1157 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:55.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:55 vm04 bash[28289]: cluster 2026-03-10T10:44:54.786293+0000 mgr.y (mgr.24422) 1157 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:55.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:55 vm04 bash[20742]: cluster 2026-03-10T10:44:54.786293+0000 mgr.y (mgr.24422) 1157 : cluster [DBG] pgmap v1580: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:44:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:58 vm04 bash[28289]: cluster 2026-03-10T10:44:56.786627+0000 mgr.y (mgr.24422) 1158 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:58.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:58 vm04 bash[28289]: audit 2026-03-10T10:44:57.080950+0000 mon.a (mon.0) 3769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:58 vm04 bash[28289]: audit 2026-03-10T10:44:57.085442+0000 mon.a (mon.0) 3770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:58 vm04 bash[28289]: audit 2026-03-10T10:44:57.086175+0000 mon.a (mon.0) 3771 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:44:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:58 vm04 bash[28289]: audit 2026-03-10T10:44:57.086870+0000 mon.a (mon.0) 3772 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:44:58.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:58 vm04 bash[28289]: audit 2026-03-10T10:44:57.090614+0000 mon.a (mon.0) 3773 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:58 vm04 bash[20742]: cluster 2026-03-10T10:44:56.786627+0000 mgr.y (mgr.24422) 1158 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:44:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:58 vm04 bash[20742]: audit 2026-03-10T10:44:57.080950+0000 mon.a (mon.0) 3769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:58 vm04 bash[20742]: audit 2026-03-10T10:44:57.085442+0000 mon.a (mon.0) 3770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:58 vm04 bash[20742]: audit 2026-03-10T10:44:57.086175+0000 mon.a (mon.0) 3771 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:44:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:58 vm04 bash[20742]: audit 2026-03-10T10:44:57.086870+0000 mon.a (mon.0) 3772 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:44:58.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:58 vm04 bash[20742]: audit 2026-03-10T10:44:57.090614+0000 mon.a (mon.0) 3773 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: cluster 2026-03-10T10:44:56.786627+0000 mgr.y (mgr.24422) 1158 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: cluster 2026-03-10T10:44:56.786627+0000 mgr.y (mgr.24422) 1158 : cluster [DBG] pgmap v1581: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: audit 2026-03-10T10:44:57.080950+0000 mon.a (mon.0) 3769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: audit 2026-03-10T10:44:57.080950+0000 mon.a (mon.0) 3769 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: audit 2026-03-10T10:44:57.085442+0000 mon.a (mon.0) 3770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: audit 2026-03-10T10:44:57.085442+0000 mon.a (mon.0) 3770 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: audit 2026-03-10T10:44:57.086175+0000 mon.a (mon.0) 3771 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: audit 2026-03-10T10:44:57.086175+0000 mon.a (mon.0) 3771 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: audit 2026-03-10T10:44:57.086870+0000 mon.a (mon.0) 3772 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: audit 2026-03-10T10:44:57.086870+0000 mon.a (mon.0) 3772 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: audit 2026-03-10T10:44:57.090614+0000 mon.a (mon.0) 3773 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:44:58.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:58 vm07 bash[23367]: audit 2026-03-10T10:44:57.090614+0000 mon.a (mon.0) 3773 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:44:59.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:59 vm04 bash[28289]: audit 2026-03-10T10:44:58.592023+0000 mon.a (mon.0) 3774 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:44:59.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:44:59 vm04 bash[28289]: audit 2026-03-10T10:44:58.592023+0000 mon.a (mon.0) 3774 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:44:59.452 
2026-03-10T10:44:59.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:44:59 vm04 bash[20742]: audit 2026-03-10T10:44:58.592023+0000 mon.a (mon.0) 3774 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:44:59.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:44:59 vm07 bash[23367]: audit 2026-03-10T10:44:58.592023+0000 mon.a (mon.0) 3774 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:45:00.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:44:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:45:00.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:00 vm04 bash[28289]: cluster 2026-03-10T10:44:58.787320+0000 mgr.y (mgr.24422) 1159 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:00.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:00 vm04 bash[20742]: cluster 2026-03-10T10:44:58.787320+0000 mgr.y (mgr.24422) 1159 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:00.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:00 vm07 bash[23367]: cluster 2026-03-10T10:44:58.787320+0000 mgr.y (mgr.24422) 1159 : cluster [DBG] pgmap v1582: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:01.442 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:01 vm07 bash[23367]: audit 2026-03-10T10:44:59.636123+0000 mgr.y (mgr.24422) 1160 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:01.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:01 vm04 bash[28289]: audit 2026-03-10T10:44:59.636123+0000 mgr.y (mgr.24422) 1160 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:01.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:01 vm04 bash[20742]: audit 2026-03-10T10:44:59.636123+0000 mgr.y (mgr.24422) 1160 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:02.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:02 vm04 bash[28289]: cluster 2026-03-10T10:45:00.787658+0000 mgr.y (mgr.24422) 1161 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:02.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:02 vm04 bash[20742]: cluster 2026-03-10T10:45:00.787658+0000 mgr.y (mgr.24422) 1161 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:02.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:02 vm07 bash[23367]: cluster 2026-03-10T10:45:00.787658+0000 mgr.y (mgr.24422) 1161 : cluster [DBG] pgmap v1583: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:03.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:45:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:45:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:45:04.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:04 vm04 bash[28289]: cluster 2026-03-10T10:45:02.788021+0000 mgr.y (mgr.24422) 1162 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:04.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:04 vm04 bash[20742]: cluster 2026-03-10T10:45:02.788021+0000 mgr.y (mgr.24422) 1162 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:04.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:04 vm07 bash[23367]: cluster 2026-03-10T10:45:02.788021+0000 mgr.y (mgr.24422) 1162 : cluster [DBG] pgmap v1584: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:06.017 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:45:05 vm07 bash[50688]: logger=cleanup t=2026-03-10T10:45:05.587902848Z level=info msg="Completed cleanup jobs" duration=1.229683ms
2026-03-10T10:45:06.017 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:45:05 vm07 bash[50688]: logger=plugins.update.checker t=2026-03-10T10:45:05.734895444Z level=info msg="Update check succeeded" duration=55.518658ms
2026-03-10T10:45:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:06 vm04 bash[28289]: cluster 2026-03-10T10:45:04.788739+0000 mgr.y (mgr.24422) 1163 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:06.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:06 vm04 bash[20742]: cluster 2026-03-10T10:45:04.788739+0000 mgr.y (mgr.24422) 1163 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:06 vm07 bash[23367]: cluster 2026-03-10T10:45:04.788739+0000 mgr.y (mgr.24422) 1163 : cluster [DBG] pgmap v1585: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:08.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:08 vm04 bash[28289]: cluster 2026-03-10T10:45:06.789065+0000 mgr.y (mgr.24422) 1164 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:08.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:08 vm04 bash[20742]: cluster 2026-03-10T10:45:06.789065+0000 mgr.y (mgr.24422) 1164 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:08 vm07 bash[23367]: cluster 2026-03-10T10:45:06.789065+0000 mgr.y (mgr.24422) 1164 : cluster [DBG] pgmap v1586: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:10.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:45:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:45:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:10 vm04 bash[28289]: cluster 2026-03-10T10:45:08.789806+0000 mgr.y (mgr.24422) 1165 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:10.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:10 vm04 bash[20742]: cluster 2026-03-10T10:45:08.789806+0000 mgr.y (mgr.24422) 1165 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:10.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:10 vm07 bash[23367]: cluster 2026-03-10T10:45:08.789806+0000 mgr.y (mgr.24422) 1165 : cluster [DBG] pgmap v1587: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:11.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:11 vm04 bash[28289]: audit 2026-03-10T10:45:09.646786+0000 mgr.y (mgr.24422) 1166 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:11.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:11 vm04 bash[20742]: audit 2026-03-10T10:45:09.646786+0000 mgr.y (mgr.24422) 1166 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:11 vm07 bash[23367]: audit 2026-03-10T10:45:09.646786+0000 mgr.y (mgr.24422) 1166 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:12.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:12 vm04 bash[28289]: cluster 2026-03-10T10:45:10.790075+0000 mgr.y (mgr.24422) 1167 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:12.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:12 vm04 bash[20742]: cluster 2026-03-10T10:45:10.790075+0000 mgr.y (mgr.24422) 1167 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:12 vm07 bash[23367]: cluster 2026-03-10T10:45:10.790075+0000 mgr.y (mgr.24422) 1167 : cluster [DBG] pgmap v1588: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:45:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:45:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:45:14.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:14 vm04 bash[28289]: cluster 2026-03-10T10:45:12.790346+0000 mgr.y (mgr.24422) 1168 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:14.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:14 vm04 bash[28289]: audit 2026-03-10T10:45:13.598183+0000 mon.a (mon.0) 3775 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:45:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:14 vm04 bash[20742]: cluster 2026-03-10T10:45:12.790346+0000 mgr.y (mgr.24422) 1168 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:14 vm04 bash[20742]: audit 2026-03-10T10:45:13.598183+0000 mon.a (mon.0) 3775 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:45:14.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:14 vm07 bash[23367]: cluster 2026-03-10T10:45:12.790346+0000 mgr.y (mgr.24422) 1168 : cluster [DBG] pgmap v1589: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:14.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:14 vm07 bash[23367]: audit 2026-03-10T10:45:13.598183+0000 mon.a (mon.0) 3775 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:45:16.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:16 vm04 bash[28289]: cluster 2026-03-10T10:45:14.790970+0000 mgr.y (mgr.24422) 1169 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:16.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:16 vm04 bash[20742]: cluster 2026-03-10T10:45:14.790970+0000 mgr.y (mgr.24422) 1169 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:16.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:16 vm07 bash[23367]: cluster 2026-03-10T10:45:14.790970+0000 mgr.y (mgr.24422) 1169 : cluster [DBG] pgmap v1590: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:18.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:18 vm04 bash[28289]: cluster 2026-03-10T10:45:16.791234+0000 mgr.y (mgr.24422) 1170 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:18.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:18 vm04 bash[20742]: cluster 2026-03-10T10:45:16.791234+0000 mgr.y (mgr.24422) 1170 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:18.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:18 vm07 bash[23367]: cluster 2026-03-10T10:45:16.791234+0000 mgr.y (mgr.24422) 1170 : cluster [DBG] pgmap v1591: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:20.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:45:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:45:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:20 vm04 bash[28289]: cluster 2026-03-10T10:45:18.791865+0000 mgr.y (mgr.24422) 1171 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:20 vm04 bash[20742]: cluster 2026-03-10T10:45:18.791865+0000 mgr.y (mgr.24422) 1171 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:20.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:20 vm07 bash[23367]: cluster 2026-03-10T10:45:18.791865+0000 mgr.y (mgr.24422) 1171 : cluster [DBG] pgmap v1592: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:21.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:21 vm04 bash[28289]: audit 2026-03-10T10:45:19.653434+0000 mgr.y (mgr.24422) 1172 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:21.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:21 vm04 bash[20742]: audit 2026-03-10T10:45:19.653434+0000 mgr.y (mgr.24422) 1172 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:21.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:21 vm07 bash[23367]: audit 2026-03-10T10:45:19.653434+0000 mgr.y (mgr.24422) 1172 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:22.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:22 vm04 bash[28289]: cluster 2026-03-10T10:45:20.792150+0000 mgr.y (mgr.24422) 1173 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:22.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:22 vm04 bash[20742]: cluster 2026-03-10T10:45:20.792150+0000 mgr.y (mgr.24422) 1173 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:22.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:22 vm07 bash[23367]: cluster 2026-03-10T10:45:20.792150+0000 mgr.y (mgr.24422) 1173 : cluster [DBG] pgmap v1593: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:23.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:45:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:45:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:45:24.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:24 vm04 bash[28289]: cluster 2026-03-10T10:45:22.792481+0000 mgr.y (mgr.24422) 1174 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:24 vm04 bash[20742]: cluster 2026-03-10T10:45:22.792481+0000 mgr.y (mgr.24422) 1174 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:24.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:24 vm07 bash[23367]: cluster 2026-03-10T10:45:22.792481+0000 mgr.y (mgr.24422) 1174 : cluster [DBG] pgmap v1594: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:26.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:26 vm04 bash[28289]: cluster 2026-03-10T10:45:24.793232+0000 mgr.y (mgr.24422) 1175 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:26.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:26 vm04 bash[20742]: cluster 2026-03-10T10:45:24.793232+0000 mgr.y (mgr.24422) 1175 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:26.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:26 vm07 bash[23367]: cluster 2026-03-10T10:45:24.793232+0000 mgr.y (mgr.24422) 1175 : cluster [DBG] pgmap v1595: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:28 vm07 bash[23367]: cluster 2026-03-10T10:45:26.793521+0000 mgr.y (mgr.24422) 1176 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:28.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:28 vm04 bash[28289]: cluster 2026-03-10T10:45:26.793521+0000 mgr.y (mgr.24422) 1176 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:28.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:28 vm04 bash[20742]: cluster 2026-03-10T10:45:26.793521+0000 mgr.y (mgr.24422) 1176 : cluster [DBG] pgmap v1596: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:29.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:29 vm07 bash[23367]: audit 2026-03-10T10:45:28.604836+0000 mon.a (mon.0) 3776 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:45:29.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:29 vm04 bash[28289]: audit 2026-03-10T10:45:28.604836+0000 mon.a (mon.0) 3776 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:45:29.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:29 vm04 bash[20742]: audit 2026-03-10T10:45:28.604836+0000 mon.a (mon.0) 3776 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:45:30.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:45:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:45:30.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:30 vm04 bash[28289]: cluster 2026-03-10T10:45:28.794095+0000 mgr.y (mgr.24422) 1177 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:30.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:30 vm04 bash[20742]: cluster 2026-03-10T10:45:28.794095+0000 mgr.y (mgr.24422) 1177 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:30.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:30 vm07 bash[23367]: cluster 2026-03-10T10:45:28.794095+0000 mgr.y (mgr.24422) 1177 : cluster [DBG] pgmap v1597: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:31.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:31 vm04 bash[28289]: audit 2026-03-10T10:45:29.663978+0000 mgr.y (mgr.24422) 1178 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:31.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:31 vm04 bash[20742]: audit 2026-03-10T10:45:29.663978+0000 mgr.y (mgr.24422) 1178 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:31.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:31 vm07 bash[23367]: audit 2026-03-10T10:45:29.663978+0000 mgr.y (mgr.24422) 1178 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:32.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:32 vm04 bash[28289]: cluster 2026-03-10T10:45:30.794358+0000 mgr.y (mgr.24422) 1179 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:32.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:32 vm04 bash[20742]: cluster 2026-03-10T10:45:30.794358+0000 mgr.y (mgr.24422) 1179 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:32.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:32 vm07 bash[23367]: cluster 2026-03-10T10:45:30.794358+0000 mgr.y (mgr.24422) 1179 : cluster [DBG] pgmap v1598: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:33.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:45:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:45:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:45:34.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:34 vm04 bash[28289]: cluster 2026-03-10T10:45:32.794678+0000 mgr.y (mgr.24422) 1180 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:34.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:34 vm04 bash[20742]: cluster 2026-03-10T10:45:32.794678+0000 mgr.y (mgr.24422) 1180 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:34 vm07 bash[23367]: cluster 2026-03-10T10:45:32.794678+0000 mgr.y (mgr.24422) 1180 : cluster [DBG] pgmap v1599: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:36.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:36 vm04 bash[28289]: cluster 2026-03-10T10:45:34.795426+0000 mgr.y (mgr.24422) 1181 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:36.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:36 vm04 bash[20742]: cluster 2026-03-10T10:45:34.795426+0000 mgr.y (mgr.24422) 1181 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:36.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:36 vm07 bash[23367]: cluster 2026-03-10T10:45:34.795426+0000 mgr.y (mgr.24422) 1181 : cluster [DBG] pgmap v1600: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:38.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:38 vm04 bash[28289]: cluster 2026-03-10T10:45:36.795750+0000 mgr.y (mgr.24422) 1182 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:38.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:38 vm04 bash[20742]: cluster 2026-03-10T10:45:36.795750+0000 mgr.y (mgr.24422) 1182 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:38.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:38 vm07 bash[23367]: cluster 2026-03-10T10:45:36.795750+0000 mgr.y (mgr.24422) 1182 : cluster [DBG] pgmap v1601: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:40.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:45:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:45:40.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:40 vm04 bash[28289]: cluster 2026-03-10T10:45:38.796366+0000 mgr.y (mgr.24422) 1183 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:40 vm04 bash[20742]: cluster 2026-03-10T10:45:38.796366+0000 mgr.y (mgr.24422) 1183 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:40 vm07 bash[23367]: cluster 2026-03-10T10:45:38.796366+0000 mgr.y (mgr.24422) 1183 : cluster [DBG] pgmap v1602: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:41.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:41 vm04 bash[28289]: audit 2026-03-10T10:45:39.669524+0000 mgr.y (mgr.24422) 1184 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:41 vm04 bash[20742]: audit 2026-03-10T10:45:39.669524+0000 mgr.y (mgr.24422) 1184 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:41 vm07 bash[23367]: audit 2026-03-10T10:45:39.669524+0000 mgr.y (mgr.24422) 1184 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:42.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:42 vm04 bash[28289]: cluster 2026-03-10T10:45:40.796725+0000 mgr.y (mgr.24422) 1185 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:42.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:42 vm04 bash[20742]: cluster 2026-03-10T10:45:40.796725+0000 mgr.y (mgr.24422) 1185 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:42 vm07 bash[23367]: cluster 2026-03-10T10:45:40.796725+0000 mgr.y (mgr.24422) 1185 : cluster [DBG] pgmap v1603: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:45:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:45:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:45:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:44 vm04 bash[28289]: cluster 2026-03-10T10:45:42.796979+0000 mgr.y (mgr.24422) 1186 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:44.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:44 vm04 bash[28289]: audit 2026-03-10T10:45:43.610428+0000 mon.a (mon.0) 3777 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:45:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:44 vm04 bash[20742]: cluster 2026-03-10T10:45:42.796979+0000 mgr.y (mgr.24422) 1186 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:44 vm04 bash[20742]: audit 2026-03-10T10:45:43.610428+0000 mon.a (mon.0) 3777 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:45:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:44 vm07 bash[23367]: cluster 2026-03-10T10:45:42.796979+0000 mgr.y (mgr.24422) 1186 : cluster [DBG] pgmap v1604: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:44 vm07 bash[23367]: audit 2026-03-10T10:45:43.610428+0000 mon.a (mon.0) 3777 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:45:46.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:46 vm04 bash[28289]: cluster 2026-03-10T10:45:44.797595+0000 mgr.y (mgr.24422) 1187 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:46.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:46 vm04 bash[20742]: cluster 2026-03-10T10:45:44.797595+0000 mgr.y (mgr.24422) 1187 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:46 vm07 bash[23367]: cluster 2026-03-10T10:45:44.797595+0000 mgr.y (mgr.24422) 1187 : cluster [DBG] pgmap v1605: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:48.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:48 vm04 bash[28289]: cluster 2026-03-10T10:45:46.797906+0000 mgr.y (mgr.24422) 1188 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:48.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:48 vm04 bash[20742]: cluster 2026-03-10T10:45:46.797906+0000 mgr.y (mgr.24422) 1188 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:48.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:48 vm07 bash[23367]: cluster 2026-03-10T10:45:46.797906+0000 mgr.y (mgr.24422) 1188 : cluster [DBG] pgmap v1606: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:45:50.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:45:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:45:50.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:50 vm04 bash[28289]: cluster 2026-03-10T10:45:48.798673+0000 mgr.y (mgr.24422) 1189 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:50.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:50 vm04 bash[20742]: cluster 2026-03-10T10:45:48.798673+0000 mgr.y (mgr.24422) 1189 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:50.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:50 vm07 bash[23367]: cluster 2026-03-10T10:45:48.798673+0000 mgr.y (mgr.24422) 1189 : cluster [DBG] pgmap v1607: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:45:51.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:51 vm04 bash[28289]: audit 2026-03-10T10:45:49.680201+0000 mgr.y (mgr.24422) 1190 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:51.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:51 vm04 bash[20742]: audit 2026-03-10T10:45:49.680201+0000 mgr.y (mgr.24422) 1190 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:45:51.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:51 vm07 bash[23367]: audit 2026-03-10T10:45:49.680201+0000 mgr.y (mgr.24422) 1190 : audit [DBG] from='client.24406 -'
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:45:51.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:51 vm07 bash[23367]: audit 2026-03-10T10:45:49.680201+0000 mgr.y (mgr.24422) 1190 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:45:52.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:52 vm04 bash[20742]: cluster 2026-03-10T10:45:50.799084+0000 mgr.y (mgr.24422) 1191 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:52.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:52 vm04 bash[20742]: cluster 2026-03-10T10:45:50.799084+0000 mgr.y (mgr.24422) 1191 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:52 vm04 bash[28289]: cluster 2026-03-10T10:45:50.799084+0000 mgr.y (mgr.24422) 1191 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:52.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:52 vm04 bash[28289]: cluster 2026-03-10T10:45:50.799084+0000 mgr.y (mgr.24422) 1191 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:52 vm07 bash[23367]: cluster 2026-03-10T10:45:50.799084+0000 mgr.y (mgr.24422) 1191 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:52 vm07 bash[23367]: cluster 2026-03-10T10:45:50.799084+0000 mgr.y (mgr.24422) 1191 : cluster [DBG] pgmap v1608: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:53.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:45:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:45:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:45:54.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:54 vm04 bash[28289]: cluster 2026-03-10T10:45:52.799461+0000 mgr.y (mgr.24422) 1192 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:54.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:54 vm04 bash[28289]: cluster 2026-03-10T10:45:52.799461+0000 mgr.y (mgr.24422) 1192 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:54 vm04 bash[20742]: cluster 2026-03-10T10:45:52.799461+0000 mgr.y (mgr.24422) 1192 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:54.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:54 vm04 bash[20742]: cluster 2026-03-10T10:45:52.799461+0000 mgr.y (mgr.24422) 1192 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:54 vm07 
bash[23367]: cluster 2026-03-10T10:45:52.799461+0000 mgr.y (mgr.24422) 1192 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:54 vm07 bash[23367]: cluster 2026-03-10T10:45:52.799461+0000 mgr.y (mgr.24422) 1192 : cluster [DBG] pgmap v1609: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:56.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:56 vm04 bash[20742]: cluster 2026-03-10T10:45:54.800185+0000 mgr.y (mgr.24422) 1193 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:45:56.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:56 vm04 bash[20742]: cluster 2026-03-10T10:45:54.800185+0000 mgr.y (mgr.24422) 1193 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:45:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:56 vm04 bash[28289]: cluster 2026-03-10T10:45:54.800185+0000 mgr.y (mgr.24422) 1193 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:45:56.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:56 vm04 bash[28289]: cluster 2026-03-10T10:45:54.800185+0000 mgr.y (mgr.24422) 1193 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:45:56.767 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:45:56 vm07 bash[50688]: logger=infra.usagestats t=2026-03-10T10:45:56.608296365Z level=info msg="Usage stats are ready to report" 2026-03-10T10:45:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:56 vm07 bash[23367]: cluster 2026-03-10T10:45:54.800185+0000 mgr.y (mgr.24422) 1193 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:45:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:56 vm07 bash[23367]: cluster 2026-03-10T10:45:54.800185+0000 mgr.y (mgr.24422) 1193 : cluster [DBG] pgmap v1610: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:45:57.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:57 vm04 bash[28289]: audit 2026-03-10T10:45:57.128613+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:45:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:57 vm04 bash[28289]: audit 2026-03-10T10:45:57.128613+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:45:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:57 vm04 bash[20742]: audit 2026-03-10T10:45:57.128613+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:45:57.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:57 vm04 bash[20742]: audit 2026-03-10T10:45:57.128613+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config 
dump", "format": "json"}]: dispatch 2026-03-10T10:45:57.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:57 vm07 bash[23367]: audit 2026-03-10T10:45:57.128613+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:45:57.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:45:57 vm07 bash[23367]: audit 2026-03-10T10:45:57.128613+0000 mon.a (mon.0) 3778 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:45:58.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: cluster 2026-03-10T10:45:56.800529+0000 mgr.y (mgr.24422) 1194 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: cluster 2026-03-10T10:45:56.800529+0000 mgr.y (mgr.24422) 1194 : cluster [DBG] pgmap v1611: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:45:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: audit 2026-03-10T10:45:57.478131+0000 mon.a (mon.0) 3779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:45:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: audit 2026-03-10T10:45:57.478131+0000 mon.a (mon.0) 3779 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:45:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: audit 2026-03-10T10:45:57.478740+0000 mon.a (mon.0) 3780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:45:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: audit 2026-03-10T10:45:57.478740+0000 mon.a (mon.0) 3780 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-10T10:45:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: audit 2026-03-10T10:45:57.479445+0000 mon.a (mon.0) 3781 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:45:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: audit 2026-03-10T10:45:57.479445+0000 mon.a (mon.0) 3781 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:45:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: audit 2026-03-10T10:45:57.479929+0000 mon.a (mon.0) 3782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:45:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: audit 2026-03-10T10:45:57.479929+0000 mon.a (mon.0) 3782 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' 
2026-03-10T10:45:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:45:58 vm04 bash[20742]: audit 2026-03-10T10:45:57.485595+0000 mon.a (mon.0) 3783 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:45:59.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:45:59 vm04 bash[28289]: audit 2026-03-10T10:45:58.616324+0000 mon.a (mon.0) 3784 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:46:00.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:45:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:46:00.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:00 vm04 bash[28289]: cluster 2026-03-10T10:45:58.801393+0000 mgr.y (mgr.24422) 1195 : cluster [DBG] pgmap v1612: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:46:00.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:00 vm04 bash[28289]: audit 2026-03-10T10:45:59.689633+0000 mgr.y (mgr.24422) 1196 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:46:01.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:01 vm04 bash[28289]: cluster 2026-03-10T10:46:00.801762+0000 mgr.y (mgr.24422) 1197 : cluster [DBG] pgmap v1613: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:46:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:46:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:46:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:46:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:03 vm04 bash[28289]: cluster 2026-03-10T10:46:02.802122+0000 mgr.y (mgr.24422) 1198 : cluster [DBG] pgmap v1614: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:46:06.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:05 vm04 bash[28289]: cluster 2026-03-10T10:46:04.802780+0000 mgr.y (mgr.24422) 1199 : cluster [DBG] pgmap v1615: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:46:08.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:07 vm04 bash[28289]: cluster 2026-03-10T10:46:06.803129+0000 mgr.y (mgr.24422) 1200 : cluster [DBG] pgmap v1616: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:46:10.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:46:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:46:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:09 vm07 bash[23367]: cluster 2026-03-10T10:46:08.803813+0000 mgr.y (mgr.24422) 1201 : cluster [DBG] pgmap v1617: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:46:11.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:10 vm04 bash[28289]: audit 2026-03-10T10:46:09.700426+0000 mgr.y (mgr.24422) 1202 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:46:12.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:11 vm04 bash[28289]: cluster 2026-03-10T10:46:10.804268+0000 mgr.y (mgr.24422) 1203 : cluster [DBG] pgmap v1618: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:46:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:46:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:46:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:46:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:13 vm04 bash[28289]: cluster 2026-03-10T10:46:12.804657+0000 mgr.y (mgr.24422) 1204 : cluster [DBG] pgmap v1619: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:46:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:13 vm04 bash[28289]: audit 2026-03-10T10:46:13.622788+0000 mon.a (mon.0) 3785 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:46:16.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:15 vm04 bash[28289]: cluster 2026-03-10T10:46:14.805493+0000 mgr.y (mgr.24422) 1205 : cluster [DBG] pgmap v1620: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:46:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:18 vm07 bash[23367]: cluster 2026-03-10T10:46:16.805855+0000 mgr.y (mgr.24422) 1206 : cluster [DBG] pgmap v1621: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:46:20.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:46:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:46:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:20 vm04 bash[20742]: cluster 2026-03-10T10:46:18.806533+0000 mgr.y (mgr.24422) 1207 : cluster [DBG] pgmap v1622: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:46:21.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:21 vm04 bash[28289]: audit 2026-03-10T10:46:19.708687+0000 mgr.y (mgr.24422) 1208 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:46:22.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:22 vm04 bash[28289]: cluster 2026-03-10T10:46:20.806922+0000 mgr.y (mgr.24422) 1209 : cluster [DBG] pgmap v1623: 228 pgs: 228 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:46:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:23 vm04 bash[28289]: cluster 2026-03-10T10:46:22.054747+0000 mon.a (mon.0) 3786 : cluster [DBG] osdmap e736: 8 total, 8 up, 8 in
2026-03-10T10:46:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:46:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:46:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:46:24.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:24 vm04 bash[28289]: cluster 2026-03-10T10:46:22.807261+0000 mgr.y (mgr.24422) 1210 : cluster [DBG] pgmap v1625: 164 pgs: 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:46:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:24 vm04 bash[28289]: cluster 2026-03-10T10:46:23.047051+0000 mon.a (mon.0) 3787 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:46:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:24 vm04 bash[28289]: cluster 2026-03-10T10:46:23.063705+0000 mon.a (mon.0) 3788 : cluster [DBG] osdmap e737: 8 total, 8 up, 8 in
2026-03-10T10:46:25.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:25 vm04 bash[28289]: cluster 2026-03-10T10:46:24.070893+0000 mon.a (mon.0) 3789 : cluster [DBG] osdmap e738: 8 total, 8 up, 8 in
2026-03-10T10:46:26.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:26 vm04 bash[28289]: cluster 2026-03-10T10:46:24.807549+0000 mgr.y (mgr.24422) 1211 : cluster [DBG] pgmap v1628: 228 pgs: 64 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:46:26.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:26 vm04 bash[28289]: cluster 2026-03-10T10:46:25.066889+0000 mon.a (mon.0) 3790 : cluster [DBG] osdmap e739: 8 total, 8 up, 8 in
2026-03-10T10:46:27.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:27 vm04 bash[28289]: cluster 2026-03-10T10:46:26.087173+0000 mon.a (mon.0) 3791 : cluster [DBG] osdmap e740: 8 total, 8 up, 8 in
2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: Running main() from gmock_main.cc
2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [==========] Running 2 tests from 1 test suite.
2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [----------] Global test environment set-up. 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotify 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: handle_notify cookie 94165722437744 notify_id 3156800962562 notifier_gid 14778 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotify (1801298 ms) 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [ RUN ] NeoRadosWatchNotify.WatchNotifyTimeout 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: Trying... 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: handle_notify cookie 94165735322416 notify_id 3169685864450 notifier_gid 45239 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: Waiting for 3.000000000s 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: Timed out. 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: Flushing... 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: Flushed... 2026-03-10T10:46:28.186 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [ OK ] NeoRadosWatchNotify.WatchNotifyTimeout (6129 ms) 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [----------] 2 tests from NeoRadosWatchNotify (1807427 ms total) 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [----------] Global test environment tear-down 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [==========] 2 tests from 1 test suite ran. (1807427 ms total) 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stdout: watch_notify: [ PASSED ] 2 tests. 
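The NeoRadosWatchNotify suite passes, but the WatchNotify case alone accounts for roughly 30 minutes (1801298 ms) of the 1807427 ms suite total, dwarfing the 6-second timeout case. To chase a slow case like that in isolation, one option is to rerun just that gtest case against the same cluster. A minimal sketch, assuming the test binary is installed as ceph_test_neorados_watch_notify (the name is inferred from the workunit's "watch_notify:" output prefix, not stated in this log) and that a usable client keyring is in the default location:

    # rerun only the slow case; --gtest_filter and --gtest_print_time
    # are standard gtest flags
    ceph_test_neorados_watch_notify \
        --gtest_filter='NeoRadosWatchNotify.WatchNotify' \
        --gtest_print_time=1

    # afterwards, confirm no watcher was leaked on the test object
    # (<pool> and <object> are placeholders, not taken from this log)
    rados -p <pool> listwatchers <object>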
2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59546 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59546 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59923 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59923 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60165 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60165 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60029 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60029 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60209 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60209 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59812 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59812 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59501 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59501 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60244 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60244 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59727 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59727 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59249 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59249 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59355 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59355 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59683 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59683 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59389 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59389 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59739 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59739 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 
INFO:tasks.workunit.client.0.vm04.stderr:+ pid=59761 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 59761 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60133 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60133 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ for t in "${!pids[@]}" 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=60269 2026-03-10T10:46:28.187 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 60269 2026-03-10T10:46:28.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:28 vm04 bash[28289]: cluster 2026-03-10T10:46:26.807864+0000 mgr.y (mgr.24422) 1212 : cluster [DBG] pgmap v1631: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:28.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:28 vm04 bash[28289]: cluster 2026-03-10T10:46:26.807864+0000 mgr.y (mgr.24422) 1212 : cluster [DBG] pgmap v1631: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:28.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:28 vm04 bash[28289]: cluster 2026-03-10T10:46:27.116440+0000 mon.a (mon.0) 3792 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-10T10:46:28.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:28 vm04 bash[28289]: cluster 2026-03-10T10:46:27.116440+0000 mon.a (mon.0) 3792 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-10T10:46:28.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:28 vm04 bash[20742]: cluster 2026-03-10T10:46:26.807864+0000 mgr.y (mgr.24422) 1212 : cluster [DBG] pgmap v1631: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:28.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:28 vm04 bash[20742]: cluster 2026-03-10T10:46:26.807864+0000 mgr.y (mgr.24422) 1212 : cluster [DBG] pgmap v1631: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:28 vm04 bash[20742]: cluster 2026-03-10T10:46:27.116440+0000 mon.a (mon.0) 3792 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-10T10:46:28.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:28 vm04 bash[20742]: cluster 2026-03-10T10:46:27.116440+0000 mon.a (mon.0) 3792 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-10T10:46:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:28 vm07 bash[23367]: cluster 2026-03-10T10:46:26.807864+0000 mgr.y (mgr.24422) 1212 : cluster [DBG] pgmap v1631: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:28 vm07 bash[23367]: cluster 2026-03-10T10:46:26.807864+0000 mgr.y (mgr.24422) 1212 : cluster [DBG] pgmap v1631: 228 pgs: 32 unknown, 32 creating+peering, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:28 vm07 bash[23367]: cluster 2026-03-10T10:46:27.116440+0000 mon.a (mon.0) 3792 : cluster [DBG] 
osdmap e741: 8 total, 8 up, 8 in 2026-03-10T10:46:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:28 vm07 bash[23367]: cluster 2026-03-10T10:46:27.116440+0000 mon.a (mon.0) 3792 : cluster [DBG] osdmap e741: 8 total, 8 up, 8 in 2026-03-10T10:46:29.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:29 vm04 bash[28289]: cluster 2026-03-10T10:46:28.164718+0000 mon.a (mon.0) 3793 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-10T10:46:29.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:29 vm04 bash[28289]: cluster 2026-03-10T10:46:28.164718+0000 mon.a (mon.0) 3793 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-10T10:46:29.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:29 vm04 bash[28289]: audit 2026-03-10T10:46:28.629128+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:46:29.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:29 vm04 bash[28289]: audit 2026-03-10T10:46:28.629128+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:46:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:29 vm04 bash[20742]: cluster 2026-03-10T10:46:28.164718+0000 mon.a (mon.0) 3793 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-10T10:46:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:29 vm04 bash[20742]: cluster 2026-03-10T10:46:28.164718+0000 mon.a (mon.0) 3793 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-10T10:46:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:29 vm04 bash[20742]: audit 2026-03-10T10:46:28.629128+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:46:29.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:29 vm04 bash[20742]: audit 2026-03-10T10:46:28.629128+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:46:29.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:29 vm07 bash[23367]: cluster 2026-03-10T10:46:28.164718+0000 mon.a (mon.0) 3793 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-10T10:46:29.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:29 vm07 bash[23367]: cluster 2026-03-10T10:46:28.164718+0000 mon.a (mon.0) 3793 : cluster [DBG] osdmap e742: 8 total, 8 up, 8 in 2026-03-10T10:46:29.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:29 vm07 bash[23367]: audit 2026-03-10T10:46:28.629128+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:46:29.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:29 vm07 bash[23367]: audit 2026-03-10T10:46:28.629128+0000 mon.a (mon.0) 3794 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:46:30.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:46:29 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:46:30.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:30 vm07 bash[23367]: cluster 2026-03-10T10:46:28.808384+0000 mgr.y (mgr.24422) 1213 : cluster [DBG] 
pgmap v1634: 164 pgs: 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:30.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:30 vm07 bash[23367]: cluster 2026-03-10T10:46:28.808384+0000 mgr.y (mgr.24422) 1213 : cluster [DBG] pgmap v1634: 164 pgs: 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:30.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:30 vm07 bash[23367]: cluster 2026-03-10T10:46:29.161910+0000 mon.a (mon.0) 3795 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:30.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:30 vm07 bash[23367]: cluster 2026-03-10T10:46:29.161910+0000 mon.a (mon.0) 3795 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:30.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:30 vm07 bash[23367]: cluster 2026-03-10T10:46:29.175307+0000 mon.a (mon.0) 3796 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T10:46:30.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:30 vm07 bash[23367]: cluster 2026-03-10T10:46:29.175307+0000 mon.a (mon.0) 3796 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T10:46:30.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:30 vm04 bash[28289]: cluster 2026-03-10T10:46:28.808384+0000 mgr.y (mgr.24422) 1213 : cluster [DBG] pgmap v1634: 164 pgs: 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:30 vm04 bash[28289]: cluster 2026-03-10T10:46:28.808384+0000 mgr.y (mgr.24422) 1213 : cluster [DBG] pgmap v1634: 164 pgs: 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:30 vm04 bash[28289]: cluster 2026-03-10T10:46:29.161910+0000 mon.a (mon.0) 3795 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:30 vm04 bash[28289]: cluster 2026-03-10T10:46:29.161910+0000 mon.a (mon.0) 3795 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:30 vm04 bash[28289]: cluster 2026-03-10T10:46:29.175307+0000 mon.a (mon.0) 3796 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:30 vm04 bash[28289]: cluster 2026-03-10T10:46:29.175307+0000 mon.a (mon.0) 3796 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:30 vm04 bash[20742]: cluster 2026-03-10T10:46:28.808384+0000 mgr.y (mgr.24422) 1213 : cluster [DBG] pgmap v1634: 164 pgs: 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:30 vm04 bash[20742]: cluster 2026-03-10T10:46:28.808384+0000 mgr.y (mgr.24422) 1213 : cluster [DBG] pgmap v1634: 164 pgs: 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:30 vm04 bash[20742]: cluster 
2026-03-10T10:46:29.161910+0000 mon.a (mon.0) 3795 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:30 vm04 bash[20742]: cluster 2026-03-10T10:46:29.161910+0000 mon.a (mon.0) 3795 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:30 vm04 bash[20742]: cluster 2026-03-10T10:46:29.175307+0000 mon.a (mon.0) 3796 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T10:46:30.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:30 vm04 bash[20742]: cluster 2026-03-10T10:46:29.175307+0000 mon.a (mon.0) 3796 : cluster [DBG] osdmap e743: 8 total, 8 up, 8 in 2026-03-10T10:46:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:31 vm07 bash[23367]: audit 2026-03-10T10:46:29.715341+0000 mgr.y (mgr.24422) 1214 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:31 vm07 bash[23367]: audit 2026-03-10T10:46:29.715341+0000 mgr.y (mgr.24422) 1214 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:31 vm07 bash[23367]: cluster 2026-03-10T10:46:30.242470+0000 mon.a (mon.0) 3797 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T10:46:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:31 vm07 bash[23367]: cluster 2026-03-10T10:46:30.242470+0000 mon.a (mon.0) 3797 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T10:46:31.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:31 vm04 bash[28289]: audit 2026-03-10T10:46:29.715341+0000 mgr.y (mgr.24422) 1214 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:31.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:31 vm04 bash[28289]: audit 2026-03-10T10:46:29.715341+0000 mgr.y (mgr.24422) 1214 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:31.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:31 vm04 bash[28289]: cluster 2026-03-10T10:46:30.242470+0000 mon.a (mon.0) 3797 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T10:46:31.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:31 vm04 bash[28289]: cluster 2026-03-10T10:46:30.242470+0000 mon.a (mon.0) 3797 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T10:46:31.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:31 vm04 bash[20742]: audit 2026-03-10T10:46:29.715341+0000 mgr.y (mgr.24422) 1214 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:31.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:31 vm04 bash[20742]: audit 2026-03-10T10:46:29.715341+0000 mgr.y (mgr.24422) 1214 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:31.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:31 vm04 bash[20742]: cluster 2026-03-10T10:46:30.242470+0000 mon.a (mon.0) 3797 : cluster [DBG] osdmap e744: 
8 total, 8 up, 8 in 2026-03-10T10:46:31.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:31 vm04 bash[20742]: cluster 2026-03-10T10:46:30.242470+0000 mon.a (mon.0) 3797 : cluster [DBG] osdmap e744: 8 total, 8 up, 8 in 2026-03-10T10:46:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:32 vm07 bash[23367]: cluster 2026-03-10T10:46:30.808675+0000 mgr.y (mgr.24422) 1215 : cluster [DBG] pgmap v1637: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:32 vm07 bash[23367]: cluster 2026-03-10T10:46:30.808675+0000 mgr.y (mgr.24422) 1215 : cluster [DBG] pgmap v1637: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:32 vm07 bash[23367]: cluster 2026-03-10T10:46:31.255594+0000 mon.a (mon.0) 3798 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T10:46:32.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:32 vm07 bash[23367]: cluster 2026-03-10T10:46:31.255594+0000 mon.a (mon.0) 3798 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T10:46:32.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:32 vm04 bash[28289]: cluster 2026-03-10T10:46:30.808675+0000 mgr.y (mgr.24422) 1215 : cluster [DBG] pgmap v1637: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:32.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:32 vm04 bash[28289]: cluster 2026-03-10T10:46:30.808675+0000 mgr.y (mgr.24422) 1215 : cluster [DBG] pgmap v1637: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:32.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:32 vm04 bash[28289]: cluster 2026-03-10T10:46:31.255594+0000 mon.a (mon.0) 3798 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T10:46:32.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:32 vm04 bash[28289]: cluster 2026-03-10T10:46:31.255594+0000 mon.a (mon.0) 3798 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T10:46:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:32 vm04 bash[20742]: cluster 2026-03-10T10:46:30.808675+0000 mgr.y (mgr.24422) 1215 : cluster [DBG] pgmap v1637: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:32 vm04 bash[20742]: cluster 2026-03-10T10:46:30.808675+0000 mgr.y (mgr.24422) 1215 : cluster [DBG] pgmap v1637: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:32 vm04 bash[20742]: cluster 2026-03-10T10:46:31.255594+0000 mon.a (mon.0) 3798 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T10:46:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:32 vm04 bash[20742]: cluster 2026-03-10T10:46:31.255594+0000 mon.a (mon.0) 3798 : cluster [DBG] osdmap e745: 8 total, 8 up, 8 in 2026-03-10T10:46:33.270 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:46:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:46:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:46:33.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:33 vm04 
bash[28289]: cluster 2026-03-10T10:46:32.262780+0000 mon.a (mon.0) 3799 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T10:46:33.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:33 vm04 bash[28289]: cluster 2026-03-10T10:46:32.262780+0000 mon.a (mon.0) 3799 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T10:46:33.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:33 vm04 bash[20742]: cluster 2026-03-10T10:46:32.262780+0000 mon.a (mon.0) 3799 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T10:46:33.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:33 vm04 bash[20742]: cluster 2026-03-10T10:46:32.262780+0000 mon.a (mon.0) 3799 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T10:46:33.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:33 vm07 bash[23367]: cluster 2026-03-10T10:46:32.262780+0000 mon.a (mon.0) 3799 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T10:46:33.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:33 vm07 bash[23367]: cluster 2026-03-10T10:46:32.262780+0000 mon.a (mon.0) 3799 : cluster [DBG] osdmap e746: 8 total, 8 up, 8 in 2026-03-10T10:46:34.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:34 vm04 bash[28289]: cluster 2026-03-10T10:46:32.809068+0000 mgr.y (mgr.24422) 1216 : cluster [DBG] pgmap v1640: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:46:34.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:34 vm04 bash[28289]: cluster 2026-03-10T10:46:32.809068+0000 mgr.y (mgr.24422) 1216 : cluster [DBG] pgmap v1640: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:46:34.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:34 vm04 bash[28289]: cluster 2026-03-10T10:46:33.275514+0000 mon.a (mon.0) 3800 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T10:46:34.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:34 vm04 bash[28289]: cluster 2026-03-10T10:46:33.275514+0000 mon.a (mon.0) 3800 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T10:46:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:34 vm04 bash[20742]: cluster 2026-03-10T10:46:32.809068+0000 mgr.y (mgr.24422) 1216 : cluster [DBG] pgmap v1640: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:46:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:34 vm04 bash[20742]: cluster 2026-03-10T10:46:32.809068+0000 mgr.y (mgr.24422) 1216 : cluster [DBG] pgmap v1640: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:46:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:34 vm04 bash[20742]: cluster 2026-03-10T10:46:33.275514+0000 mon.a (mon.0) 3800 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T10:46:34.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:34 vm04 bash[20742]: cluster 2026-03-10T10:46:33.275514+0000 mon.a (mon.0) 3800 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T10:46:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:34 vm07 bash[23367]: cluster 2026-03-10T10:46:32.809068+0000 mgr.y (mgr.24422) 1216 : cluster [DBG] pgmap v1640: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:46:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:34 vm07 bash[23367]: cluster 2026-03-10T10:46:32.809068+0000 mgr.y (mgr.24422) 1216 : cluster [DBG] pgmap v1640: 
196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.0 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:46:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:34 vm07 bash[23367]: cluster 2026-03-10T10:46:33.275514+0000 mon.a (mon.0) 3800 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T10:46:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:34 vm07 bash[23367]: cluster 2026-03-10T10:46:33.275514+0000 mon.a (mon.0) 3800 : cluster [DBG] osdmap e747: 8 total, 8 up, 8 in 2026-03-10T10:46:35.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:35 vm04 bash[28289]: cluster 2026-03-10T10:46:34.283749+0000 mon.a (mon.0) 3801 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T10:46:35.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:35 vm04 bash[28289]: cluster 2026-03-10T10:46:34.283749+0000 mon.a (mon.0) 3801 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T10:46:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:35 vm04 bash[20742]: cluster 2026-03-10T10:46:34.283749+0000 mon.a (mon.0) 3801 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T10:46:35.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:35 vm04 bash[20742]: cluster 2026-03-10T10:46:34.283749+0000 mon.a (mon.0) 3801 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T10:46:35.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:35 vm07 bash[23367]: cluster 2026-03-10T10:46:34.283749+0000 mon.a (mon.0) 3801 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T10:46:35.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:35 vm07 bash[23367]: cluster 2026-03-10T10:46:34.283749+0000 mon.a (mon.0) 3801 : cluster [DBG] osdmap e748: 8 total, 8 up, 8 in 2026-03-10T10:46:36.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:36 vm04 bash[28289]: cluster 2026-03-10T10:46:34.809339+0000 mgr.y (mgr.24422) 1217 : cluster [DBG] pgmap v1643: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:36.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:36 vm04 bash[28289]: cluster 2026-03-10T10:46:34.809339+0000 mgr.y (mgr.24422) 1217 : cluster [DBG] pgmap v1643: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:36 vm04 bash[28289]: cluster 2026-03-10T10:46:35.309475+0000 mon.a (mon.0) 3802 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:36 vm04 bash[28289]: cluster 2026-03-10T10:46:35.309475+0000 mon.a (mon.0) 3802 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:36 vm04 bash[28289]: cluster 2026-03-10T10:46:35.325416+0000 mon.a (mon.0) 3803 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-10T10:46:36.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:36 vm04 bash[28289]: cluster 2026-03-10T10:46:35.325416+0000 mon.a (mon.0) 3803 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-10T10:46:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:36 vm04 bash[20742]: cluster 2026-03-10T10:46:34.809339+0000 mgr.y (mgr.24422) 1217 : cluster [DBG] pgmap v1643: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
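Note the mgr.y exporter answering Prometheus scrapes with HTTP 503 at 10:46:23 and 10:46:33 above; a 503 with a non-empty body usually means the endpoint is up but reporting itself not ready rather than down. A hedged way to probe it by hand, assuming the prometheus mgr module's default port 9283 (the port is not shown in this log):

    # scrape the exporter directly and inspect the response
    curl -sv http://vm04:9283/metrics | head

    # list enabled mgr modules and the endpoints the mgr advertises
    ceph mgr module ls
    ceph mgr services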
2026-03-10T10:46:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:36 vm04 bash[20742]: cluster 2026-03-10T10:46:34.809339+0000 mgr.y (mgr.24422) 1217 : cluster [DBG] pgmap v1643: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:36 vm04 bash[20742]: cluster 2026-03-10T10:46:35.309475+0000 mon.a (mon.0) 3802 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:36 vm04 bash[20742]: cluster 2026-03-10T10:46:35.309475+0000 mon.a (mon.0) 3802 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:36 vm04 bash[20742]: cluster 2026-03-10T10:46:35.325416+0000 mon.a (mon.0) 3803 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-10T10:46:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:36 vm04 bash[20742]: cluster 2026-03-10T10:46:35.325416+0000 mon.a (mon.0) 3803 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-10T10:46:36.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:36 vm07 bash[23367]: cluster 2026-03-10T10:46:34.809339+0000 mgr.y (mgr.24422) 1217 : cluster [DBG] pgmap v1643: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:36.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:36 vm07 bash[23367]: cluster 2026-03-10T10:46:34.809339+0000 mgr.y (mgr.24422) 1217 : cluster [DBG] pgmap v1643: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:36.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:36 vm07 bash[23367]: cluster 2026-03-10T10:46:35.309475+0000 mon.a (mon.0) 3802 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:36.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:36 vm07 bash[23367]: cluster 2026-03-10T10:46:35.309475+0000 mon.a (mon.0) 3802 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:36.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:36 vm07 bash[23367]: cluster 2026-03-10T10:46:35.325416+0000 mon.a (mon.0) 3803 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-10T10:46:36.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:36 vm07 bash[23367]: cluster 2026-03-10T10:46:35.325416+0000 mon.a (mon.0) 3803 : cluster [DBG] osdmap e749: 8 total, 8 up, 8 in 2026-03-10T10:46:37.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:37 vm04 bash[28289]: cluster 2026-03-10T10:46:36.325250+0000 mon.a (mon.0) 3804 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-10T10:46:37.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:37 vm04 bash[28289]: cluster 2026-03-10T10:46:36.325250+0000 mon.a (mon.0) 3804 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-10T10:46:37.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:37 vm04 bash[20742]: cluster 2026-03-10T10:46:36.325250+0000 mon.a (mon.0) 3804 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-10T10:46:37.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:37 vm04 bash[20742]: cluster 2026-03-10T10:46:36.325250+0000 mon.a (mon.0) 3804 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 
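The POOL_APP_NOT_ENABLED health check keeps flapping here because the workunit is creating and deleting short-lived pools without tagging an application on them; that is expected churn during rados_api_tests. For a long-lived pool the standard way to clear the warning is to tag an application, sketched below with <pool> as a placeholder ('rados' is the conventional tag for plain RADOS pools):

    # tag an application on the pool to clear POOL_APP_NOT_ENABLED
    ceph osd pool application enable <pool> rados

    # verify the health check is gone
    ceph health detail | grep POOL_APP_NOT_ENABLED || echo cleared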
2026-03-10T10:46:37.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:37 vm07 bash[23367]: cluster 2026-03-10T10:46:36.325250+0000 mon.a (mon.0) 3804 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-10T10:46:37.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:37 vm07 bash[23367]: cluster 2026-03-10T10:46:36.325250+0000 mon.a (mon.0) 3804 : cluster [DBG] osdmap e750: 8 total, 8 up, 8 in 2026-03-10T10:46:38.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:38 vm04 bash[28289]: cluster 2026-03-10T10:46:36.809668+0000 mgr.y (mgr.24422) 1218 : cluster [DBG] pgmap v1646: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:38.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:38 vm04 bash[28289]: cluster 2026-03-10T10:46:36.809668+0000 mgr.y (mgr.24422) 1218 : cluster [DBG] pgmap v1646: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:38.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:38 vm04 bash[28289]: cluster 2026-03-10T10:46:37.359097+0000 mon.a (mon.0) 3805 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-10T10:46:38.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:38 vm04 bash[28289]: cluster 2026-03-10T10:46:37.359097+0000 mon.a (mon.0) 3805 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-10T10:46:38.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:38 vm04 bash[20742]: cluster 2026-03-10T10:46:36.809668+0000 mgr.y (mgr.24422) 1218 : cluster [DBG] pgmap v1646: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:38.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:38 vm04 bash[20742]: cluster 2026-03-10T10:46:36.809668+0000 mgr.y (mgr.24422) 1218 : cluster [DBG] pgmap v1646: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:38.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:38 vm04 bash[20742]: cluster 2026-03-10T10:46:37.359097+0000 mon.a (mon.0) 3805 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-10T10:46:38.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:38 vm04 bash[20742]: cluster 2026-03-10T10:46:37.359097+0000 mon.a (mon.0) 3805 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-10T10:46:38.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:38 vm07 bash[23367]: cluster 2026-03-10T10:46:36.809668+0000 mgr.y (mgr.24422) 1218 : cluster [DBG] pgmap v1646: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:38.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:38 vm07 bash[23367]: cluster 2026-03-10T10:46:36.809668+0000 mgr.y (mgr.24422) 1218 : cluster [DBG] pgmap v1646: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:38.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:38 vm07 bash[23367]: cluster 2026-03-10T10:46:37.359097+0000 mon.a (mon.0) 3805 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-10T10:46:38.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:38 vm07 bash[23367]: cluster 2026-03-10T10:46:37.359097+0000 mon.a (mon.0) 3805 : cluster [DBG] osdmap e751: 8 total, 8 up, 8 in 2026-03-10T10:46:39.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:39 vm04 bash[28289]: 
cluster 2026-03-10T10:46:38.398179+0000 mon.a (mon.0) 3806 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-10T10:46:39.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:39 vm04 bash[28289]: cluster 2026-03-10T10:46:38.398179+0000 mon.a (mon.0) 3806 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-10T10:46:39.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:39 vm04 bash[20742]: cluster 2026-03-10T10:46:38.398179+0000 mon.a (mon.0) 3806 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-10T10:46:39.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:39 vm04 bash[20742]: cluster 2026-03-10T10:46:38.398179+0000 mon.a (mon.0) 3806 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-10T10:46:39.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:39 vm07 bash[23367]: cluster 2026-03-10T10:46:38.398179+0000 mon.a (mon.0) 3806 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-10T10:46:39.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:39 vm07 bash[23367]: cluster 2026-03-10T10:46:38.398179+0000 mon.a (mon.0) 3806 : cluster [DBG] osdmap e752: 8 total, 8 up, 8 in 2026-03-10T10:46:40.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:46:39 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:46:40.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:40 vm04 bash[28289]: cluster 2026-03-10T10:46:38.809966+0000 mgr.y (mgr.24422) 1219 : cluster [DBG] pgmap v1649: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:40 vm04 bash[28289]: cluster 2026-03-10T10:46:38.809966+0000 mgr.y (mgr.24422) 1219 : cluster [DBG] pgmap v1649: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:40 vm04 bash[28289]: cluster 2026-03-10T10:46:39.404285+0000 mon.a (mon.0) 3807 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-10T10:46:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:40 vm04 bash[28289]: cluster 2026-03-10T10:46:39.404285+0000 mon.a (mon.0) 3807 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-10T10:46:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:40 vm04 bash[20742]: cluster 2026-03-10T10:46:38.809966+0000 mgr.y (mgr.24422) 1219 : cluster [DBG] pgmap v1649: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:40 vm04 bash[20742]: cluster 2026-03-10T10:46:38.809966+0000 mgr.y (mgr.24422) 1219 : cluster [DBG] pgmap v1649: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:40 vm04 bash[20742]: cluster 2026-03-10T10:46:39.404285+0000 mon.a (mon.0) 3807 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-10T10:46:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:40 vm04 bash[20742]: cluster 2026-03-10T10:46:39.404285+0000 mon.a (mon.0) 3807 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-10T10:46:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:40 vm07 bash[23367]: cluster 2026-03-10T10:46:38.809966+0000 mgr.y (mgr.24422) 1219 : cluster [DBG] pgmap v1649: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 
1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:40 vm07 bash[23367]: cluster 2026-03-10T10:46:38.809966+0000 mgr.y (mgr.24422) 1219 : cluster [DBG] pgmap v1649: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:40 vm07 bash[23367]: cluster 2026-03-10T10:46:39.404285+0000 mon.a (mon.0) 3807 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-10T10:46:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:40 vm07 bash[23367]: cluster 2026-03-10T10:46:39.404285+0000 mon.a (mon.0) 3807 : cluster [DBG] osdmap e753: 8 total, 8 up, 8 in 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: Running main() from gmock_main.cc 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [==========] Running 7 tests from 1 test suite. 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [----------] Global test environment set-up. 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertExists 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertExists (1801277 ms) 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ RUN ] NeoRadosWriteOps.AssertVersion 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ OK ] NeoRadosWriteOps.AssertVersion (3017 ms) 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Xattrs 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ OK ] NeoRadosWriteOps.Xattrs (3102 ms) 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Write 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ OK ] NeoRadosWriteOps.Write (3087 ms) 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ RUN ] NeoRadosWriteOps.Exec 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ OK ] NeoRadosWriteOps.Exec (3032 ms) 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ RUN ] NeoRadosWriteOps.WriteSame 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ OK ] NeoRadosWriteOps.WriteSame (3072 ms) 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ RUN ] NeoRadosWriteOps.CmpExt 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ OK ] NeoRadosWriteOps.CmpExt (4069 ms) 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [----------] 7 tests from NeoRadosWriteOps (1820656 ms total) 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [----------] Global test environment tear-down 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [==========] 7 tests from 1 test suite ran. 
(1820656 ms total) 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stdout: write_operations: [ PASSED ] 7 tests. 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stderr:+ exit 0 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stderr:+ cleanup 2026-03-10T10:46:41.431 INFO:tasks.workunit.client.0.vm04.stderr:+ pkill -P 59243 2026-03-10T10:46:41.436 INFO:tasks.workunit.client.0.vm04.stderr:+ true 2026-03-10T10:46:41.436 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-10T10:46:41.436 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-10T10:46:41.447 INFO:tasks.workunit:Running workunits matching rados/test_pool_quota.sh on client.0... 2026-03-10T10:46:41.448 INFO:tasks.workunit:Running workunit rados/test_pool_quota.sh... 2026-03-10T10:46:41.448 DEBUG:teuthology.orchestra.run.vm04:workunit test rados/test_pool_quota.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_pool_quota.sh 2026-03-10T10:46:41.496 INFO:tasks.workunit.client.0.vm04.stderr:+ uuidgen 2026-03-10T10:46:41.497 INFO:tasks.workunit.client.0.vm04.stderr:+ p=3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f 2026-03-10T10:46:41.497 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool create 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f 12 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.560+0000 7fcda626b640 1 -- 192.168.123.104:0/3821266 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcda01057d0 msgr2=0x7fcda0109820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.560+0000 7fcda626b640 1 --2- 192.168.123.104:0/3821266 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcda01057d0 0x7fcda0109820 secure :-1 s=READY pgs=3063 cs=0 l=1 rev1=1 crypto rx=0x7fcd8c009a30 tx=0x7fcd8c01c920 comp rx=0 tx=0).stop 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.560+0000 7fcda626b640 1 -- 192.168.123.104:0/3821266 shutdown_connections 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.560+0000 7fcda626b640 1 --2- 192.168.123.104:0/3821266 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcda0109f50 0x7fcda0111ad0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.560+0000 7fcda626b640 1 --2- 192.168.123.104:0/3821266 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcda01057d0 0x7fcda0109820 unknown :-1 s=CLOSED pgs=3063 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.560+0000 7fcda626b640 1 --2- 192.168.123.104:0/3821266 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcda0104e20 0x7fcda0105200 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 
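The test_pool_quota.sh workunit has now started: the trace shows it generating a uuid pool name and issuing 'ceph osd pool create 3d08dfcd-... 12'. The script's contents are not in this log, but a pool-quota test of this kind exercises the standard quota primitives; a minimal sketch, assuming the stock ceph CLI and using <pool> as a placeholder with arbitrary example limits:

    # create a pool and set object/byte quotas on it
    ceph osd pool create <pool> 12
    ceph osd pool set-quota <pool> max_objects 10
    ceph osd pool set-quota <pool> max_bytes 104857600   # 100 MiB
    ceph osd pool get-quota <pool>

    # writes past the quota eventually surface as pool-full /
    # quota warnings in cluster health
    ceph health detail

    # a quota of 0 means unlimited, i.e. this clears it
    ceph osd pool set-quota <pool> max_objects 0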
2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.560+0000 7fcda626b640 1 -- 192.168.123.104:0/3821266 >> 192.168.123.104:0/3821266 conn(0x7fcda0100880 msgr2=0x7fcda0102ca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.560+0000 7fcda626b640 1 -- 192.168.123.104:0/3821266 shutdown_connections 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.560+0000 7fcda626b640 1 -- 192.168.123.104:0/3821266 wait complete. 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 Processor -- start 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 -- start start 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcda0104e20 0x7fcda019f1b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcda01057d0 0x7fcda019f6f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcda0109f50 0x7fcda01a3a80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fcda0116c60 con 0x7fcda01057d0 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fcda0116ae0 con 0x7fcda0104e20 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fcda0116de0 con 0x7fcda0109f50 2026-03-10T10:46:41.566 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9ffff640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcda0109f50 0x7fcda01a3a80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9ffff640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcda0109f50 0x7fcda01a3a80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:37006/0 (socket says 192.168.123.104:37006) 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9ffff640 1 -- 192.168.123.104:0/223883202 learned_addr learned my addr 192.168.123.104:0/223883202 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9f7fe640 1 --2- 192.168.123.104:0/223883202 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcda0104e20 0x7fcda019f1b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9ffff640 1 -- 192.168.123.104:0/223883202 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcda0104e20 msgr2=0x7fcda019f1b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9ffff640 1 --2- 192.168.123.104:0/223883202 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcda0104e20 0x7fcda019f1b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9ffff640 1 -- 192.168.123.104:0/223883202 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcda01057d0 msgr2=0x7fcda019f6f0 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9ffff640 1 --2- 192.168.123.104:0/223883202 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcda01057d0 0x7fcda019f6f0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9ffff640 1 -- 192.168.123.104:0/223883202 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fcda01a4160 con 0x7fcda0109f50 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9f7fe640 1 --2- 192.168.123.104:0/223883202 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcda0104e20 0x7fcda019f1b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed! 
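[editor's note] The wall of `--` / `--2-` lines around each command is msgr2 messenger debug output from the short-lived CLI client: every `ceph` invocation boots a fresh RADOS client, learns its own address, subscribes to monmap/config/mgrmap/osdmap, runs one mon_command, and marks every connection down again on exit. The lines appear because messenger debugging is turned up for clients in this run. A hedged sketch of toggling that verbosity on an ordinary cluster (any Ceph config option can also be passed as a one-off CLI flag):

    # One-off override for a single invocation:
    ceph --debug-ms 1 osd pool ls     # verbose connect/mark_down trace, as above
    ceph --debug-ms 0 osd pool ls     # quiet

    # Or persist it for all clients via the mon config store:
    ceph config set client debug_ms 1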
2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9ffff640 1 --2- 192.168.123.104:0/223883202 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcda0109f50 0x7fcda01a3a80 secure :-1 s=READY pgs=3370 cs=0 l=1 rev1=1 crypto rx=0x7fcd94004a30 tx=0x7fcd9400d4a0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9cff9640 1 -- 192.168.123.104:0/223883202 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fcd9400dbb0 con 0x7fcda0109f50 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9cff9640 1 -- 192.168.123.104:0/223883202 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fcd9400dd50 con 0x7fcda0109f50 2026-03-10T10:46:41.567 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcd9cff9640 1 -- 192.168.123.104:0/223883202 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fcd94013800 con 0x7fcda0109f50 2026-03-10T10:46:41.568 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fcda01a43f0 con 0x7fcda0109f50 2026-03-10T10:46:41.568 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fcda01abd30 con 0x7fcda0109f50 2026-03-10T10:46:41.569 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.564+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fcd64005190 con 0x7fcda0109f50 2026-03-10T10:46:41.572 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.568+0000 7fcd9cff9640 1 -- 192.168.123.104:0/223883202 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fcd94025080 con 0x7fcda0109f50 2026-03-10T10:46:41.573 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.568+0000 7fcd9cff9640 1 --2- 192.168.123.104:0/223883202 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcd740777a0 0x7fcd74079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:46:41.573 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.568+0000 7fcd9cff9640 1 -- 192.168.123.104:0/223883202 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(755..755 src has 251..755) ==== 7348+0+0 (secure 0 0 0) 0x7fcd9409a7b0 con 0x7fcda0109f50 2026-03-10T10:46:41.573 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.568+0000 7fcd9f7fe640 1 --2- 192.168.123.104:0/223883202 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcd740777a0 0x7fcd74079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:46:41.573 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.568+0000 7fcd9cff9640 1 -- 192.168.123.104:0/223883202 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fcd94010070 con 0x7fcda0109f50 2026-03-10T10:46:41.573 
INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.568+0000 7fcd9f7fe640 1 --2- 192.168.123.104:0/223883202 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcd740777a0 0x7fcd74079c60 secure :-1 s=READY pgs=4252 cs=0 l=1 rev1=1 crypto rx=0x7fcd90005fd0 tx=0x7fcd90005eb0 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:46:41.667 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:41.664+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12} v 0) -- 0x7fcd64005480 con 0x7fcda0109f50 2026-03-10T10:46:41.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:41 vm04 bash[28289]: audit 2026-03-10T10:46:39.725371+0000 mgr.y (mgr.24422) 1220 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:41.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:41 vm04 bash[28289]: audit 2026-03-10T10:46:39.725371+0000 mgr.y (mgr.24422) 1220 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:41.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:41 vm04 bash[28289]: cluster 2026-03-10T10:46:40.416755+0000 mon.a (mon.0) 3808 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-10T10:46:41.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:41 vm04 bash[28289]: cluster 2026-03-10T10:46:40.416755+0000 mon.a (mon.0) 3808 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-10T10:46:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:41 vm04 bash[20742]: audit 2026-03-10T10:46:39.725371+0000 mgr.y (mgr.24422) 1220 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:41 vm04 bash[20742]: audit 2026-03-10T10:46:39.725371+0000 mgr.y (mgr.24422) 1220 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:41 vm04 bash[20742]: cluster 2026-03-10T10:46:40.416755+0000 mon.a (mon.0) 3808 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-10T10:46:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:41 vm04 bash[20742]: cluster 2026-03-10T10:46:40.416755+0000 mon.a (mon.0) 3808 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-10T10:46:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:41 vm07 bash[23367]: audit 2026-03-10T10:46:39.725371+0000 mgr.y (mgr.24422) 1220 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:41 vm07 bash[23367]: audit 2026-03-10T10:46:39.725371+0000 mgr.y (mgr.24422) 1220 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:46:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:41 vm07 bash[23367]: cluster 2026-03-10T10:46:40.416755+0000 mon.a (mon.0) 3808 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-10T10:46:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:41 vm07 
bash[23367]: cluster 2026-03-10T10:46:40.416755+0000 mon.a (mon.0) 3808 : cluster [DBG] osdmap e754: 8 total, 8 up, 8 in 2026-03-10T10:46:42.447 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.444+0000 7fcd9cff9640 1 -- 192.168.123.104:0/223883202 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]=0 pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' created v756) ==== 176+0+0 (secure 0 0 0) 0x7fcd94067240 con 0x7fcda0109f50 2026-03-10T10:46:42.505 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.500+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12} v 0) -- 0x7fcd64004910 con 0x7fcda0109f50 2026-03-10T10:46:42.506 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcd9cff9640 1 -- 192.168.123.104:0/223883202 <== mon.2 v2:192.168.123.104:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]=0 pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' already exists v756) ==== 183+0+0 (secure 0 0 0) 0x7fcd9406c0f0 con 0x7fcda0109f50 2026-03-10T10:46:42.506 INFO:tasks.workunit.client.0.vm04.stderr:pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' already exists 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcd740777a0 msgr2=0x7fcd74079c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 --2- 192.168.123.104:0/223883202 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcd740777a0 0x7fcd74079c60 secure :-1 s=READY pgs=4252 cs=0 l=1 rev1=1 crypto rx=0x7fcd90005fd0 tx=0x7fcd90005eb0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcda0109f50 msgr2=0x7fcda01a3a80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 --2- 192.168.123.104:0/223883202 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcda0109f50 0x7fcda01a3a80 secure :-1 s=READY pgs=3370 cs=0 l=1 rev1=1 crypto rx=0x7fcd94004a30 tx=0x7fcd9400d4a0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 shutdown_connections 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 --2- 192.168.123.104:0/223883202 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcd740777a0 0x7fcd74079c60 unknown :-1 s=CLOSED pgs=4252 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 --2- 192.168.123.104:0/223883202 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcda0109f50 0x7fcda01a3a80 unknown :-1 s=CLOSED pgs=3370 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 
--2- 192.168.123.104:0/223883202 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcda01057d0 0x7fcda019f6f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 --2- 192.168.123.104:0/223883202 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcda0104e20 0x7fcda019f1b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 >> 192.168.123.104:0/223883202 conn(0x7fcda0100880 msgr2=0x7fcda0100e90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 shutdown_connections 2026-03-10T10:46:42.509 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.504+0000 7fcda626b640 1 -- 192.168.123.104:0/223883202 wait complete. 2026-03-10T10:46:42.522 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool set-quota 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f max_objects 10 2026-03-10T10:46:42.583 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- 192.168.123.104:0/896656059 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f33ec10f1e0 msgr2=0x7f33ec111660 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:46:42.583 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 --2- 192.168.123.104:0/896656059 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f33ec10f1e0 0x7f33ec111660 secure :-1 s=READY pgs=3064 cs=0 l=1 rev1=1 crypto rx=0x7f33e000b0a0 tx=0x7f33e001cae0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.583 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- 192.168.123.104:0/896656059 shutdown_connections 2026-03-10T10:46:42.583 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 --2- 192.168.123.104:0/896656059 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f33ec10f1e0 0x7f33ec111660 unknown :-1 s=CLOSED pgs=3064 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.583 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 --2- 192.168.123.104:0/896656059 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33ec101510 0x7f33ec10eb70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.583 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 --2- 192.168.123.104:0/896656059 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f33ec100bf0 0x7f33ec100fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.583 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- 192.168.123.104:0/896656059 >> 192.168.123.104:0/896656059 conn(0x7f33ec0fc820 msgr2=0x7f33ec0fec40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:46:42.583 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- 192.168.123.104:0/896656059 shutdown_connections 2026-03-10T10:46:42.583 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- 192.168.123.104:0/896656059 wait complete. 
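[editor's note] The step traced above applies an object-count quota to the new pool. For reference, pool quotas can be set and read back like this (a generic sketch, not the workunit's own assertions):

    p=3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f        # the pool created above
    ceph osd pool set-quota "$p" max_objects 10   # cap the pool at 10 objects
    ceph osd pool get-quota "$p"                  # show current object/byte quotas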
2026-03-10T10:46:42.583 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 Processor -- start 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- start start 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33ec100bf0 0x7f33ec19f110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f33ec101510 0x7f33ec19f650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f33ec10f1e0 0x7f33ec1a39e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f33ec116cc0 con 0x7f33ec101510 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f33ec116b40 con 0x7f33ec100bf0 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f33ec116e40 con 0x7f33ec10f1e0 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33ebfff640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f33ec10f1e0 0x7f33ec1a39e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33ebfff640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f33ec10f1e0 0x7f33ec1a39e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:37022/0 (socket says 192.168.123.104:37022) 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33ebfff640 1 -- 192.168.123.104:0/1151779075 learned_addr learned my addr 192.168.123.104:0/1151779075 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33ebfff640 1 -- 192.168.123.104:0/1151779075 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33ec100bf0 msgr2=0x7f33ec19f110 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33ebfff640 1 --2- 192.168.123.104:0/1151779075 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33ec100bf0 0x7f33ec19f110 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33ebfff640 1 -- 192.168.123.104:0/1151779075 >> 
[v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f33ec101510 msgr2=0x7f33ec19f650 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33ebfff640 1 --2- 192.168.123.104:0/1151779075 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f33ec101510 0x7f33ec19f650 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33ebfff640 1 -- 192.168.123.104:0/1151779075 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f33ec1a4160 con 0x7f33ec10f1e0 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33ebfff640 1 --2- 192.168.123.104:0/1151779075 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f33ec10f1e0 0x7f33ec1a39e0 secure :-1 s=READY pgs=3371 cs=0 l=1 rev1=1 crypto rx=0x7f33e001c910 tx=0x7f33e00077f0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33e8ff9640 1 -- 192.168.123.104:0/1151779075 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f33e0007990 con 0x7f33ec10f1e0 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33e8ff9640 1 -- 192.168.123.104:0/1151779075 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f33e0004070 con 0x7f33ec10f1e0 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f33ec1a43f0 con 0x7f33ec10f1e0 2026-03-10T10:46:42.584 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f33ec1abc90 con 0x7f33ec10f1e0 2026-03-10T10:46:42.585 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33e8ff9640 1 -- 192.168.123.104:0/1151779075 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f33e0007d30 con 0x7f33ec10f1e0 2026-03-10T10:46:42.588 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f33b0005190 con 0x7f33ec10f1e0 2026-03-10T10:46:42.589 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.580+0000 7f33e8ff9640 1 -- 192.168.123.104:0/1151779075 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f33e0005ce0 con 0x7f33ec10f1e0 2026-03-10T10:46:42.589 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.584+0000 7f33e8ff9640 1 --2- 192.168.123.104:0/1151779075 >> v2:192.168.123.104:6800/3326026257 conn(0x7f33c00776d0 0x7f33c0079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:46:42.589 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.584+0000 7f33e8ff9640 1 -- 192.168.123.104:0/1151779075 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(756..756 src has 251..756) ==== 
7723+0+0 (secure 0 0 0) 0x7f33e01341e0 con 0x7f33ec10f1e0 2026-03-10T10:46:42.589 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.584+0000 7f33e8ff9640 1 -- 192.168.123.104:0/1151779075 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f33e0016760 con 0x7f33ec10f1e0 2026-03-10T10:46:42.589 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.584+0000 7f33eb7fe640 1 --2- 192.168.123.104:0/1151779075 >> v2:192.168.123.104:6800/3326026257 conn(0x7f33c00776d0 0x7f33c0079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:46:42.589 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.584+0000 7f33eb7fe640 1 --2- 192.168.123.104:0/1151779075 >> v2:192.168.123.104:6800/3326026257 conn(0x7f33c00776d0 0x7f33c0079b90 secure :-1 s=READY pgs=4253 cs=0 l=1 rev1=1 crypto rx=0x7f33dc005e10 tx=0x7f33dc005d80 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:46:42.686 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:42.680+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"} v 0) -- 0x7f33b0005480 con 0x7f33ec10f1e0 2026-03-10T10:46:42.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:42 vm04 bash[28289]: cluster 2026-03-10T10:46:40.810339+0000 mgr.y (mgr.24422) 1221 : cluster [DBG] pgmap v1652: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:42 vm04 bash[28289]: cluster 2026-03-10T10:46:40.810339+0000 mgr.y (mgr.24422) 1221 : cluster [DBG] pgmap v1652: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:42 vm04 bash[28289]: cluster 2026-03-10T10:46:41.428508+0000 mon.a (mon.0) 3809 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:42 vm04 bash[28289]: cluster 2026-03-10T10:46:41.428508+0000 mon.a (mon.0) 3809 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:42 vm04 bash[28289]: audit 2026-03-10T10:46:41.669131+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:42 vm04 bash[28289]: audit 2026-03-10T10:46:41.669131+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:42 vm04 bash[28289]: audit 2026-03-10T10:46:41.669526+0000 mon.a (mon.0) 3810 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:42 vm04 bash[28289]: audit 2026-03-10T10:46:41.669526+0000 mon.a (mon.0) 3810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:42 vm04 bash[20742]: cluster 2026-03-10T10:46:40.810339+0000 mgr.y (mgr.24422) 1221 : cluster [DBG] pgmap v1652: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:42 vm04 bash[20742]: cluster 2026-03-10T10:46:40.810339+0000 mgr.y (mgr.24422) 1221 : cluster [DBG] pgmap v1652: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:42 vm04 bash[20742]: cluster 2026-03-10T10:46:41.428508+0000 mon.a (mon.0) 3809 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:42 vm04 bash[20742]: cluster 2026-03-10T10:46:41.428508+0000 mon.a (mon.0) 3809 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:42 vm04 bash[20742]: audit 2026-03-10T10:46:41.669131+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:42 vm04 bash[20742]: audit 2026-03-10T10:46:41.669131+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:42 vm04 bash[20742]: audit 2026-03-10T10:46:41.669526+0000 mon.a (mon.0) 3810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:42 vm04 bash[20742]: audit 2026-03-10T10:46:41.669526+0000 mon.a (mon.0) 3810 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:42 vm07 bash[23367]: cluster 2026-03-10T10:46:40.810339+0000 mgr.y (mgr.24422) 1221 : cluster [DBG] pgmap v1652: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:42 vm07 bash[23367]: cluster 2026-03-10T10:46:40.810339+0000 mgr.y (mgr.24422) 1221 : cluster [DBG] pgmap v1652: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:46:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:42 vm07 bash[23367]: cluster 2026-03-10T10:46:41.428508+0000 mon.a (mon.0) 3809 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-10T10:46:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:42 vm07 bash[23367]: cluster 2026-03-10T10:46:41.428508+0000 mon.a (mon.0) 3809 : cluster [DBG] osdmap e755: 8 total, 8 up, 8 in 2026-03-10T10:46:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:42 vm07 bash[23367]: audit 2026-03-10T10:46:41.669131+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:42 vm07 bash[23367]: audit 2026-03-10T10:46:41.669131+0000 mon.c (mon.2) 479 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:42 vm07 bash[23367]: audit 2026-03-10T10:46:41.669526+0000 mon.a (mon.0) 3810 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:42 vm07 bash[23367]: audit 2026-03-10T10:46:41.669526+0000 mon.a (mon.0) 3810 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:46:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:46:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:46:43.472 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:43.468+0000 7f33e8ff9640 1 -- 192.168.123.104:0/1151779075 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v757) ==== 223+0+0 (secure 0 0 0) 0x7f33e0100940 con 0x7f33ec10f1e0 2026-03-10T10:46:43.530 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:43.528+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"} v 0) -- 0x7f33b0004910 con 0x7f33ec10f1e0 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: audit 2026-03-10T10:46:42.444402+0000 mon.a (mon.0) 3811 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]': finished 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: audit 2026-03-10T10:46:42.444402+0000 mon.a (mon.0) 3811 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]': finished 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: cluster 2026-03-10T10:46:42.455867+0000 mon.a (mon.0) 3812 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: cluster 2026-03-10T10:46:42.455867+0000 mon.a (mon.0) 3812 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: audit 2026-03-10T10:46:42.507222+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: audit 2026-03-10T10:46:42.507222+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: audit 2026-03-10T10:46:42.507943+0000 mon.a (mon.0) 3813 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: audit 2026-03-10T10:46:42.507943+0000 mon.a (mon.0) 3813 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: audit 2026-03-10T10:46:42.686833+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.104:0/1151779075' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: audit 2026-03-10T10:46:42.686833+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.104:0/1151779075' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: audit 2026-03-10T10:46:42.687401+0000 mon.a (mon.0) 3814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: audit 2026-03-10T10:46:42.687401+0000 mon.a (mon.0) 3814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: cluster 2026-03-10T10:46:42.747250+0000 mon.a (mon.0) 3815 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:43.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:43 vm07 bash[23367]: cluster 2026-03-10T10:46:42.747250+0000 mon.a (mon.0) 3815 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:43.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: audit 2026-03-10T10:46:42.444402+0000 mon.a (mon.0) 3811 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]': finished 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: audit 2026-03-10T10:46:42.444402+0000 mon.a (mon.0) 3811 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]': finished 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: cluster 2026-03-10T10:46:42.455867+0000 mon.a (mon.0) 3812 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: cluster 2026-03-10T10:46:42.455867+0000 mon.a (mon.0) 3812 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: audit 2026-03-10T10:46:42.507222+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 
192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: audit 2026-03-10T10:46:42.507222+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: audit 2026-03-10T10:46:42.507943+0000 mon.a (mon.0) 3813 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: audit 2026-03-10T10:46:42.507943+0000 mon.a (mon.0) 3813 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: audit 2026-03-10T10:46:42.686833+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.104:0/1151779075' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: audit 2026-03-10T10:46:42.686833+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.104:0/1151779075' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: audit 2026-03-10T10:46:42.687401+0000 mon.a (mon.0) 3814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: audit 2026-03-10T10:46:42.687401+0000 mon.a (mon.0) 3814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: cluster 2026-03-10T10:46:42.747250+0000 mon.a (mon.0) 3815 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:43 vm04 bash[28289]: cluster 2026-03-10T10:46:42.747250+0000 mon.a (mon.0) 3815 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: audit 2026-03-10T10:46:42.444402+0000 mon.a (mon.0) 3811 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]': finished 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: audit 2026-03-10T10:46:42.444402+0000 mon.a (mon.0) 3811 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]': finished 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: cluster 2026-03-10T10:46:42.455867+0000 mon.a (mon.0) 3812 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: cluster 2026-03-10T10:46:42.455867+0000 mon.a (mon.0) 3812 : cluster [DBG] osdmap e756: 8 total, 8 up, 8 in 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: audit 2026-03-10T10:46:42.507222+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: audit 2026-03-10T10:46:42.507222+0000 mon.c (mon.2) 480 : audit [INF] from='client.? 192.168.123.104:0/223883202' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: audit 2026-03-10T10:46:42.507943+0000 mon.a (mon.0) 3813 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: audit 2026-03-10T10:46:42.507943+0000 mon.a (mon.0) 3813 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pg_num": 12}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: audit 2026-03-10T10:46:42.686833+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.104:0/1151779075' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: audit 2026-03-10T10:46:42.686833+0000 mon.c (mon.2) 481 : audit [INF] from='client.? 192.168.123.104:0/1151779075' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: audit 2026-03-10T10:46:42.687401+0000 mon.a (mon.0) 3814 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: audit 2026-03-10T10:46:42.687401+0000 mon.a (mon.0) 3814 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: cluster 2026-03-10T10:46:42.747250+0000 mon.a (mon.0) 3815 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:43 vm04 bash[20742]: cluster 2026-03-10T10:46:42.747250+0000 mon.a (mon.0) 3815 : cluster [WRN] Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: cluster 2026-03-10T10:46:42.810730+0000 mgr.y (mgr.24422) 1222 : cluster [DBG] pgmap v1655: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: cluster 2026-03-10T10:46:42.810730+0000 mgr.y (mgr.24422) 1222 : cluster [DBG] pgmap v1655: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: audit 2026-03-10T10:46:43.454196+0000 mon.a (mon.0) 3816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]': finished 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: audit 2026-03-10T10:46:43.454196+0000 mon.a (mon.0) 3816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]': finished 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: cluster 2026-03-10T10:46:43.464915+0000 mon.a (mon.0) 3817 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: cluster 2026-03-10T10:46:43.464915+0000 mon.a (mon.0) 3817 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: audit 2026-03-10T10:46:43.532626+0000 mon.c (mon.2) 482 : audit [INF] from='client.? 192.168.123.104:0/1151779075' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: audit 2026-03-10T10:46:43.532626+0000 mon.c (mon.2) 482 : audit [INF] from='client.? 192.168.123.104:0/1151779075' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: audit 2026-03-10T10:46:43.533229+0000 mon.a (mon.0) 3818 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: audit 2026-03-10T10:46:43.533229+0000 mon.a (mon.0) 3818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: audit 2026-03-10T10:46:43.634810+0000 mon.a (mon.0) 3819 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:44 vm04 bash[28289]: audit 2026-03-10T10:46:43.634810+0000 mon.a (mon.0) 3819 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:44 vm04 bash[20742]: cluster 2026-03-10T10:46:42.810730+0000 mgr.y (mgr.24422) 1222 : cluster [DBG] pgmap v1655: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:46:44.459 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:44 vm04 bash[20742]: cluster 2026-03-10T10:46:42.810730+0000 mgr.y (mgr.24422) 1222 : cluster [DBG] pgmap v1655: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:46:44.480 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.476+0000 7f33e8ff9640 1 -- 192.168.123.104:0/1151779075 <== mon.2 v2:192.168.123.104:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v758) ==== 223+0+0 (secure 0 0 0) 0x7f33e01057f0 con 0x7f33ec10f1e0 2026-03-10T10:46:44.480 INFO:tasks.workunit.client.0.vm04.stderr:set-quota max_objects = 10 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 >> v2:192.168.123.104:6800/3326026257 conn(0x7f33c00776d0 msgr2=0x7f33c0079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 --2- 192.168.123.104:0/1151779075 >> v2:192.168.123.104:6800/3326026257 conn(0x7f33c00776d0 0x7f33c0079b90 secure :-1 s=READY pgs=4253 cs=0 l=1 rev1=1 crypto rx=0x7f33dc005e10 tx=0x7f33dc005d80 comp rx=0 tx=0).stop 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f33ec10f1e0 msgr2=0x7f33ec1a39e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 --2- 192.168.123.104:0/1151779075 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f33ec10f1e0 0x7f33ec1a39e0 secure :-1 s=READY pgs=3371 cs=0 l=1 rev1=1 crypto rx=0x7f33e001c910 tx=0x7f33e00077f0 comp rx=0 tx=0).stop 2026-03-10T10:46:44.482 
INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 shutdown_connections 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 --2- 192.168.123.104:0/1151779075 >> v2:192.168.123.104:6800/3326026257 conn(0x7f33c00776d0 0x7f33c0079b90 unknown :-1 s=CLOSED pgs=4253 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 --2- 192.168.123.104:0/1151779075 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f33ec10f1e0 0x7f33ec1a39e0 unknown :-1 s=CLOSED pgs=3371 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 --2- 192.168.123.104:0/1151779075 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f33ec101510 0x7f33ec19f650 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 --2- 192.168.123.104:0/1151779075 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f33ec100bf0 0x7f33ec19f110 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 >> 192.168.123.104:0/1151779075 conn(0x7f33ec0fc820 msgr2=0x7f33ec0fec10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 shutdown_connections 2026-03-10T10:46:44.482 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.480+0000 7f33f1d61640 1 -- 192.168.123.104:0/1151779075 wait complete. 
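[editor's note] Every mon_command in this trace reaches the monitors twice: the second `osd pool create` is acked with "pool ... already exists", and `osd pool set-quota` gets two full dispatch/finished cycles (osdmap bumps to e757, then e758). That is deliberate — the workunit runs with CEPH_CLI_TEST_DUP_COMMAND=1 (visible in its command line above), which makes the `ceph` CLI resend each command to verify it is idempotent. The same check done by hand:

    p=3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f
    ceph osd pool create "$p" 12   # -> pool '...' created
    ceph osd pool create "$p" 12   # -> succeeds again: pool '...' already exists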
2026-03-10T10:46:44.494 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool application enable 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f rados 2026-03-10T10:46:44.556 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 -- 192.168.123.104:0/2110004611 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9cb4113730 msgr2=0x7f9cb4115b60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:46:44.556 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 --2- 192.168.123.104:0/2110004611 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9cb4113730 0x7f9cb4115b60 secure :-1 s=READY pgs=3065 cs=0 l=1 rev1=1 crypto rx=0x7f9ca400b0a0 tx=0x7f9ca401cba0 comp rx=0 tx=0).stop 2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 -- 192.168.123.104:0/2110004611 shutdown_connections 2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 --2- 192.168.123.104:0/2110004611 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9cb4113730 0x7f9cb4115b60 unknown :-1 s=CLOSED pgs=3065 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 --2- 192.168.123.104:0/2110004611 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9cb4075960 0x7f9cb4075da0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 --2- 192.168.123.104:0/2110004611 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9cb4106830 0x7f9cb4075420 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 -- 192.168.123.104:0/2110004611 >> 192.168.123.104:0/2110004611 conn(0x7f9cb40fe640 msgr2=0x7f9cb4100a60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 -- 192.168.123.104:0/2110004611 shutdown_connections 2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 -- 192.168.123.104:0/2110004611 wait complete. 
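[editor's note] The `osd pool application enable` step traced above tags the pool with the `rados` application. Until a pool is tagged, the monitor raises the POOL_APP_NOT_ENABLED warning seen in the journalctl lines earlier ("2 pool(s) do not have an application enabled"); enabling an application clears it for that pool. A minimal sketch:

    p=3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f
    ceph osd pool application enable "$p" rados   # clears POOL_APP_NOT_ENABLED for this pool
    ceph osd pool application get "$p"            # confirm the tag
    ceph health detail                            # lists any still-untagged pools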
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 Processor -- start
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 -- start start
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9cb4075960 0x7f9cb41a4330 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9cb4106830 0x7f9cb41a4870 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9cb4113730 0x7f9cb41a8c00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f9cb411bde0 con 0x7f9cb4075960
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f9cb411bc60 con 0x7f9cb4113730
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb97db640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f9cb411bf60 con 0x7f9cb4106830
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb2ffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9cb4075960 0x7f9cb41a4330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb2ffd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9cb4075960 0x7f9cb41a4330 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:38680/0 (socket says 192.168.123.104:38680)
2026-03-10T10:46:44.557 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb2ffd640 1 -- 192.168.123.104:0/3470338320 learned_addr learned my addr 192.168.123.104:0/3470338320 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:46:44.558 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb37fe640 1 --2- 192.168.123.104:0/3470338320 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9cb4113730 0x7f9cb41a8c00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:46:44.558 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb2ffd640 1 -- 192.168.123.104:0/3470338320 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9cb4106830 msgr2=0x7f9cb41a4870 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:46:44.558 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb2ffd640 1 --2- 192.168.123.104:0/3470338320 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9cb4106830 0x7f9cb41a4870 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:46:44.558 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb2ffd640 1 -- 192.168.123.104:0/3470338320 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9cb4113730 msgr2=0x7f9cb41a8c00 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:46:44.558 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb2ffd640 1 --2- 192.168.123.104:0/3470338320 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9cb4113730 0x7f9cb41a8c00 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:46:44.558 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.552+0000 7f9cb2ffd640 1 -- 192.168.123.104:0/3470338320 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f9cb41a9380 con 0x7f9cb4075960
2026-03-10T10:46:44.558 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.556+0000 7f9cb2ffd640 1 --2- 192.168.123.104:0/3470338320 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9cb4075960 0x7f9cb41a4330 secure :-1 s=READY pgs=3066 cs=0 l=1 rev1=1 crypto rx=0x7f9ca800dcf0 tx=0x7f9ca800b630 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:46:44.558 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.556+0000 7f9c93fff640 1 -- 192.168.123.104:0/3470338320 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9ca8014070 con 0x7f9cb4075960
2026-03-10T10:46:44.559 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.556+0000 7f9c93fff640 1 -- 192.168.123.104:0/3470338320 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f9ca8004540 con 0x7f9cb4075960
2026-03-10T10:46:44.559 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.556+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f9cb41a9670 con 0x7f9cb4075960
2026-03-10T10:46:44.559 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.556+0000 7f9c93fff640 1 -- 192.168.123.104:0/3470338320 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f9ca8005020 con 0x7f9cb4075960
2026-03-10T10:46:44.559 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.556+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f9cb41b0eb0 con 0x7f9cb4075960
2026-03-10T10:46:44.559 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.556+0000 7f9c93fff640 1 -- 192.168.123.104:0/3470338320 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f9ca80040d0 con 0x7f9cb4075960
2026-03-10T10:46:44.562 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.556+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f9c78005190 con 0x7f9cb4075960
2026-03-10T10:46:44.562 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.556+0000 7f9c93fff640 1 --2- 192.168.123.104:0/3470338320 >> v2:192.168.123.104:6800/3326026257 conn(0x7f9c88077640 0x7f9c88079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
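[Context: the '--' and '--2-' lines above are AsyncMessenger/msgr2 debug output; they appear because this client is running with messenger debugging enabled (debug ms = 1). Each short-lived CLI invocation builds a fresh mon session, which is why every command is bracketed by a connect/banner/hello/auth/ready sequence on the way in and mark_down/shutdown_connections on the way out. A hedged sketch of reproducing such a trace for a single command, assuming a placeholder pool and object name; Ceph CLI tools generally accept config options such as debug-ms as command-line flags:]

    # Emit the same connect/ready/mark_down messenger trace for one command
    rados --debug-ms 1 -p quota-demo stat obj1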
2026-03-10T10:46:44.562 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.556+0000 7f9c93fff640 1 -- 192.168.123.104:0/3470338320 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(758..758 src has 251..758) ==== 7723+0+0 (secure 0 0 0) 0x7f9ca809d170 con 0x7f9cb4075960
2026-03-10T10:46:44.564 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.560+0000 7f9cb27fc640 1 --2- 192.168.123.104:0/3470338320 >> v2:192.168.123.104:6800/3326026257 conn(0x7f9c88077640 0x7f9c88079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:46:44.564 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.560+0000 7f9c93fff640 1 -- 192.168.123.104:0/3470338320 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f9ca8065bb0 con 0x7f9cb4075960
2026-03-10T10:46:44.564 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.560+0000 7f9cb27fc640 1 --2- 192.168.123.104:0/3470338320 >> v2:192.168.123.104:6800/3326026257 conn(0x7f9c88077640 0x7f9c88079b00 secure :-1 s=READY pgs=4254 cs=0 l=1 rev1=1 crypto rx=0x7f9c9c006fd0 tx=0x7f9c9c006ce0 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:46:44.659 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:44.656+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"} v 0) -- 0x7f9c78005480 con 0x7f9cb4075960
2026-03-10T10:46:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:44 vm07 bash[23367]: cluster 2026-03-10T10:46:42.810730+0000 mgr.y (mgr.24422) 1222 : cluster [DBG] pgmap v1655: 176 pgs: 12 unknown, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail
2026-03-10T10:46:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:44 vm07 bash[23367]: audit 2026-03-10T10:46:43.454196+0000 mon.a (mon.0) 3816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]': finished
2026-03-10T10:46:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:44 vm07 bash[23367]: cluster 2026-03-10T10:46:43.464915+0000 mon.a (mon.0) 3817 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in
2026-03-10T10:46:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:44 vm07 bash[23367]: audit 2026-03-10T10:46:43.532626+0000 mon.c (mon.2) 482 : audit [INF] from='client.? 192.168.123.104:0/1151779075' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:46:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:44 vm07 bash[23367]: audit 2026-03-10T10:46:43.533229+0000 mon.a (mon.0) 3818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:46:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:44 vm07 bash[23367]: audit 2026-03-10T10:46:43.634810+0000 mon.a (mon.0) 3819 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:46:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:44 vm04 bash[20742]: audit 2026-03-10T10:46:43.454196+0000 mon.a (mon.0) 3816 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]': finished
2026-03-10T10:46:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:44 vm04 bash[20742]: cluster 2026-03-10T10:46:43.464915+0000 mon.a (mon.0) 3817 : cluster [DBG] osdmap e757: 8 total, 8 up, 8 in
2026-03-10T10:46:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:44 vm04 bash[20742]: audit 2026-03-10T10:46:43.532626+0000 mon.c (mon.2) 482 : audit [INF] from='client.? 192.168.123.104:0/1151779075' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:46:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:44 vm04 bash[20742]: audit 2026-03-10T10:46:43.533229+0000 mon.a (mon.0) 3818 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:46:44.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:44 vm04 bash[20742]: audit 2026-03-10T10:46:43.634810+0000 mon.a (mon.0) 3819 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:46:45.767 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:45.764+0000 7f9c93fff640 1 -- 192.168.123.104:0/3470338320 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]=0 enabled application 'rados' on pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' v759) ==== 213+0+0 (secure 0 0 0) 0x7f9ca806aa60 con 0x7f9cb4075960
2026-03-10T10:46:45.826 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:45.820+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"} v 0) -- 0x7f9c78004820 con 0x7f9cb4075960
2026-03-10T10:46:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:45 vm04 bash[28289]: audit 2026-03-10T10:46:44.473889+0000 mon.a (mon.0) 3820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]': finished
2026-03-10T10:46:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:45 vm04 bash[28289]: cluster 2026-03-10T10:46:44.479807+0000 mon.a (mon.0) 3821 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in
2026-03-10T10:46:45.953 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:45 vm04 bash[28289]: audit 2026-03-10T10:46:44.661585+0000 mon.a (mon.0) 3822 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]: dispatch
2026-03-10T10:46:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:45 vm04 bash[20742]: audit 2026-03-10T10:46:44.473889+0000 mon.a (mon.0) 3820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]': finished
2026-03-10T10:46:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:45 vm04 bash[20742]: cluster 2026-03-10T10:46:44.479807+0000 mon.a (mon.0) 3821 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in
2026-03-10T10:46:45.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:45 vm04 bash[20742]: audit 2026-03-10T10:46:44.661585+0000 mon.a (mon.0) 3822 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]: dispatch
2026-03-10T10:46:46.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:45 vm07 bash[23367]: audit 2026-03-10T10:46:44.473889+0000 mon.a (mon.0) 3820 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "10"}]': finished
2026-03-10T10:46:46.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:45 vm07 bash[23367]: cluster 2026-03-10T10:46:44.479807+0000 mon.a (mon.0) 3821 : cluster [DBG] osdmap e758: 8 total, 8 up, 8 in
2026-03-10T10:46:46.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:45 vm07 bash[23367]: audit 2026-03-10T10:46:44.661585+0000 mon.a (mon.0) 3822 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]: dispatch
2026-03-10T10:46:46.867 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9c93fff640 1 -- 192.168.123.104:0/3470338320 <== mon.0 v2:192.168.123.104:3300/0 8 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]=0 enabled application 'rados' on pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' v760) ==== 213+0+0 (secure 0 0 0) 0x7f9ca805dba0 con 0x7f9cb4075960
2026-03-10T10:46:46.867 INFO:tasks.workunit.client.0.vm04.stderr:enabled application 'rados' on pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f'
2026-03-10T10:46:46.869 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 >> v2:192.168.123.104:6800/3326026257 conn(0x7f9c88077640 msgr2=0x7f9c88079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:46:46.869 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 --2- 192.168.123.104:0/3470338320 >> v2:192.168.123.104:6800/3326026257 conn(0x7f9c88077640 0x7f9c88079b00 secure :-1 s=READY pgs=4254 cs=0 l=1 rev1=1 crypto rx=0x7f9c9c006fd0 tx=0x7f9c9c006ce0 comp rx=0 tx=0).stop
2026-03-10T10:46:46.869 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9cb4075960 msgr2=0x7f9cb41a4330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:46:46.869 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 --2- 192.168.123.104:0/3470338320 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9cb4075960 0x7f9cb41a4330 secure :-1 s=READY pgs=3066 cs=0 l=1 rev1=1 crypto rx=0x7f9ca800dcf0 tx=0x7f9ca800b630 comp rx=0 tx=0).stop
2026-03-10T10:46:46.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 shutdown_connections
2026-03-10T10:46:46.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 --2- 192.168.123.104:0/3470338320 >> v2:192.168.123.104:6800/3326026257 conn(0x7f9c88077640 0x7f9c88079b00 unknown :-1 s=CLOSED pgs=4254 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:46:46.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 --2- 192.168.123.104:0/3470338320 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f9cb4113730 0x7f9cb41a8c00 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:46:46.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 --2- 192.168.123.104:0/3470338320 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f9cb4106830 0x7f9cb41a4870 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:46:46.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 --2- 192.168.123.104:0/3470338320 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f9cb4075960 0x7f9cb41a4330 unknown :-1 s=CLOSED pgs=3066 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:46:46.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 >> 192.168.123.104:0/3470338320 conn(0x7f9cb40fe640 msgr2=0x7f9cb4100360 unknown :-1 s=STATE_NONE l=0).mark_down
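[Context: the two mon_command_ack messages above confirm 'osd pool application enable' succeeded; the command is issued twice here, and each run bumps the osdmap epoch (v759, then v760). Since Luminous, every pool is expected to carry an application tag, otherwise the cluster reports POOL_APP_NOT_ENABLED. A hedged one-liner to verify the tag just set, using the same placeholder pool name as the sketch above:]

    # Show the application tag(s) on the pool; expect {"rados": {}}
    ceph osd pool application get quota-demo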
2026-03-10T10:46:46.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 shutdown_connections
2026-03-10T10:46:46.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:46:46.864+0000 7f9cb97db640 1 -- 192.168.123.104:0/3470338320 wait complete.
2026-03-10T10:46:46.883 INFO:tasks.workunit.client.0.vm04.stderr:+ seq 1 10
2026-03-10T10:46:46.884 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put obj1 /etc/passwd
2026-03-10T10:46:46.953 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put obj2 /etc/passwd
2026-03-10T10:46:47.016 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put obj3 /etc/passwd
2026-03-10T10:46:47.071 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put obj4 /etc/passwd
2026-03-10T10:46:47.121 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put obj5 /etc/passwd
2026-03-10T10:46:47.150 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put obj6 /etc/passwd
2026-03-10T10:46:47.177 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put obj7 /etc/passwd
2026-03-10T10:46:47.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:46 vm04 bash[28289]: cluster 2026-03-10T10:46:44.811075+0000 mgr.y (mgr.24422) 1223 : cluster [DBG] pgmap v1658: 176 pgs: 12 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:46:47.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:46 vm04 bash[28289]: audit 2026-03-10T10:46:45.769069+0000 mon.a (mon.0) 3823 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]': finished
2026-03-10T10:46:47.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:46 vm04 bash[28289]: cluster 2026-03-10T10:46:45.776961+0000 mon.a (mon.0) 3824 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in
2026-03-10T10:46:47.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:46 vm04 bash[28289]: audit 2026-03-10T10:46:45.827812+0000 mon.a (mon.0) 3825 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]: dispatch
2026-03-10T10:46:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:46 vm04 bash[20742]: cluster 2026-03-10T10:46:44.811075+0000 mgr.y (mgr.24422) 1223 : cluster [DBG] pgmap v1658: 176 pgs: 12 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:46:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:46 vm04 bash[20742]: audit 2026-03-10T10:46:45.769069+0000 mon.a (mon.0) 3823 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]': finished
2026-03-10T10:46:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:46 vm04 bash[20742]: cluster 2026-03-10T10:46:45.776961+0000 mon.a (mon.0) 3824 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in
2026-03-10T10:46:47.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:46 vm04 bash[20742]: audit 2026-03-10T10:46:45.827812+0000 mon.a (mon.0) 3825 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]: dispatch
2026-03-10T10:46:47.208 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put obj8 /etc/passwd
2026-03-10T10:46:47.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:46 vm07 bash[23367]: cluster 2026-03-10T10:46:44.811075+0000 mgr.y (mgr.24422) 1223 : cluster [DBG] pgmap v1658: 176 pgs: 12 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:46:47.270 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:46 vm07 bash[23367]: audit 2026-03-10T10:46:45.769069+0000 mon.a (mon.0) 3823 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]': finished
2026-03-10T10:46:47.270 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:46 vm07 bash[23367]: cluster 2026-03-10T10:46:45.776961+0000 mon.a (mon.0) 3824 : cluster [DBG] osdmap e759: 8 total, 8 up, 8 in
2026-03-10T10:46:47.270 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:46 vm07 bash[23367]: audit 2026-03-10T10:46:45.827812+0000 mon.a (mon.0) 3825 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]: dispatch
2026-03-10T10:46:47.288 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put obj9 /etc/passwd
2026-03-10T10:46:47.339 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put obj10 /etc/passwd
2026-03-10T10:46:47.391 INFO:tasks.workunit.client.0.vm04.stderr:+ sleep 30
2026-03-10T10:46:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:47 vm04 bash[28289]: cluster 2026-03-10T10:46:46.811354+0000 mgr.y (mgr.24422) 1224 : cluster [DBG] pgmap v1660: 176 pgs: 12 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T10:46:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:47 vm04 bash[28289]: audit 2026-03-10T10:46:46.868712+0000 mon.a (mon.0) 3826 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]': finished
2026-03-10T10:46:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:47 vm04 bash[28289]: cluster 2026-03-10T10:46:46.873054+0000 mon.a (mon.0) 3827 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in
2026-03-10T10:46:48.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:47 vm04 bash[28289]: cluster 2026-03-10T10:46:47.748301+0000 mon.a (mon.0) 3828 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:46:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:47 vm04 bash[20742]: cluster 2026-03-10T10:46:46.811354+0000 mgr.y (mgr.24422) 1224 : cluster [DBG] pgmap v1660: 176 pgs: 12 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T10:46:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:47 vm04 bash[20742]: audit 2026-03-10T10:46:46.868712+0000 mon.a (mon.0) 3826 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]': finished
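[Context: with obj1 through obj10 written, the pool now sits exactly at its max_objects quota. The script's 'sleep 30' is there because pool quotas are not enforced synchronously on write; the mon evaluates them from periodically reported PG stats, so the full flag appears with a delay. A hedged check of the quota state, again with the placeholder pool name:]

    # Show the configured quota and count the objects actually stored
    ceph osd pool get-quota quota-demo      # expect: max objects: 10
    rados -p quota-demo ls | wc -l          # expect: 10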
2026-03-10T10:46:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:47 vm04 bash[20742]: cluster 2026-03-10T10:46:46.873054+0000 mon.a (mon.0) 3827 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in
2026-03-10T10:46:48.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:47 vm04 bash[20742]: cluster 2026-03-10T10:46:47.748301+0000 mon.a (mon.0) 3828 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:46:48.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:47 vm07 bash[23367]: cluster 2026-03-10T10:46:46.811354+0000 mgr.y (mgr.24422) 1224 : cluster [DBG] pgmap v1660: 176 pgs: 12 creating+peering, 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T10:46:48.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:47 vm07 bash[23367]: audit 2026-03-10T10:46:46.868712+0000 mon.a (mon.0) 3826 : audit [INF] from='client.? 192.168.123.104:0/3470338320' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "app": "rados"}]': finished
2026-03-10T10:46:48.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:47 vm07 bash[23367]: cluster 2026-03-10T10:46:46.873054+0000 mon.a (mon.0) 3827 : cluster [DBG] osdmap e760: 8 total, 8 up, 8 in
2026-03-10T10:46:48.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:47 vm07 bash[23367]: cluster 2026-03-10T10:46:47.748301+0000 mon.a (mon.0) 3828 : cluster [WRN] Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T10:46:50.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:46:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:46:50.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:49 vm07 bash[23367]: cluster 2026-03-10T10:46:48.812101+0000 mgr.y (mgr.24422) 1225 : cluster [DBG] pgmap v1662: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 955 B/s rd, 5.6 KiB/s wr, 2 op/s
2026-03-10T10:46:50.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:49 vm04 bash[28289]: cluster 2026-03-10T10:46:48.812101+0000 mgr.y (mgr.24422) 1225 : cluster [DBG] pgmap v1662: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 955 B/s rd, 5.6 KiB/s wr, 2 op/s
2026-03-10T10:46:50.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:49 vm04 bash[20742]: cluster 2026-03-10T10:46:48.812101+0000 mgr.y (mgr.24422) 1225 : cluster [DBG] pgmap v1662: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 955 B/s rd, 5.6 KiB/s wr, 2 op/s
2026-03-10T10:46:51.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:50 vm04 bash[28289]: audit 2026-03-10T10:46:49.728046+0000 mgr.y (mgr.24422) 1226 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:46:51.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:50 vm04 bash[20742]: audit 2026-03-10T10:46:49.728046+0000 mgr.y (mgr.24422) 1226 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:46:51.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:50 vm07 bash[23367]: audit 2026-03-10T10:46:49.728046+0000 mgr.y (mgr.24422) 1226 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:46:52.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:51 vm04 bash[28289]: cluster 2026-03-10T10:46:50.812395+0000 mgr.y (mgr.24422) 1227 : cluster [DBG] pgmap v1663: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 807 B/s rd, 4.7 KiB/s wr, 2 op/s
2026-03-10T10:46:52.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:51 vm04 bash[20742]: cluster 2026-03-10T10:46:50.812395+0000 mgr.y (mgr.24422) 1227 : cluster [DBG] pgmap v1663: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 807 B/s rd, 4.7 KiB/s wr, 2 op/s
2026-03-10T10:46:52.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:51 vm07 bash[23367]: cluster 2026-03-10T10:46:50.812395+0000 mgr.y (mgr.24422) 1227 : cluster [DBG] pgmap v1663: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 807 B/s rd, 4.7 KiB/s wr, 2 op/s
2026-03-10T10:46:53.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:52 vm04 bash[28289]: cluster 2026-03-10T10:46:52.749291+0000 mon.a (mon.0) 3829 : cluster [WRN] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' is full (reached quota's max_objects: 10)
2026-03-10T10:46:53.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:52 vm04 bash[28289]: cluster 2026-03-10T10:46:52.749470+0000 mon.a (mon.0) 3830 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T10:46:53.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:52 vm04 bash[28289]: cluster 2026-03-10T10:46:52.756834+0000 mon.a (mon.0) 3831 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in
2026-03-10T10:46:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:52 vm04 bash[20742]: cluster 2026-03-10T10:46:52.749291+0000 mon.a (mon.0) 3829 : cluster [WRN] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' is full (reached quota's max_objects: 10)
2026-03-10T10:46:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:52 vm04 bash[20742]: cluster 2026-03-10T10:46:52.749470+0000 mon.a (mon.0) 3830 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T10:46:53.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:52 vm04 bash[20742]: cluster 2026-03-10T10:46:52.756834+0000 mon.a (mon.0) 3831 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in
2026-03-10T10:46:53.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:46:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:46:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:46:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:52 vm07 bash[23367]: cluster 2026-03-10T10:46:52.749291+0000 mon.a (mon.0) 3829 : cluster [WRN] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' is full (reached quota's max_objects: 10)
2026-03-10T10:46:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:52 vm07 bash[23367]: cluster 2026-03-10T10:46:52.749470+0000 mon.a (mon.0) 3830 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T10:46:53.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:52 vm07 bash[23367]: cluster 2026-03-10T10:46:52.756834+0000 mon.a (mon.0) 3831 : cluster [DBG] osdmap e761: 8 total, 8 up, 8 in
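[Context: the WRN lines above are the payoff of the test. Once the mon sees the pool's object count reach its quota, it flags the pool full in the osdmap (hence the epoch bump to e761) and raises the POOL_FULL health check; further writes to the pool are then typically blocked at the client rather than failed, unless the client explicitly opts into an immediate error. A hedged sketch of observing this state, with the same placeholder pool name:]

    # The health check names the offending pool once the quota trips
    ceph health detail | grep -A1 POOL_FULL
    # An eleventh write would now stall rather than return:
    # rados -p quota-demo put obj11 /etc/passwd   # blocks until the quota is raised or objects are removed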
2026-03-10T10:46:54.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:53 vm04 bash[28289]: cluster 2026-03-10T10:46:52.812673+0000 mgr.y (mgr.24422) 1228 : cluster [DBG] pgmap v1665: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 727 B/s rd, 4.3 KiB/s wr, 2 op/s
2026-03-10T10:46:54.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:53 vm04 bash[20742]: cluster 2026-03-10T10:46:52.812673+0000 mgr.y (mgr.24422) 1228 : cluster [DBG] pgmap v1665: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 727 B/s rd, 4.3 KiB/s wr, 2 op/s
2026-03-10T10:46:54.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:53 vm07 bash[23367]: cluster 2026-03-10T10:46:52.812673+0000 mgr.y (mgr.24422) 1228 : cluster [DBG] pgmap v1665: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 727 B/s rd, 4.3 KiB/s wr, 2 op/s
2026-03-10T10:46:56.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:55 vm04 bash[28289]: cluster 2026-03-10T10:46:54.813228+0000 mgr.y (mgr.24422) 1229 : cluster [DBG] pgmap v1666: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s
2026-03-10T10:46:56.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:55 vm04 bash[20742]: cluster 2026-03-10T10:46:54.813228+0000 mgr.y (mgr.24422) 1229 : cluster [DBG] pgmap v1666: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s
2026-03-10T10:46:56.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:55 vm07 bash[23367]: cluster 2026-03-10T10:46:54.813228+0000 mgr.y (mgr.24422) 1229 : cluster [DBG] pgmap v1666: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 3.7 KiB/s wr, 2 op/s
2026-03-10T10:46:58.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:57 vm04 bash[28289]: cluster 2026-03-10T10:46:56.813561+0000 mgr.y (mgr.24422) 1230 : cluster [DBG] pgmap v1667: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 3.0 KiB/s wr, 2 op/s
2026-03-10T10:46:58.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:57 vm04 bash[28289]: audit 2026-03-10T10:46:57.527061+0000 mon.a (mon.0) 3832 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:46:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:57 vm04 bash[20742]: cluster 2026-03-10T10:46:56.813561+0000 mgr.y (mgr.24422) 1230 : cluster [DBG] pgmap v1667: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 3.0 KiB/s wr, 2 op/s
2026-03-10T10:46:58.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:57 vm04 bash[20742]: audit 2026-03-10T10:46:57.527061+0000 mon.a (mon.0) 3832 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:46:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:57 vm07 bash[23367]: cluster 2026-03-10T10:46:56.813561+0000 mgr.y (mgr.24422) 1230 : cluster [DBG] pgmap v1667: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 3.0 KiB/s wr, 2 op/s
2026-03-10T10:46:58.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:57 vm07 bash[23367]: audit 2026-03-10T10:46:57.527061+0000 mon.a (mon.0) 3832 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:46:59.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:59 vm04 bash[28289]: audit 2026-03-10T10:46:58.646784+0000 mon.a (mon.0) 3833 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:46:59.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:59 vm04 bash[28289]: audit 2026-03-10T10:46:58.647709+0000 mon.a (mon.0) 3834 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:46:59.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:46:59 vm04 bash[28289]: cluster 2026-03-10T10:46:58.814137+0000 mgr.y (mgr.24422) 1231 : cluster [DBG] pgmap v1668: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:46:59.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:59 vm04 bash[20742]: audit 2026-03-10T10:46:58.646784+0000 mon.a (mon.0) 3833 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:46:59.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:59 vm04 bash[20742]: audit 2026-03-10T10:46:58.647709+0000 mon.a (mon.0) 3834 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:46:59.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:46:59 vm04 bash[20742]: cluster 2026-03-10T10:46:58.814137+0000 mgr.y (mgr.24422) 1231 : cluster [DBG] pgmap v1668: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:00.016 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:46:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:47:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:59 vm07 bash[23367]: audit 2026-03-10T10:46:58.646784+0000 mon.a (mon.0) 3833 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:47:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:59 vm07 bash[23367]: audit 2026-03-10T10:46:58.647709+0000 mon.a (mon.0) 3834 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:47:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:46:59 vm07 bash[23367]: cluster 2026-03-10T10:46:58.814137+0000 mgr.y (mgr.24422) 1231 : cluster [DBG] pgmap v1668: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:00.784 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:00 vm04 bash[28289]: audit 2026-03-10T10:46:59.732175+0000 mgr.y (mgr.24422) 1232 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:00.785 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:00 vm04 bash[20742]: audit 2026-03-10T10:46:59.732175+0000 mgr.y (mgr.24422) 1232 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:00 vm07 bash[23367]: audit 2026-03-10T10:46:59.732175+0000 mgr.y (mgr.24422) 1232 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:02.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:01 vm04 bash[28289]: cluster 2026-03-10T10:47:00.814401+0000 mgr.y (mgr.24422) 1233 : cluster [DBG] pgmap v1669: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:01 vm04 bash[20742]: cluster 2026-03-10T10:47:00.814401+0000 mgr.y (mgr.24422) 1233 : cluster [DBG] pgmap v1669: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:01 vm07 bash[23367]: cluster 2026-03-10T10:47:00.814401+0000 mgr.y (mgr.24422) 1233 : cluster [DBG] pgmap v1669: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:03.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:47:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:47:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:02.722566+0000 mon.a (mon.0) 3835 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:02.730933+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: cluster 2026-03-10T10:47:02.814717+0000 mgr.y (mgr.24422) 1234 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1017 B/s rd, 0 op/s
2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:02.902600+0000 mon.a (mon.0)
3837 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:02.902600+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:02.908380+0000 mon.a (mon.0) 3838 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:02.908380+0000 mon.a (mon.0) 3838 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:03.235284+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:03.235284+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:03.235972+0000 mon.a (mon.0) 3840 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:03.235972+0000 mon.a (mon.0) 3840 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:03.240798+0000 mon.a (mon.0) 3841 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:03 vm07 bash[23367]: audit 2026-03-10T10:47:03.240798+0000 mon.a (mon.0) 3841 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:02.722566+0000 mon.a (mon.0) 3835 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:02.722566+0000 mon.a (mon.0) 3835 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:02.730933+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:02.730933+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: cluster 2026-03-10T10:47:02.814717+0000 mgr.y (mgr.24422) 1234 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB 
avail; 1017 B/s rd, 0 op/s 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: cluster 2026-03-10T10:47:02.814717+0000 mgr.y (mgr.24422) 1234 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1017 B/s rd, 0 op/s 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:02.902600+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:02.902600+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:02.908380+0000 mon.a (mon.0) 3838 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:02.908380+0000 mon.a (mon.0) 3838 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:03.235284+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:03.235284+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:03.235972+0000 mon.a (mon.0) 3840 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:03.235972+0000 mon.a (mon.0) 3840 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:03.240798+0000 mon.a (mon.0) 3841 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:03 vm04 bash[28289]: audit 2026-03-10T10:47:03.240798+0000 mon.a (mon.0) 3841 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:02.722566+0000 mon.a (mon.0) 3835 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:02.722566+0000 mon.a (mon.0) 3835 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:02.730933+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 
INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:02.730933+0000 mon.a (mon.0) 3836 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: cluster 2026-03-10T10:47:02.814717+0000 mgr.y (mgr.24422) 1234 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1017 B/s rd, 0 op/s 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: cluster 2026-03-10T10:47:02.814717+0000 mgr.y (mgr.24422) 1234 : cluster [DBG] pgmap v1670: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1017 B/s rd, 0 op/s 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:02.902600+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:02.902600+0000 mon.a (mon.0) 3837 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:02.908380+0000 mon.a (mon.0) 3838 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:02.908380+0000 mon.a (mon.0) 3838 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:03.235284+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:03.235284+0000 mon.a (mon.0) 3839 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:03.235972+0000 mon.a (mon.0) 3840 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:03.235972+0000 mon.a (mon.0) 3840 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:03.240798+0000 mon.a (mon.0) 3841 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:03 vm04 bash[20742]: audit 2026-03-10T10:47:03.240798+0000 mon.a (mon.0) 3841 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:47:06.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:05 vm04 bash[28289]: cluster 2026-03-10T10:47:04.815443+0000 mgr.y (mgr.24422) 1235 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 
159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:06.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:05 vm04 bash[28289]: cluster 2026-03-10T10:47:04.815443+0000 mgr.y (mgr.24422) 1235 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:05 vm04 bash[20742]: cluster 2026-03-10T10:47:04.815443+0000 mgr.y (mgr.24422) 1235 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:05 vm04 bash[20742]: cluster 2026-03-10T10:47:04.815443+0000 mgr.y (mgr.24422) 1235 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:05 vm07 bash[23367]: cluster 2026-03-10T10:47:04.815443+0000 mgr.y (mgr.24422) 1235 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:05 vm07 bash[23367]: cluster 2026-03-10T10:47:04.815443+0000 mgr.y (mgr.24422) 1235 : cluster [DBG] pgmap v1671: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:08.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:07 vm04 bash[28289]: cluster 2026-03-10T10:47:06.815790+0000 mgr.y (mgr.24422) 1236 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:08.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:07 vm04 bash[28289]: cluster 2026-03-10T10:47:06.815790+0000 mgr.y (mgr.24422) 1236 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:07 vm04 bash[20742]: cluster 2026-03-10T10:47:06.815790+0000 mgr.y (mgr.24422) 1236 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:08.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:07 vm04 bash[20742]: cluster 2026-03-10T10:47:06.815790+0000 mgr.y (mgr.24422) 1236 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:08.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:07 vm07 bash[23367]: cluster 2026-03-10T10:47:06.815790+0000 mgr.y (mgr.24422) 1236 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:08.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:07 vm07 bash[23367]: cluster 2026-03-10T10:47:06.815790+0000 mgr.y (mgr.24422) 1236 : cluster [DBG] pgmap v1672: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:10.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:47:09 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:47:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:09 vm07 bash[23367]: cluster 2026-03-10T10:47:08.816332+0000 mgr.y (mgr.24422) 1237 : cluster [DBG] 
pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:10.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:09 vm07 bash[23367]: cluster 2026-03-10T10:47:08.816332+0000 mgr.y (mgr.24422) 1237 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:10.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:09 vm04 bash[28289]: cluster 2026-03-10T10:47:08.816332+0000 mgr.y (mgr.24422) 1237 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:10.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:09 vm04 bash[28289]: cluster 2026-03-10T10:47:08.816332+0000 mgr.y (mgr.24422) 1237 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:10.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:09 vm04 bash[20742]: cluster 2026-03-10T10:47:08.816332+0000 mgr.y (mgr.24422) 1237 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:10.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:09 vm04 bash[20742]: cluster 2026-03-10T10:47:08.816332+0000 mgr.y (mgr.24422) 1237 : cluster [DBG] pgmap v1673: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:11.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:10 vm04 bash[28289]: audit 2026-03-10T10:47:09.744690+0000 mgr.y (mgr.24422) 1238 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:11.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:10 vm04 bash[28289]: audit 2026-03-10T10:47:09.744690+0000 mgr.y (mgr.24422) 1238 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:10 vm04 bash[20742]: audit 2026-03-10T10:47:09.744690+0000 mgr.y (mgr.24422) 1238 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:11.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:10 vm04 bash[20742]: audit 2026-03-10T10:47:09.744690+0000 mgr.y (mgr.24422) 1238 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:11.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:10 vm07 bash[23367]: audit 2026-03-10T10:47:09.744690+0000 mgr.y (mgr.24422) 1238 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:11.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:10 vm07 bash[23367]: audit 2026-03-10T10:47:09.744690+0000 mgr.y (mgr.24422) 1238 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:12.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:11 vm04 bash[28289]: cluster 2026-03-10T10:47:10.816679+0000 mgr.y (mgr.24422) 1239 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-10T10:47:12.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:11 vm04 bash[28289]: cluster 2026-03-10T10:47:10.816679+0000 mgr.y (mgr.24422) 1239 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:12.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:11 vm04 bash[20742]: cluster 2026-03-10T10:47:10.816679+0000 mgr.y (mgr.24422) 1239 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:12.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:11 vm04 bash[20742]: cluster 2026-03-10T10:47:10.816679+0000 mgr.y (mgr.24422) 1239 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:11 vm07 bash[23367]: cluster 2026-03-10T10:47:10.816679+0000 mgr.y (mgr.24422) 1239 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:11 vm07 bash[23367]: cluster 2026-03-10T10:47:10.816679+0000 mgr.y (mgr.24422) 1239 : cluster [DBG] pgmap v1674: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:13.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:47:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:47:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:47:14.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:13 vm04 bash[28289]: cluster 2026-03-10T10:47:12.817111+0000 mgr.y (mgr.24422) 1240 : cluster [DBG] pgmap v1675: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:14.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:13 vm04 bash[28289]: cluster 2026-03-10T10:47:12.817111+0000 mgr.y (mgr.24422) 1240 : cluster [DBG] pgmap v1675: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:13 vm04 bash[28289]: audit 2026-03-10T10:47:13.653641+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:47:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:13 vm04 bash[28289]: audit 2026-03-10T10:47:13.653641+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:47:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:13 vm04 bash[20742]: cluster 2026-03-10T10:47:12.817111+0000 mgr.y (mgr.24422) 1240 : cluster [DBG] pgmap v1675: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:13 vm04 bash[20742]: cluster 2026-03-10T10:47:12.817111+0000 mgr.y (mgr.24422) 1240 : cluster [DBG] pgmap v1675: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:13 vm04 bash[20742]: audit 2026-03-10T10:47:13.653641+0000 mon.a (mon.0) 3842 
: audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:47:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:13 vm04 bash[20742]: audit 2026-03-10T10:47:13.653641+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:47:14.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:13 vm07 bash[23367]: cluster 2026-03-10T10:47:12.817111+0000 mgr.y (mgr.24422) 1240 : cluster [DBG] pgmap v1675: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:14.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:13 vm07 bash[23367]: cluster 2026-03-10T10:47:12.817111+0000 mgr.y (mgr.24422) 1240 : cluster [DBG] pgmap v1675: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:14.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:13 vm07 bash[23367]: audit 2026-03-10T10:47:13.653641+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:47:14.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:13 vm07 bash[23367]: audit 2026-03-10T10:47:13.653641+0000 mon.a (mon.0) 3842 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:47:16.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:15 vm04 bash[28289]: cluster 2026-03-10T10:47:14.817864+0000 mgr.y (mgr.24422) 1241 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:16.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:15 vm04 bash[28289]: cluster 2026-03-10T10:47:14.817864+0000 mgr.y (mgr.24422) 1241 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:16.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:15 vm04 bash[20742]: cluster 2026-03-10T10:47:14.817864+0000 mgr.y (mgr.24422) 1241 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:16.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:15 vm04 bash[20742]: cluster 2026-03-10T10:47:14.817864+0000 mgr.y (mgr.24422) 1241 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:15 vm07 bash[23367]: cluster 2026-03-10T10:47:14.817864+0000 mgr.y (mgr.24422) 1241 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:15 vm07 bash[23367]: cluster 2026-03-10T10:47:14.817864+0000 mgr.y (mgr.24422) 1241 : cluster [DBG] pgmap v1676: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:17.393 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=131262 2026-03-10T10:47:17.393 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool set-quota 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f max_objects 
100 2026-03-10T10:47:17.394 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put onemore /etc/passwd 2026-03-10T10:47:17.457 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.452+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/1730062814 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb6f41057d0 msgr2=0x7fb6f4109820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:47:17.457 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.452+0000 7fb6fb1cd640 1 --2- 192.168.123.104:0/1730062814 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb6f41057d0 0x7fb6f4109820 secure :-1 s=READY pgs=3081 cs=0 l=1 rev1=1 crypto rx=0x7fb6e4009a30 tx=0x7fb6e401c900 comp rx=0 tx=0).stop 2026-03-10T10:47:17.457 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.452+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/1730062814 shutdown_connections 2026-03-10T10:47:17.457 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.452+0000 7fb6fb1cd640 1 --2- 192.168.123.104:0/1730062814 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb6f4109f50 0x7fb6f4111ad0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:17.457 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.452+0000 7fb6fb1cd640 1 --2- 192.168.123.104:0/1730062814 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb6f41057d0 0x7fb6f4109820 unknown :-1 s=CLOSED pgs=3081 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:17.457 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.452+0000 7fb6fb1cd640 1 --2- 192.168.123.104:0/1730062814 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb6f4104e20 0x7fb6f4105200 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:17.457 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.452+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/1730062814 >> 192.168.123.104:0/1730062814 conn(0x7fb6f4100880 msgr2=0x7fb6f4102ca0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:47:17.457 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.452+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/1730062814 shutdown_connections 2026-03-10T10:47:17.458 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.452+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/1730062814 wait complete. 
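For reference, the steps rados/test_pool_quota.sh is exercising at this point can be reproduced by hand against any cluster. A minimal sketch, assuming a scratch pool named quota-test (the workunit itself operates on a randomly generated UUID pool name, as seen above):

# create a small scratch pool and cap it at 100 objects
ceph osd pool create quota-test 8
ceph osd pool set-quota quota-test max_objects 100
# writes keep succeeding until the quota trips, after which the
# cluster reports POOL_FULL for the pool
rados -p quota-test put onemore /etc/passwd
ceph health detail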
2026-03-10T10:47:17.458 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6fb1cd640 1 Processor -- start 2026-03-10T10:47:17.458 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6fb1cd640 1 -- start start 2026-03-10T10:47:17.458 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6fb1cd640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb6f4104e20 0x7fb6f419f140 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6fb1cd640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb6f41057d0 0x7fb6f419f680 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6fb1cd640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb6f4109f50 0x7fb6f41a3a10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6fb1cd640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fb6f4116c10 con 0x7fb6f41057d0 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6fb1cd640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fb6f4116a90 con 0x7fb6f4104e20 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6fb1cd640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fb6f4116d90 con 0x7fb6f4109f50 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6ebfff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb6f41057d0 0x7fb6f419f680 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6ebfff640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb6f41057d0 0x7fb6f419f680 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:51574/0 (socket says 192.168.123.104:51574) 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6ebfff640 1 -- 192.168.123.104:0/2229795566 learned_addr learned my addr 192.168.123.104:0/2229795566 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6ebfff640 1 -- 192.168.123.104:0/2229795566 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb6f4109f50 msgr2=0x7fb6f41a3a10 unknown :-1 s=STATE_CONNECTING l=1).mark_down 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6f8f42640 1 --2- 192.168.123.104:0/2229795566 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb6f4104e20 0x7fb6f419f140 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 
7fb6f9743640 1 --2- 192.168.123.104:0/2229795566 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb6f4109f50 0x7fb6f41a3a10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6ebfff640 1 --2- 192.168.123.104:0/2229795566 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb6f4109f50 0x7fb6f41a3a10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6ebfff640 1 -- 192.168.123.104:0/2229795566 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb6f4104e20 msgr2=0x7fb6f419f140 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6ebfff640 1 --2- 192.168.123.104:0/2229795566 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb6f4104e20 0x7fb6f419f140 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6ebfff640 1 -- 192.168.123.104:0/2229795566 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb6f41a40f0 con 0x7fb6f41057d0 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6f9743640 1 --2- 192.168.123.104:0/2229795566 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb6f4109f50 0x7fb6f41a3a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
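The client-side messenger trace above is the standard msgr2 session bootstrap: banner and hello exchange, authentication, then READY, followed by mon_subscribe requests for the monmap, config, mgrmap and osdmap. With messenger debugging at level 1, which this client is running with, every CLI invocation emits the same handshake; a minimal sketch for capturing it on demand (the command and filter are illustrative):

# trace the msgr2 bootstrap of a single CLI call and keep only the
# handshake and subscription lines
ceph --debug-ms 1 osd pool ls 2>&1 | grep -E 'BANNER|HELLO|AUTH|mon_subscribe'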
2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6ebfff640 1 --2- 192.168.123.104:0/2229795566 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb6f41057d0 0x7fb6f419f680 secure :-1 s=READY pgs=3082 cs=0 l=1 rev1=1 crypto rx=0x7fb6e401cde0 tx=0x7fb6e4005e60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb6e4004280 con 0x7fb6f41057d0 2026-03-10T10:47:17.459 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/2229795566 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb6f41a4380 con 0x7fb6f41057d0 2026-03-10T10:47:17.460 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/2229795566 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fb6f41abcc0 con 0x7fb6f41057d0 2026-03-10T10:47:17.460 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fb6e4004420 con 0x7fb6f41057d0 2026-03-10T10:47:17.460 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb6e40aeb00 con 0x7fb6f41057d0 2026-03-10T10:47:17.461 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.456+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fb6e40aeca0 con 0x7fb6f41057d0 2026-03-10T10:47:17.462 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.460+0000 7fb6e9ffb640 1 --2- 192.168.123.104:0/2229795566 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb6c80777a0 0x7fb6c8079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:47:17.462 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.460+0000 7fb6f8f42640 1 --2- 192.168.123.104:0/2229795566 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb6c80777a0 0x7fb6c8079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:47:17.462 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.460+0000 7fb6f8f42640 1 --2- 192.168.123.104:0/2229795566 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb6c80777a0 0x7fb6c8079c60 secure :-1 s=READY pgs=4266 cs=0 l=1 rev1=1 crypto rx=0x7fb6dc008ab0 tx=0x7fb6dc005dd0 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:47:17.462 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.460+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(761..761 src has 251..761) ==== 7736+0+0 (secure 0 0 0) 0x7fb6e4133b10 con 0x7fb6f41057d0 2026-03-10T10:47:17.462 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.460+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=762}) -- 
0x7fb6c8082cf0 con 0x7fb6f41057d0 2026-03-10T10:47:17.462 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.460+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/2229795566 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb6f4106300 con 0x7fb6f41057d0 2026-03-10T10:47:17.465 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.460+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fb6e40bd050 con 0x7fb6f41057d0 2026-03-10T10:47:17.552 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.548+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/2229795566 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"} v 0) -- 0x7fb6f41a04a0 con 0x7fb6f41057d0 2026-03-10T10:47:17.961 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.956+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v762) ==== 225+0+0 (secure 0 0 0) 0x7fb6e4016610 con 0x7fb6f41057d0 2026-03-10T10:47:17.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.968+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 <== mon.0 v2:192.168.123.104:3300/0 8 ==== osd_map(762..762 src has 251..762) ==== 628+0+0 (secure 0 0 0) 0x7fb6e40f8350 con 0x7fb6f41057d0 2026-03-10T10:47:17.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:17.968+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=763}) -- 0x7fb6c8083c80 con 0x7fb6f41057d0 2026-03-10T10:47:18.016 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.012+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/2229795566 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"} v 0) -- 0x7fb6f410bbf0 con 0x7fb6f41057d0 2026-03-10T10:47:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:17 vm07 bash[23367]: cluster 2026-03-10T10:47:16.818223+0000 mgr.y (mgr.24422) 1242 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:17 vm07 bash[23367]: cluster 2026-03-10T10:47:16.818223+0000 mgr.y (mgr.24422) 1242 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:17 vm07 bash[23367]: audit 2026-03-10T10:47:17.555091+0000 mon.a (mon.0) 3843 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:17 vm07 bash[23367]: audit 2026-03-10T10:47:17.555091+0000 mon.a (mon.0) 3843 : audit [INF] from='client.? 
192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:18.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:17 vm04 bash[28289]: cluster 2026-03-10T10:47:16.818223+0000 mgr.y (mgr.24422) 1242 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:18.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:17 vm04 bash[28289]: cluster 2026-03-10T10:47:16.818223+0000 mgr.y (mgr.24422) 1242 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:18.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:17 vm04 bash[28289]: audit 2026-03-10T10:47:17.555091+0000 mon.a (mon.0) 3843 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:18.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:17 vm04 bash[28289]: audit 2026-03-10T10:47:17.555091+0000 mon.a (mon.0) 3843 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:17 vm04 bash[20742]: cluster 2026-03-10T10:47:16.818223+0000 mgr.y (mgr.24422) 1242 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:17 vm04 bash[20742]: cluster 2026-03-10T10:47:16.818223+0000 mgr.y (mgr.24422) 1242 : cluster [DBG] pgmap v1677: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:17 vm04 bash[20742]: audit 2026-03-10T10:47:17.555091+0000 mon.a (mon.0) 3843 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:17 vm04 bash[20742]: audit 2026-03-10T10:47:17.555091+0000 mon.a (mon.0) 3843 : audit [INF] from='client.? 
192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:18.971 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6e9ffb640 1 -- 192.168.123.104:0/2229795566 <== mon.0 v2:192.168.123.104:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]=0 set-quota max_objects = 100 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v763) ==== 225+0+0 (secure 0 0 0) 0x7fb6e4100360 con 0x7fb6f41057d0 2026-03-10T10:47:18.971 INFO:tasks.workunit.client.0.vm04.stderr:set-quota max_objects = 100 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/2229795566 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb6c80777a0 msgr2=0x7fb6c8079c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 --2- 192.168.123.104:0/2229795566 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb6c80777a0 0x7fb6c8079c60 secure :-1 s=READY pgs=4266 cs=0 l=1 rev1=1 crypto rx=0x7fb6dc008ab0 tx=0x7fb6dc005dd0 comp rx=0 tx=0).stop 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/2229795566 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb6f41057d0 msgr2=0x7fb6f419f680 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 --2- 192.168.123.104:0/2229795566 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb6f41057d0 0x7fb6f419f680 secure :-1 s=READY pgs=3082 cs=0 l=1 rev1=1 crypto rx=0x7fb6e401cde0 tx=0x7fb6e4005e60 comp rx=0 tx=0).stop 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/2229795566 shutdown_connections 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 --2- 192.168.123.104:0/2229795566 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb6c80777a0 0x7fb6c8079c60 unknown :-1 s=CLOSED pgs=4266 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 --2- 192.168.123.104:0/2229795566 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb6f4109f50 0x7fb6f41a3a10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 --2- 192.168.123.104:0/2229795566 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb6f41057d0 0x7fb6f419f680 unknown :-1 s=CLOSED pgs=3082 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 --2- 192.168.123.104:0/2229795566 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb6f4104e20 0x7fb6f419f140 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 
-- 192.168.123.104:0/2229795566 >> 192.168.123.104:0/2229795566 conn(0x7fb6f4100880 msgr2=0x7fb6f4100e00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:47:18.973 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/2229795566 shutdown_connections 2026-03-10T10:47:18.974 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:18.968+0000 7fb6fb1cd640 1 -- 192.168.123.104:0/2229795566 wait complete. 2026-03-10T10:47:18.987 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 131262 2026-03-10T10:47:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:18 vm07 bash[23367]: audit 2026-03-10T10:47:17.962663+0000 mon.a (mon.0) 3844 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:18 vm07 bash[23367]: audit 2026-03-10T10:47:17.962663+0000 mon.a (mon.0) 3844 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:18 vm07 bash[23367]: cluster 2026-03-10T10:47:17.976802+0000 mon.a (mon.0) 3845 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T10:47:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:18 vm07 bash[23367]: cluster 2026-03-10T10:47:17.976802+0000 mon.a (mon.0) 3845 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T10:47:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:18 vm07 bash[23367]: audit 2026-03-10T10:47:18.018855+0000 mon.a (mon.0) 3846 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:19.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:18 vm07 bash[23367]: audit 2026-03-10T10:47:18.018855+0000 mon.a (mon.0) 3846 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:19.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:18 vm04 bash[28289]: audit 2026-03-10T10:47:17.962663+0000 mon.a (mon.0) 3844 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:19.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:18 vm04 bash[28289]: audit 2026-03-10T10:47:17.962663+0000 mon.a (mon.0) 3844 : audit [INF] from='client.? 
192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:18 vm04 bash[28289]: cluster 2026-03-10T10:47:17.976802+0000 mon.a (mon.0) 3845 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T10:47:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:18 vm04 bash[28289]: cluster 2026-03-10T10:47:17.976802+0000 mon.a (mon.0) 3845 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T10:47:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:18 vm04 bash[28289]: audit 2026-03-10T10:47:18.018855+0000 mon.a (mon.0) 3846 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:19.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:18 vm04 bash[28289]: audit 2026-03-10T10:47:18.018855+0000 mon.a (mon.0) 3846 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:18 vm04 bash[20742]: audit 2026-03-10T10:47:17.962663+0000 mon.a (mon.0) 3844 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:18 vm04 bash[20742]: audit 2026-03-10T10:47:17.962663+0000 mon.a (mon.0) 3844 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:18 vm04 bash[20742]: cluster 2026-03-10T10:47:17.976802+0000 mon.a (mon.0) 3845 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T10:47:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:18 vm04 bash[20742]: cluster 2026-03-10T10:47:17.976802+0000 mon.a (mon.0) 3845 : cluster [DBG] osdmap e762: 8 total, 8 up, 8 in 2026-03-10T10:47:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:18 vm04 bash[20742]: audit 2026-03-10T10:47:18.018855+0000 mon.a (mon.0) 3846 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:19.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:18 vm04 bash[20742]: audit 2026-03-10T10:47:18.018855+0000 mon.a (mon.0) 3846 : audit [INF] from='client.? 
192.168.123.104:0/2229795566' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]: dispatch 2026-03-10T10:47:20.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:19 vm07 bash[23367]: cluster 2026-03-10T10:47:18.818834+0000 mgr.y (mgr.24422) 1243 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:47:20.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:19 vm07 bash[23367]: cluster 2026-03-10T10:47:18.818834+0000 mgr.y (mgr.24422) 1243 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:47:20.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:19 vm07 bash[23367]: audit 2026-03-10T10:47:18.972688+0000 mon.a (mon.0) 3847 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:20.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:19 vm07 bash[23367]: audit 2026-03-10T10:47:18.972688+0000 mon.a (mon.0) 3847 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:20.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:19 vm07 bash[23367]: cluster 2026-03-10T10:47:18.974395+0000 mon.a (mon.0) 3848 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T10:47:20.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:19 vm07 bash[23367]: cluster 2026-03-10T10:47:18.974395+0000 mon.a (mon.0) 3848 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T10:47:20.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:47:19 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:47:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:19 vm04 bash[28289]: cluster 2026-03-10T10:47:18.818834+0000 mgr.y (mgr.24422) 1243 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:47:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:19 vm04 bash[28289]: cluster 2026-03-10T10:47:18.818834+0000 mgr.y (mgr.24422) 1243 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:47:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:19 vm04 bash[28289]: audit 2026-03-10T10:47:18.972688+0000 mon.a (mon.0) 3847 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:19 vm04 bash[28289]: audit 2026-03-10T10:47:18.972688+0000 mon.a (mon.0) 3847 : audit [INF] from='client.? 
192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:19 vm04 bash[28289]: cluster 2026-03-10T10:47:18.974395+0000 mon.a (mon.0) 3848 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T10:47:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:19 vm04 bash[28289]: cluster 2026-03-10T10:47:18.974395+0000 mon.a (mon.0) 3848 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T10:47:20.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:19 vm04 bash[20742]: cluster 2026-03-10T10:47:18.818834+0000 mgr.y (mgr.24422) 1243 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:47:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:19 vm04 bash[20742]: cluster 2026-03-10T10:47:18.818834+0000 mgr.y (mgr.24422) 1243 : cluster [DBG] pgmap v1679: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:47:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:19 vm04 bash[20742]: audit 2026-03-10T10:47:18.972688+0000 mon.a (mon.0) 3847 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:19 vm04 bash[20742]: audit 2026-03-10T10:47:18.972688+0000 mon.a (mon.0) 3847 : audit [INF] from='client.? 192.168.123.104:0/2229795566' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "100"}]': finished 2026-03-10T10:47:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:19 vm04 bash[20742]: cluster 2026-03-10T10:47:18.974395+0000 mon.a (mon.0) 3848 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T10:47:20.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:19 vm04 bash[20742]: cluster 2026-03-10T10:47:18.974395+0000 mon.a (mon.0) 3848 : cluster [DBG] osdmap e763: 8 total, 8 up, 8 in 2026-03-10T10:47:21.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:20 vm07 bash[23367]: audit 2026-03-10T10:47:19.753469+0000 mgr.y (mgr.24422) 1244 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:21.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:20 vm07 bash[23367]: audit 2026-03-10T10:47:19.753469+0000 mgr.y (mgr.24422) 1244 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:21.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:20 vm04 bash[28289]: audit 2026-03-10T10:47:19.753469+0000 mgr.y (mgr.24422) 1244 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:21.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:20 vm04 bash[28289]: audit 2026-03-10T10:47:19.753469+0000 mgr.y (mgr.24422) 1244 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:21.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 
10:47:20 vm04 bash[20742]: audit 2026-03-10T10:47:19.753469+0000 mgr.y (mgr.24422) 1244 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:21.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:20 vm04 bash[20742]: audit 2026-03-10T10:47:19.753469+0000 mgr.y (mgr.24422) 1244 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:47:22.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:21 vm07 bash[23367]: cluster 2026-03-10T10:47:20.819217+0000 mgr.y (mgr.24422) 1245 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:22.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:21 vm07 bash[23367]: cluster 2026-03-10T10:47:20.819217+0000 mgr.y (mgr.24422) 1245 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:22.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:21 vm04 bash[28289]: cluster 2026-03-10T10:47:20.819217+0000 mgr.y (mgr.24422) 1245 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:22.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:21 vm04 bash[28289]: cluster 2026-03-10T10:47:20.819217+0000 mgr.y (mgr.24422) 1245 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:22.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:21 vm04 bash[20742]: cluster 2026-03-10T10:47:20.819217+0000 mgr.y (mgr.24422) 1245 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:22.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:21 vm04 bash[20742]: cluster 2026-03-10T10:47:20.819217+0000 mgr.y (mgr.24422) 1245 : cluster [DBG] pgmap v1681: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:47:22.774 INFO:tasks.workunit.client.0.vm04.stderr:+ [ 0 -ne 0 ] 2026-03-10T10:47:22.774 INFO:tasks.workunit.client.0.vm04.stderr:+ true 2026-03-10T10:47:22.774 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put twomore /etc/passwd 2026-03-10T10:47:22.799 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool set-quota 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f max_bytes 100 2026-03-10T10:47:22.861 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.856+0000 7fcbb1006640 1 -- 192.168.123.104:0/1736222856 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbac1021b0 msgr2=0x7fcbac10e6e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:47:22.861 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.856+0000 7fcbb1006640 1 --2- 192.168.123.104:0/1736222856 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbac1021b0 0x7fcbac10e6e0 secure :-1 s=READY pgs=2718 cs=0 l=1 rev1=1 crypto rx=0x7fcb9c009f90 tx=0x7fcb9c01ca30 comp rx=0 tx=0).stop 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.856+0000 7fcbb1006640 1 -- 192.168.123.104:0/1736222856 shutdown_connections 2026-03-10T10:47:22.862 
INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.856+0000 7fcbb1006640 1 --2- 192.168.123.104:0/1736222856 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbac1021b0 0x7fcbac10e6e0 unknown :-1 s=CLOSED pgs=2718 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.856+0000 7fcbb1006640 1 --2- 192.168.123.104:0/1736222856 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcbac101810 0x7fcbac101c70 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.856+0000 7fcbb1006640 1 --2- 192.168.123.104:0/1736222856 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcbac107810 0x7fcbac107bf0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.856+0000 7fcbb1006640 1 -- 192.168.123.104:0/1736222856 >> 192.168.123.104:0/1736222856 conn(0x7fcbac0fd530 msgr2=0x7fcbac0ff950 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.856+0000 7fcbb1006640 1 -- 192.168.123.104:0/1736222856 shutdown_connections 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.856+0000 7fcbb1006640 1 -- 192.168.123.104:0/1736222856 wait complete. 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 Processor -- start 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 -- start start 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbac101810 0x7fcbac19f1c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcbac1021b0 0x7fcbac19f700 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:47:22.862 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcbac107810 0x7fcbac1a3a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fcbac06b960 con 0x7fcbac1021b0 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fcbac06b7e0 con 0x7fcbac101810 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fcbac06bae0 con 0x7fcbac107810 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbab577640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcbac107810 0x7fcbac1a3a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 
l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbaa575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcbac1021b0 0x7fcbac19f700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbaad76640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbac101810 0x7fcbac19f1c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbaa575640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcbac1021b0 0x7fcbac19f700 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:51606/0 (socket says 192.168.123.104:51606) 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbaa575640 1 -- 192.168.123.104:0/426495064 learned_addr learned my addr 192.168.123.104:0/426495064 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbab577640 1 -- 192.168.123.104:0/426495064 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbac101810 msgr2=0x7fcbac19f1c0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbab577640 1 --2- 192.168.123.104:0/426495064 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbac101810 0x7fcbac19f1c0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbab577640 1 -- 192.168.123.104:0/426495064 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcbac1021b0 msgr2=0x7fcbac19f700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbab577640 1 --2- 192.168.123.104:0/426495064 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcbac1021b0 0x7fcbac19f700 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbab577640 1 -- 192.168.123.104:0/426495064 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fcbac1a4210 con 0x7fcbac107810 2026-03-10T10:47:22.863 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbaa575640 1 --2- 192.168.123.104:0/426495064 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcbac1021b0 0x7fcbac19f700 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 
2026-03-10T10:47:22.864 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbab577640 1 --2- 192.168.123.104:0/426495064 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcbac107810 0x7fcbac1a3a90 secure :-1 s=READY pgs=3391 cs=0 l=1 rev1=1 crypto rx=0x7fcb9c01c860 tx=0x7fcb9c007730 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:47:22.864 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcb8bfff640 1 -- 192.168.123.104:0/426495064 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fcb9c0a5ba0 con 0x7fcbac107810 2026-03-10T10:47:22.864 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fcbac1a44a0 con 0x7fcbac107810 2026-03-10T10:47:22.864 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fcbac1abd40 con 0x7fcbac107810 2026-03-10T10:47:22.865 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcb8bfff640 1 -- 192.168.123.104:0/426495064 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fcb9c007920 con 0x7fcbac107810 2026-03-10T10:47:22.865 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcb8bfff640 1 -- 192.168.123.104:0/426495064 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fcb9c004440 con 0x7fcbac107810 2026-03-10T10:47:22.868 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.860+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fcb70005190 con 0x7fcbac107810 2026-03-10T10:47:22.869 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.864+0000 7fcb8bfff640 1 -- 192.168.123.104:0/426495064 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fcb9c005ce0 con 0x7fcbac107810 2026-03-10T10:47:22.869 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.864+0000 7fcb8bfff640 1 --2- 192.168.123.104:0/426495064 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcb7c077870 0x7fcb7c079d30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:47:22.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.864+0000 7fcb8bfff640 1 -- 192.168.123.104:0/426495064 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(764..764 src has 251..764) ==== 7736+0+0 (secure 0 0 0) 0x7fcb9c134030 con 0x7fcbac107810 2026-03-10T10:47:22.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.864+0000 7fcbaad76640 1 --2- 192.168.123.104:0/426495064 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcb7c077870 0x7fcb7c079d30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:47:22.870 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.864+0000 7fcb8bfff640 1 -- 192.168.123.104:0/426495064 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fcb9c016740 con 0x7fcbac107810 2026-03-10T10:47:22.870 
INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.868+0000 7fcbaad76640 1 --2- 192.168.123.104:0/426495064 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcb7c077870 0x7fcb7c079d30 secure :-1 s=READY pgs=4268 cs=0 l=1 rev1=1 crypto rx=0x7fcb94004500 tx=0x7fcb94009340 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-10T10:47:22.958 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:22.956+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"} v 0) -- 0x7fcb70005480 con 0x7fcbac107810 2026-03-10T10:47:23.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:22 vm07 bash[23367]: cluster 2026-03-10T10:47:22.754786+0000 mon.a (mon.0) 3849 : cluster [INF] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' no longer out of quota; removing NO_QUOTA flag 2026-03-10T10:47:23.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:22 vm07 bash[23367]: cluster 2026-03-10T10:47:22.754786+0000 mon.a (mon.0) 3849 : cluster [INF] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' no longer out of quota; removing NO_QUOTA flag 2026-03-10T10:47:23.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:22 vm07 bash[23367]: cluster 2026-03-10T10:47:22.754968+0000 mon.a (mon.0) 3850 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T10:47:23.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:22 vm07 bash[23367]: cluster 2026-03-10T10:47:22.754968+0000 mon.a (mon.0) 3850 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T10:47:23.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:22 vm07 bash[23367]: cluster 2026-03-10T10:47:22.773828+0000 mon.a (mon.0) 3851 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T10:47:23.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:22 vm07 bash[23367]: cluster 2026-03-10T10:47:22.773828+0000 mon.a (mon.0) 3851 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T10:47:23.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:22 vm07 bash[23367]: audit 2026-03-10T10:47:22.960897+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:22 vm07 bash[23367]: audit 2026-03-10T10:47:22.960897+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:22 vm07 bash[23367]: audit 2026-03-10T10:47:22.961258+0000 mon.a (mon.0) 3852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:22 vm07 bash[23367]: audit 2026-03-10T10:47:22.961258+0000 mon.a (mon.0) 3852 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:23 vm04 bash[28289]: cluster 2026-03-10T10:47:22.754786+0000 mon.a (mon.0) 3849 : cluster [INF] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' no longer out of quota; removing NO_QUOTA flag 2026-03-10T10:47:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:23 vm04 bash[28289]: cluster 2026-03-10T10:47:22.754786+0000 mon.a (mon.0) 3849 : cluster [INF] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' no longer out of quota; removing NO_QUOTA flag 2026-03-10T10:47:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:23 vm04 bash[28289]: cluster 2026-03-10T10:47:22.754968+0000 mon.a (mon.0) 3850 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T10:47:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:23 vm04 bash[28289]: cluster 2026-03-10T10:47:22.754968+0000 mon.a (mon.0) 3850 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T10:47:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:23 vm04 bash[28289]: cluster 2026-03-10T10:47:22.773828+0000 mon.a (mon.0) 3851 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T10:47:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:23 vm04 bash[28289]: cluster 2026-03-10T10:47:22.773828+0000 mon.a (mon.0) 3851 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T10:47:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:23 vm04 bash[28289]: audit 2026-03-10T10:47:22.960897+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:23 vm04 bash[28289]: audit 2026-03-10T10:47:22.960897+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:23 vm04 bash[28289]: audit 2026-03-10T10:47:22.961258+0000 mon.a (mon.0) 3852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:23 vm04 bash[28289]: audit 2026-03-10T10:47:22.961258+0000 mon.a (mon.0) 3852 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:23 vm04 bash[20742]: cluster 2026-03-10T10:47:22.754786+0000 mon.a (mon.0) 3849 : cluster [INF] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' no longer out of quota; removing NO_QUOTA flag 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:23 vm04 bash[20742]: cluster 2026-03-10T10:47:22.754786+0000 mon.a (mon.0) 3849 : cluster [INF] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' no longer out of quota; removing NO_QUOTA flag 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:23 vm04 bash[20742]: cluster 2026-03-10T10:47:22.754968+0000 mon.a (mon.0) 3850 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:23 vm04 bash[20742]: cluster 2026-03-10T10:47:22.754968+0000 mon.a (mon.0) 3850 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full) 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:23 vm04 bash[20742]: cluster 2026-03-10T10:47:22.773828+0000 mon.a (mon.0) 3851 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:23 vm04 bash[20742]: cluster 2026-03-10T10:47:22.773828+0000 mon.a (mon.0) 3851 : cluster [DBG] osdmap e764: 8 total, 8 up, 8 in 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:23 vm04 bash[20742]: audit 2026-03-10T10:47:22.960897+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:23 vm04 bash[20742]: audit 2026-03-10T10:47:22.960897+0000 mon.c (mon.2) 483 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:23 vm04 bash[20742]: audit 2026-03-10T10:47:22.961258+0000 mon.a (mon.0) 3852 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:23 vm04 bash[20742]: audit 2026-03-10T10:47:22.961258+0000 mon.a (mon.0) 3852 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:23.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:47:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:47:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:47:23.782 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:23.780+0000 7fcb8bfff640 1 -- 192.168.123.104:0/426495064 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v765) ==== 221+0+0 (secure 0 0 0) 0x7fcb9c100880 con 0x7fcbac107810 2026-03-10T10:47:23.833 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:23.828+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"} v 0) -- 0x7fcb70004a90 con 0x7fcbac107810 2026-03-10T10:47:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:24 vm07 bash[23367]: cluster 2026-03-10T10:47:22.819518+0000 mgr.y (mgr.24422) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:24 vm07 bash[23367]: cluster 2026-03-10T10:47:22.819518+0000 mgr.y (mgr.24422) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:24 vm07 bash[23367]: audit 2026-03-10T10:47:23.767589+0000 mon.a (mon.0) 3853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:24 vm07 bash[23367]: audit 2026-03-10T10:47:23.767589+0000 mon.a (mon.0) 3853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:24 vm07 bash[23367]: cluster 2026-03-10T10:47:23.770786+0000 mon.a (mon.0) 3854 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-10T10:47:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:24 vm07 bash[23367]: cluster 2026-03-10T10:47:23.770786+0000 mon.a (mon.0) 3854 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-10T10:47:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:24 vm07 bash[23367]: audit 2026-03-10T10:47:23.835504+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:24 vm07 bash[23367]: audit 2026-03-10T10:47:23.835504+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 
192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:24 vm07 bash[23367]: audit 2026-03-10T10:47:23.835939+0000 mon.a (mon.0) 3855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:24 vm07 bash[23367]: audit 2026-03-10T10:47:23.835939+0000 mon.a (mon.0) 3855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:24 vm04 bash[28289]: cluster 2026-03-10T10:47:22.819518+0000 mgr.y (mgr.24422) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:24 vm04 bash[28289]: cluster 2026-03-10T10:47:22.819518+0000 mgr.y (mgr.24422) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:24 vm04 bash[28289]: audit 2026-03-10T10:47:23.767589+0000 mon.a (mon.0) 3853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:24 vm04 bash[28289]: audit 2026-03-10T10:47:23.767589+0000 mon.a (mon.0) 3853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:24 vm04 bash[28289]: cluster 2026-03-10T10:47:23.770786+0000 mon.a (mon.0) 3854 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:24 vm04 bash[28289]: cluster 2026-03-10T10:47:23.770786+0000 mon.a (mon.0) 3854 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:24 vm04 bash[28289]: audit 2026-03-10T10:47:23.835504+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:24 vm04 bash[28289]: audit 2026-03-10T10:47:23.835504+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:24 vm04 bash[28289]: audit 2026-03-10T10:47:23.835939+0000 mon.a (mon.0) 3855 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:24 vm04 bash[28289]: audit 2026-03-10T10:47:23.835939+0000 mon.a (mon.0) 3855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:24 vm04 bash[20742]: cluster 2026-03-10T10:47:22.819518+0000 mgr.y (mgr.24422) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:24 vm04 bash[20742]: cluster 2026-03-10T10:47:22.819518+0000 mgr.y (mgr.24422) 1246 : cluster [DBG] pgmap v1683: 176 pgs: 176 active+clean; 476 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:24 vm04 bash[20742]: audit 2026-03-10T10:47:23.767589+0000 mon.a (mon.0) 3853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:24 vm04 bash[20742]: audit 2026-03-10T10:47:23.767589+0000 mon.a (mon.0) 3853 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:24 vm04 bash[20742]: cluster 2026-03-10T10:47:23.770786+0000 mon.a (mon.0) 3854 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:24 vm04 bash[20742]: cluster 2026-03-10T10:47:23.770786+0000 mon.a (mon.0) 3854 : cluster [DBG] osdmap e765: 8 total, 8 up, 8 in 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:24 vm04 bash[20742]: audit 2026-03-10T10:47:23.835504+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:24 vm04 bash[20742]: audit 2026-03-10T10:47:23.835504+0000 mon.c (mon.2) 484 : audit [INF] from='client.? 192.168.123.104:0/426495064' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:24 vm04 bash[20742]: audit 2026-03-10T10:47:23.835939+0000 mon.a (mon.0) 3855 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:24.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:24 vm04 bash[20742]: audit 2026-03-10T10:47:23.835939+0000 mon.a (mon.0) 3855 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]: dispatch 2026-03-10T10:47:25.113 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.108+0000 7fcb8bfff640 1 -- 192.168.123.104:0/426495064 <== mon.2 v2:192.168.123.104:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]=0 set-quota max_bytes = 100 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v766) ==== 221+0+0 (secure 0 0 0) 0x7fcb9c105730 con 0x7fcbac107810 2026-03-10T10:47:25.113 INFO:tasks.workunit.client.0.vm04.stderr:set-quota max_bytes = 100 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcb7c077870 msgr2=0x7fcb7c079d30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 --2- 192.168.123.104:0/426495064 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcb7c077870 0x7fcb7c079d30 secure :-1 s=READY pgs=4268 cs=0 l=1 rev1=1 crypto rx=0x7fcb94004500 tx=0x7fcb94009340 comp rx=0 tx=0).stop 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcbac107810 msgr2=0x7fcbac1a3a90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 --2- 192.168.123.104:0/426495064 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcbac107810 0x7fcbac1a3a90 secure :-1 s=READY pgs=3391 cs=0 l=1 rev1=1 crypto rx=0x7fcb9c01c860 tx=0x7fcb9c007730 comp rx=0 tx=0).stop 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 shutdown_connections 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 --2- 192.168.123.104:0/426495064 >> v2:192.168.123.104:6800/3326026257 conn(0x7fcb7c077870 0x7fcb7c079d30 unknown :-1 s=CLOSED pgs=4268 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 --2- 192.168.123.104:0/426495064 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fcbac107810 0x7fcbac1a3a90 unknown :-1 s=CLOSED pgs=3391 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 --2- 192.168.123.104:0/426495064 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fcbac1021b0 0x7fcbac19f700 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 --2- 192.168.123.104:0/426495064 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fcbac101810 0x7fcbac19f1c0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 >> 
192.168.123.104:0/426495064 conn(0x7fcbac0fd530 msgr2=0x7fcbac10cc00 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 shutdown_connections 2026-03-10T10:47:25.116 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:25.112+0000 7fcbb1006640 1 -- 192.168.123.104:0/426495064 wait complete. 2026-03-10T10:47:25.129 INFO:tasks.workunit.client.0.vm04.stderr:+ sleep 30 2026-03-10T10:47:26.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:25 vm04 bash[28289]: cluster 2026-03-10T10:47:24.819834+0000 mgr.y (mgr.24422) 1247 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:26.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:25 vm04 bash[28289]: cluster 2026-03-10T10:47:24.819834+0000 mgr.y (mgr.24422) 1247 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:26.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:25 vm04 bash[28289]: audit 2026-03-10T10:47:24.908399+0000 mon.a (mon.0) 3856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:26.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:25 vm04 bash[28289]: audit 2026-03-10T10:47:24.908399+0000 mon.a (mon.0) 3856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:26.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:25 vm04 bash[28289]: cluster 2026-03-10T10:47:25.033572+0000 mon.a (mon.0) 3857 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-10T10:47:26.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:25 vm04 bash[28289]: cluster 2026-03-10T10:47:25.033572+0000 mon.a (mon.0) 3857 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-10T10:47:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:25 vm04 bash[20742]: cluster 2026-03-10T10:47:24.819834+0000 mgr.y (mgr.24422) 1247 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:25 vm04 bash[20742]: cluster 2026-03-10T10:47:24.819834+0000 mgr.y (mgr.24422) 1247 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:25 vm04 bash[20742]: audit 2026-03-10T10:47:24.908399+0000 mon.a (mon.0) 3856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:25 vm04 bash[20742]: audit 2026-03-10T10:47:24.908399+0000 mon.a (mon.0) 3856 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:25 vm04 bash[20742]: cluster 2026-03-10T10:47:25.033572+0000 mon.a (mon.0) 3857 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-10T10:47:26.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:25 vm04 bash[20742]: cluster 2026-03-10T10:47:25.033572+0000 mon.a (mon.0) 3857 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-10T10:47:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:25 vm07 bash[23367]: cluster 2026-03-10T10:47:24.819834+0000 mgr.y (mgr.24422) 1247 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:25 vm07 bash[23367]: cluster 2026-03-10T10:47:24.819834+0000 mgr.y (mgr.24422) 1247 : cluster [DBG] pgmap v1685: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:25 vm07 bash[23367]: audit 2026-03-10T10:47:24.908399+0000 mon.a (mon.0) 3856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:25 vm07 bash[23367]: audit 2026-03-10T10:47:24.908399+0000 mon.a (mon.0) 3856 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "100"}]': finished 2026-03-10T10:47:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:25 vm07 bash[23367]: cluster 2026-03-10T10:47:25.033572+0000 mon.a (mon.0) 3857 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-10T10:47:26.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:25 vm07 bash[23367]: cluster 2026-03-10T10:47:25.033572+0000 mon.a (mon.0) 3857 : cluster [DBG] osdmap e766: 8 total, 8 up, 8 in 2026-03-10T10:47:28.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:27 vm04 bash[28289]: cluster 2026-03-10T10:47:26.820162+0000 mgr.y (mgr.24422) 1248 : cluster [DBG] pgmap v1687: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:28.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:27 vm04 bash[28289]: cluster 2026-03-10T10:47:26.820162+0000 mgr.y (mgr.24422) 1248 : cluster [DBG] pgmap v1687: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:28.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:27 vm04 bash[28289]: cluster 2026-03-10T10:47:27.756360+0000 mon.a (mon.0) 3858 : cluster [WRN] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' is full (reached quota's max_bytes: 100 B) 2026-03-10T10:47:28.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:27 vm04 bash[28289]: cluster 2026-03-10T10:47:27.756360+0000 mon.a (mon.0) 3858 : cluster [WRN] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' is full (reached quota's max_bytes: 100 B) 2026-03-10T10:47:28.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:27 vm04 bash[28289]: cluster 2026-03-10T10:47:27.756581+0000 mon.a (mon.0) 3859 : cluster 
[WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:27 vm04 bash[28289]: cluster 2026-03-10T10:47:27.756581+0000 mon.a (mon.0) 3859 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:27 vm04 bash[28289]: cluster 2026-03-10T10:47:27.762864+0000 mon.a (mon.0) 3860 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:27 vm04 bash[28289]: cluster 2026-03-10T10:47:27.762864+0000 mon.a (mon.0) 3860 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:27 vm04 bash[20742]: cluster 2026-03-10T10:47:26.820162+0000 mgr.y (mgr.24422) 1248 : cluster [DBG] pgmap v1687: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:27 vm04 bash[20742]: cluster 2026-03-10T10:47:26.820162+0000 mgr.y (mgr.24422) 1248 : cluster [DBG] pgmap v1687: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:27 vm04 bash[20742]: cluster 2026-03-10T10:47:27.756360+0000 mon.a (mon.0) 3858 : cluster [WRN] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' is full (reached quota's max_bytes: 100 B) 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:27 vm04 bash[20742]: cluster 2026-03-10T10:47:27.756360+0000 mon.a (mon.0) 3858 : cluster [WRN] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' is full (reached quota's max_bytes: 100 B) 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:27 vm04 bash[20742]: cluster 2026-03-10T10:47:27.756581+0000 mon.a (mon.0) 3859 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:27 vm04 bash[20742]: cluster 2026-03-10T10:47:27.756581+0000 mon.a (mon.0) 3859 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:27 vm04 bash[20742]: cluster 2026-03-10T10:47:27.762864+0000 mon.a (mon.0) 3860 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-10T10:47:28.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:27 vm04 bash[20742]: cluster 2026-03-10T10:47:27.762864+0000 mon.a (mon.0) 3860 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in 2026-03-10T10:47:28.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:27 vm07 bash[23367]: cluster 2026-03-10T10:47:26.820162+0000 mgr.y (mgr.24422) 1248 : cluster [DBG] pgmap v1687: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:28.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:27 vm07 bash[23367]: cluster 2026-03-10T10:47:26.820162+0000 mgr.y (mgr.24422) 1248 : cluster [DBG] pgmap v1687: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 1023 B/s wr, 1 op/s 2026-03-10T10:47:28.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:27 vm07 bash[23367]: cluster 2026-03-10T10:47:27.756360+0000 mon.a (mon.0) 3858 : cluster [WRN] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' is full (reached quota's max_bytes: 100 B) 2026-03-10T10:47:28.267 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:27 vm07 bash[23367]: cluster 2026-03-10T10:47:27.756360+0000 mon.a (mon.0) 3858 : cluster [WRN] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' is full (reached quota's max_bytes: 100 B)
2026-03-10T10:47:28.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:27 vm07 bash[23367]: cluster 2026-03-10T10:47:27.756581+0000 mon.a (mon.0) 3859 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)
2026-03-10T10:47:28.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:27 vm07 bash[23367]: cluster 2026-03-10T10:47:27.762864+0000 mon.a (mon.0) 3860 : cluster [DBG] osdmap e767: 8 total, 8 up, 8 in
2026-03-10T10:47:29.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:28 vm04 bash[28289]: audit 2026-03-10T10:47:28.660723+0000 mon.a (mon.0) 3861 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:47:29.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:28 vm04 bash[20742]: audit 2026-03-10T10:47:28.660723+0000 mon.a (mon.0) 3861 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:47:29.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:28 vm07 bash[23367]: audit 2026-03-10T10:47:28.660723+0000 mon.a (mon.0) 3861 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:47:30.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:47:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:47:30.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:29 vm07 bash[23367]: cluster 2026-03-10T10:47:28.820654+0000 mgr.y (mgr.24422) 1249 : cluster [DBG] pgmap v1689: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s
2026-03-10T10:47:30.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:29 vm04 bash[28289]: cluster 2026-03-10T10:47:28.820654+0000 mgr.y (mgr.24422) 1249 : cluster [DBG] pgmap v1689: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s
2026-03-10T10:47:30.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:29 vm04 bash[20742]: cluster 2026-03-10T10:47:28.820654+0000 mgr.y (mgr.24422) 1249 : cluster [DBG] pgmap v1689: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.7 KiB/s rd, 1023 B/s wr, 1 op/s
2026-03-10T10:47:31.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:30 vm04 bash[28289]: audit 2026-03-10T10:47:29.763198+0000 mgr.y (mgr.24422) 1250 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:31.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:30 vm04 bash[20742]: audit 2026-03-10T10:47:29.763198+0000 mgr.y (mgr.24422) 1250 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:31.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:30 vm07 bash[23367]: audit 2026-03-10T10:47:29.763198+0000 mgr.y (mgr.24422) 1250 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:32.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:31 vm07 bash[23367]: cluster 2026-03-10T10:47:30.820926+0000 mgr.y (mgr.24422) 1251 : cluster [DBG] pgmap v1690: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 726 B/s rd, 0 op/s
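
The POOL_FULL warning above is the byte quota from rados/test_pool_quota.sh doing its job: the test pool (named with a random UUID) was capped at max_bytes = 100, so the next write pushes it over quota and the mon raises the health check. A minimal sketch of the same behaviour against a throwaway pool (the pool name quota-test and the object names are arbitrary examples, not the workunit's):

    # create a small test pool and cap it at 100 bytes
    ceph osd pool create quota-test 8
    ceph osd pool set-quota quota-test max_bytes 100
    # a write that pushes usage past the quota marks the pool full
    # once the mon notices; later writes block or fail
    rados -p quota-test put obj1 /etc/hostname
    ceph health detail    # expect POOL_FULL once usage is reported
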
2026-03-10T10:47:32.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:31 vm04 bash[28289]: cluster 2026-03-10T10:47:30.820926+0000 mgr.y (mgr.24422) 1251 : cluster [DBG] pgmap v1690: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 726 B/s rd, 0 op/s
2026-03-10T10:47:32.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:31 vm04 bash[20742]: cluster 2026-03-10T10:47:30.820926+0000 mgr.y (mgr.24422) 1251 : cluster [DBG] pgmap v1690: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 726 B/s rd, 0 op/s
2026-03-10T10:47:33.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:47:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:47:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:47:34.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:33 vm07 bash[23367]: cluster 2026-03-10T10:47:32.821225+0000 mgr.y (mgr.24422) 1252 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:47:34.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:33 vm04 bash[28289]: cluster 2026-03-10T10:47:32.821225+0000 mgr.y (mgr.24422) 1252 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:47:34.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:33 vm04 bash[20742]: cluster 2026-03-10T10:47:32.821225+0000 mgr.y (mgr.24422) 1252 : cluster [DBG] pgmap v1691: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:47:36.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:36 vm04 bash[28289]: cluster 2026-03-10T10:47:34.821738+0000 mgr.y (mgr.24422) 1253 : cluster [DBG] pgmap v1692: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T10:47:36.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:36 vm04 bash[20742]: cluster 2026-03-10T10:47:34.821738+0000 mgr.y (mgr.24422) 1253 : cluster [DBG] pgmap v1692: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T10:47:36.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:36 vm07 bash[23367]: cluster 2026-03-10T10:47:34.821738+0000 mgr.y (mgr.24422) 1253 : cluster [DBG] pgmap v1692: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T10:47:38.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:38 vm04 bash[20742]: cluster 2026-03-10T10:47:36.821968+0000 mgr.y (mgr.24422) 1254 : cluster [DBG] pgmap v1693: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:38.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:38 vm04 bash[28289]: cluster 2026-03-10T10:47:36.821968+0000 mgr.y (mgr.24422) 1254 : cluster [DBG] pgmap v1693: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:38.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:38 vm07 bash[23367]: cluster 2026-03-10T10:47:36.821968+0000 mgr.y (mgr.24422) 1254 : cluster [DBG] pgmap v1693: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:40.029 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:47:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:47:40.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:40 vm04 bash[20742]: cluster 2026-03-10T10:47:38.822591+0000 mgr.y (mgr.24422) 1255 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T10:47:40.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:40 vm04 bash[28289]: cluster 2026-03-10T10:47:38.822591+0000 mgr.y (mgr.24422) 1255 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T10:47:40.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:40 vm07 bash[23367]: cluster 2026-03-10T10:47:38.822591+0000 mgr.y (mgr.24422) 1255 : cluster [DBG] pgmap v1694: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T10:47:41.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:41 vm04 bash[28289]: audit 2026-03-10T10:47:39.772340+0000 mgr.y (mgr.24422) 1256 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:41.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:41 vm04 bash[20742]: audit 2026-03-10T10:47:39.772340+0000 mgr.y (mgr.24422) 1256 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:41.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:41 vm07 bash[23367]: audit 2026-03-10T10:47:39.772340+0000 mgr.y (mgr.24422) 1256 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:42.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:42 vm04 bash[20742]: cluster 2026-03-10T10:47:40.822978+0000 mgr.y (mgr.24422) 1257 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:42.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:42 vm04 bash[28289]: cluster 2026-03-10T10:47:40.822978+0000 mgr.y (mgr.24422) 1257 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:42.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:42 vm07 bash[23367]: cluster 2026-03-10T10:47:40.822978+0000 mgr.y (mgr.24422) 1257 : cluster [DBG] pgmap v1695: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:47:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:47:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:47:44.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:44 vm04 bash[20742]: cluster 2026-03-10T10:47:42.823290+0000 mgr.y (mgr.24422) 1258 : cluster [DBG] pgmap v1696: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:44.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:44 vm04 bash[20742]: audit 2026-03-10T10:47:43.666636+0000 mon.a (mon.0) 3862 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:47:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:44 vm04 bash[28289]: cluster 2026-03-10T10:47:42.823290+0000 mgr.y (mgr.24422) 1258 : cluster [DBG] pgmap v1696: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:44.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:44 vm04 bash[28289]: audit 2026-03-10T10:47:43.666636+0000 mon.a (mon.0) 3862 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:47:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:44 vm07 bash[23367]: cluster 2026-03-10T10:47:42.823290+0000 mgr.y (mgr.24422) 1258 : cluster [DBG] pgmap v1696: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:44.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:44 vm07 bash[23367]: audit 2026-03-10T10:47:43.666636+0000 mon.a (mon.0) 3862 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:47:46.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:46 vm04 bash[20742]: cluster 2026-03-10T10:47:44.823893+0000 mgr.y (mgr.24422) 1259 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:47:46.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:46 vm04 bash[28289]: cluster 2026-03-10T10:47:44.823893+0000 mgr.y (mgr.24422) 1259 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:47:46.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:46 vm07 bash[23367]: cluster 2026-03-10T10:47:44.823893+0000 mgr.y (mgr.24422) 1259 : cluster [DBG] pgmap v1697: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:47:48.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:48 vm04 bash[20742]: cluster 2026-03-10T10:47:46.824159+0000 mgr.y (mgr.24422) 1260 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:48.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:48 vm04 bash[28289]: cluster 2026-03-10T10:47:46.824159+0000 mgr.y (mgr.24422) 1260 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:48.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:48 vm07 bash[23367]: cluster 2026-03-10T10:47:46.824159+0000 mgr.y (mgr.24422) 1260 : cluster [DBG] pgmap v1698: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:50.062 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:47:49 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:47:50.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:50 vm04 bash[20742]: cluster 2026-03-10T10:47:48.824769+0000 mgr.y (mgr.24422) 1261 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:47:50.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:50 vm04 bash[28289]: cluster 2026-03-10T10:47:48.824769+0000 mgr.y (mgr.24422) 1261 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:47:50.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:50 vm07 bash[23367]: cluster 2026-03-10T10:47:48.824769+0000 mgr.y (mgr.24422) 1261 : cluster [DBG] pgmap v1699: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:47:51.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:51 vm04 bash[20742]: audit 2026-03-10T10:47:49.775024+0000 mgr.y (mgr.24422) 1262 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:51.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:51 vm04 bash[28289]: audit 2026-03-10T10:47:49.775024+0000 mgr.y (mgr.24422) 1262 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:51.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:51 vm07 bash[23367]: audit 2026-03-10T10:47:49.775024+0000 mgr.y (mgr.24422) 1262 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:47:52.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:52 vm04 bash[20742]: cluster 2026-03-10T10:47:50.825073+0000 mgr.y (mgr.24422) 1263 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:52.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:52 vm04 bash[28289]: cluster 2026-03-10T10:47:50.825073+0000 mgr.y (mgr.24422) 1263 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:52.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:52 vm07 bash[23367]: cluster 2026-03-10T10:47:50.825073+0000 mgr.y (mgr.24422) 1263 : cluster [DBG] pgmap v1700: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:53.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:47:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:47:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:47:54.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:54 vm04 bash[20742]: cluster 2026-03-10T10:47:52.825339+0000 mgr.y (mgr.24422) 1264 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:54.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:54 vm04 bash[28289]: cluster 2026-03-10T10:47:52.825339+0000 mgr.y (mgr.24422) 1264 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:54.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:54 vm07 bash[23367]: cluster 2026-03-10T10:47:52.825339+0000 mgr.y (mgr.24422) 1264 : cluster [DBG] pgmap v1701: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:47:55.130 INFO:tasks.workunit.client.0.vm04.stderr:+ pid=131351
2026-03-10T10:47:55.131 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool set-quota 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f max_bytes 0
2026-03-10T10:47:55.131 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put two /etc/passwd
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- 192.168.123.104:0/2378737535 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f43c41057e0 msgr2=0x7f43c4109830 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 --2- 192.168.123.104:0/2378737535 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f43c41057e0 0x7f43c4109830 secure :-1 s=READY pgs=3392 cs=0 l=1 rev1=1 crypto rx=0x7f43b8009a30 tx=0x7f43b801c920 comp rx=0 tx=0).stop
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- 192.168.123.104:0/2378737535 shutdown_connections
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 --2- 192.168.123.104:0/2378737535 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f43c4109f60 0x7f43c4111ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 --2- 192.168.123.104:0/2378737535 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f43c41057e0 0x7f43c4109830 unknown :-1 s=CLOSED pgs=3392 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 --2- 192.168.123.104:0/2378737535 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f43c4104e30 0x7f43c4105210 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- 192.168.123.104:0/2378737535 >> 192.168.123.104:0/2378737535 conn(0x7f43c41008f0 msgr2=0x7f43c4102d10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- 192.168.123.104:0/2378737535 shutdown_connections
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- 192.168.123.104:0/2378737535 wait complete.
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 Processor -- start
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- start start
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f43c4104e30 0x7f43c41103b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f43c41057e0 0x7f43c41108f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f43c4109f60 0x7f43c4074900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f43c4078530 con 0x7f43c41057e0
2026-03-10T10:47:55.204 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f43c40783b0 con 0x7f43c4109f60
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f43c40786b0 con 0x7f43c4104e30
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43c98a8640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f43c41057e0 0x7f43c41108f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43c98a8640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f43c41057e0 0x7f43c41108f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:40078/0 (socket says 192.168.123.104:40078)
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43c98a8640 1 -- 192.168.123.104:0/37499791 learned_addr learned my addr 192.168.123.104:0/37499791 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43ca8aa640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f43c4109f60 0x7f43c4074900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43c98a8640 1 -- 192.168.123.104:0/37499791 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f43c4104e30 msgr2=0x7f43c41103b0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43ca0a9640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f43c4104e30 0x7f43c41103b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43c98a8640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f43c4104e30 0x7f43c41103b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43c98a8640 1 -- 192.168.123.104:0/37499791 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f43c4109f60 msgr2=0x7f43c4074900 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43c98a8640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f43c4109f60 0x7f43c4074900 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43c98a8640 1 -- 192.168.123.104:0/37499791 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f43c4074fa0 con 0x7f43c41057e0
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43ca8aa640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f43c4109f60 0x7f43c4074900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43ca0a9640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f43c4104e30 0x7f43c41103b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
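
Every "--" / "--2-" conn(...) line in this stretch is msgr2 connection-state tracing from the short-lived CLI client; the job runs its clients with debug ms = 1, so each ceph/rados invocation logs the full banner/hello/auth handshake against the mons and the mark_down/CLOSED teardown on exit. The same trace can be captured for a single command without touching cluster config (a sketch, reusing the hypothetical pool from above; the flag is accepted by the Ceph CLI tools):

    # trace messenger state transitions for one command only,
    # keeping the trace apart from the command's own stdout
    rados -p quota-test stat obj1 --debug-ms 1 2> msgr-trace.log
    grep -c BANNER_CONNECTING msgr-trace.log    # handshakes attempted
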
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43c98a8640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f43c41057e0 0x7f43c41108f0 secure :-1 s=READY pgs=3085 cs=0 l=1 rev1=1 crypto rx=0x7f43b801ce00 tx=0x7f43b8005b70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:47:55.205 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f43b8004320 con 0x7f43c41057e0
2026-03-10T10:47:55.207 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f43c4115d20 con 0x7f43c41057e0
2026-03-10T10:47:55.207 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.200+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f43c4116200 con 0x7f43c41057e0
2026-03-10T10:47:55.207 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.204+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f43b80044c0 con 0x7f43c41057e0
2026-03-10T10:47:55.207 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.204+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f43b8004dd0 con 0x7f43c41057e0
2026-03-10T10:47:55.210 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.204+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f43c4106310 con 0x7f43c41057e0
2026-03-10T10:47:55.210 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.204+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f43b8002830 con 0x7f43c41057e0
2026-03-10T10:47:55.210 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.204+0000 7f43ab7fe640 1 --2- 192.168.123.104:0/37499791 >> v2:192.168.123.104:6800/3326026257 conn(0x7f43980776d0 0x7f4398079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:55.210 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.204+0000 7f43ca0a9640 1 --2- 192.168.123.104:0/37499791 >> v2:192.168.123.104:6800/3326026257 conn(0x7f43980776d0 0x7f4398079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:55.210 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.204+0000 7f43ca0a9640 1 --2- 192.168.123.104:0/37499791 >> v2:192.168.123.104:6800/3326026257 conn(0x7f43980776d0 0x7f4398079b90 secure :-1 s=READY pgs=4270 cs=0 l=1 rev1=1 crypto rx=0x7f43b40060b0 tx=0x7f43b40076a0 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:47:55.210 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.208+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(767..767 src has 251..767) ==== 7736+0+0 (secure 0 0 0) 0x7f43b8137050 con 0x7f43c41057e0
2026-03-10T10:47:55.210 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.208+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=768}) -- 0x7f4398082b20 con 0x7f43c41057e0
2026-03-10T10:47:55.213 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.208+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f43b8016630 con 0x7f43c41057e0
2026-03-10T10:47:55.307 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:55.304+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"} v 0) -- 0x7f43c4105210 con 0x7f43c41057e0
2026-03-10T10:47:56.109 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:56.104+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v768) ==== 217+0+0 (secure 0 0 0) 0x7f43b80ff5c0 con 0x7f43c41057e0
2026-03-10T10:47:56.115 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:56.112+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 <== mon.0 v2:192.168.123.104:3300/0 8 ==== osd_map(768..768 src has 251..768) ==== 628+0+0 (secure 0 0 0) 0x7f43b80f75b0 con 0x7f43c41057e0
2026-03-10T10:47:56.115 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:56.112+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=769}) -- 0x7f4398083b00 con 0x7f43c41057e0
2026-03-10T10:47:56.177 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:56.172+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"} v 0) -- 0x7f43c41079e0 con 0x7f43c41057e0
2026-03-10T10:47:56.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:56 vm04 bash[20742]: cluster 2026-03-10T10:47:54.825927+0000 mgr.y (mgr.24422) 1265 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:47:56.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:56 vm04 bash[20742]: audit 2026-03-10T10:47:55.309337+0000 mon.a (mon.0) 3863 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:47:56.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:56 vm04 bash[28289]: cluster 2026-03-10T10:47:54.825927+0000 mgr.y (mgr.24422) 1265 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:47:56.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:56 vm04 bash[28289]: audit 2026-03-10T10:47:55.309337+0000 mon.a (mon.0) 3863 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:47:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:56 vm07 bash[23367]: cluster 2026-03-10T10:47:54.825927+0000 mgr.y (mgr.24422) 1265 : cluster [DBG] pgmap v1702: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:47:56.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:56 vm07 bash[23367]: audit 2026-03-10T10:47:55.309337+0000 mon.a (mon.0) 3863 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:47:57.423 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.420+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 <== mon.0 v2:192.168.123.104:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v769) ==== 217+0+0 (secure 0 0 0) 0x7f43b8104470 con 0x7f43c41057e0
2026-03-10T10:47:57.424 INFO:tasks.workunit.client.0.vm04.stderr:set-quota max_bytes = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f
2026-03-10T10:47:57.426 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43ab7fe640 1 -- 192.168.123.104:0/37499791 <== mon.0 v2:192.168.123.104:3300/0 10 ==== osd_map(769..769 src has 251..769) ==== 628+0+0 (secure 0 0 0) 0x7f43b8002b50 con 0x7f43c41057e0
2026-03-10T10:47:57.426 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 >> v2:192.168.123.104:6800/3326026257 conn(0x7f43980776d0 msgr2=0x7f4398079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:57.426 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 --2- 192.168.123.104:0/37499791 >> v2:192.168.123.104:6800/3326026257 conn(0x7f43980776d0 0x7f4398079b90 secure :-1 s=READY pgs=4270 cs=0 l=1 rev1=1 crypto rx=0x7f43b40060b0 tx=0x7f43b40076a0 comp rx=0 tx=0).stop
2026-03-10T10:47:57.426 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f43c41057e0 msgr2=0x7f43c41108f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:57.426 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f43c41057e0 0x7f43c41108f0 secure :-1 s=READY pgs=3085 cs=0 l=1 rev1=1 crypto rx=0x7f43b801ce00 tx=0x7f43b8005b70 comp rx=0 tx=0).stop
2026-03-10T10:47:57.427 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 shutdown_connections
2026-03-10T10:47:57.427 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 --2- 192.168.123.104:0/37499791 >> v2:192.168.123.104:6800/3326026257 conn(0x7f43980776d0 0x7f4398079b90 unknown :-1 s=CLOSED pgs=4270 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:57.427 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f43c4109f60 0x7f43c4074900 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:57.427 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f43c41057e0 0x7f43c41108f0 unknown :-1 s=CLOSED pgs=3085 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:57.427 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 --2- 192.168.123.104:0/37499791 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f43c4104e30 0x7f43c41103b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:57.427 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 >> 192.168.123.104:0/37499791 conn(0x7f43c41008f0 msgr2=0x7f43c4101c30 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:47:57.427 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 shutdown_connections
2026-03-10T10:47:57.427 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.424+0000 7f43cc334640 1 -- 192.168.123.104:0/37499791 wait complete.
2026-03-10T10:47:57.443 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool set-quota 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f max_objects 0
2026-03-10T10:47:57.518 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6aabfff640 1 -- 192.168.123.104:0/3187408809 <== mon.1 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6aac0a5ad0 con 0x7f6abc108960
2026-03-10T10:47:57.518 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- 192.168.123.104:0/3187408809 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6abc108960 msgr2=0x7f6abc108d40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:57.518 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 --2- 192.168.123.104:0/3187408809 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6abc108960 0x7f6abc108d40 secure :-1 s=READY pgs=2720 cs=0 l=1 rev1=1 crypto rx=0x7f6aac009980 tx=0x7f6aac01c880 comp rx=0 tx=0).stop
2026-03-10T10:47:57.518 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- 192.168.123.104:0/3187408809 shutdown_connections
2026-03-10T10:47:57.518 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 --2- 192.168.123.104:0/3187408809 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6abc103300 0x7f6abc10f830 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:57.518 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 --2- 192.168.123.104:0/3187408809 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6abc102960 0x7f6abc102dc0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:57.518 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 --2- 192.168.123.104:0/3187408809 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6abc108960 0x7f6abc108d40 unknown :-1 s=CLOSED pgs=2720 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:57.518 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- 192.168.123.104:0/3187408809 >> 192.168.123.104:0/3187408809 conn(0x7f6abc0fe640 msgr2=0x7f6abc100a60 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:47:57.518 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- 192.168.123.104:0/3187408809 shutdown_connections
2026-03-10T10:47:57.519 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- 192.168.123.104:0/3187408809 wait complete.
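
The two traced set-quota calls above are the cleanup half of the test: a quota value of 0 removes the limit, so max_bytes 0 followed by max_objects 0 returns the pool to unlimited and the earlier POOL_FULL health check clears. A sketch of the same cleanup plus verification (same hypothetical pool as above):

    # a value of 0 disables the corresponding quota
    ceph osd pool set-quota quota-test max_bytes 0
    ceph osd pool set-quota quota-test max_objects 0
    ceph osd pool get-quota quota-test    # expect max objects / max bytes: N/A
    rados -p quota-test put obj2 /etc/hostname    # writes succeed again
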
2026-03-10T10:47:57.519 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 Processor -- start
2026-03-10T10:47:57.519 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- start start
2026-03-10T10:47:57.519 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6abc102960 0x7f6abc19f160 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6abc103300 0x7f6abc19f6a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6abc108960 0x7f6abc1a3a30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f6abc114ab0 con 0x7f6abc103300
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f6abc114930 con 0x7f6abc108960
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f6abc114c30 con 0x7f6abc102960
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac12f5640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6abc102960 0x7f6abc19f160 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac12f5640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6abc102960 0x7f6abc19f160 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:33688/0 (socket says 192.168.123.104:33688)
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac12f5640 1 -- 192.168.123.104:0/3394144048 learned_addr learned my addr 192.168.123.104:0/3394144048 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac1af6640 1 --2- 192.168.123.104:0/3394144048 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6abc108960 0x7f6abc1a3a30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac12f5640 1 -- 192.168.123.104:0/3394144048 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6abc108960 msgr2=0x7f6abc1a3a30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac12f5640 1 --2- 192.168.123.104:0/3394144048 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6abc108960 0x7f6abc1a3a30 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac12f5640 1 -- 192.168.123.104:0/3394144048 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6abc103300 msgr2=0x7f6abc19f6a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac0af4640 1 --2- 192.168.123.104:0/3394144048 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6abc103300 0x7f6abc19f6a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac12f5640 1 --2- 192.168.123.104:0/3394144048 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6abc103300 0x7f6abc19f6a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac12f5640 1 -- 192.168.123.104:0/3394144048 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6abc1a41b0 con 0x7f6abc102960
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac0af4640 1 --2- 192.168.123.104:0/3394144048 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6abc103300 0x7f6abc19f6a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac12f5640 1 --2- 192.168.123.104:0/3394144048 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6abc102960 0x7f6abc19f160 secure :-1 s=READY pgs=3393 cs=0 l=1 rev1=1 crypto rx=0x7f6aac009950 tx=0x7f6aac0a5f70 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:47:57.520 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6aac0044e0 con 0x7f6abc102960
2026-03-10T10:47:57.521 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f6aac0054c0 con 0x7f6abc102960
2026-03-10T10:47:57.521 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6abc1a4440 con 0x7f6abc102960
2026-03-10T10:47:57.521 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f6abc1abce0 con 0x7f6abc102960
2026-03-10T10:47:57.521 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.516+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f6aac005050 con 0x7f6abc102960
2026-03-10T10:47:57.523 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.520+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f6aac0026f0 con 0x7f6abc102960
2026-03-10T10:47:57.523 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.520+0000 7f6aaa7fc640 1 --2- 192.168.123.104:0/3394144048 >> v2:192.168.123.104:6800/3326026257 conn(0x7f6a980776d0 0x7f6a98079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:57.523 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.520+0000 7f6ac0af4640 1 --2- 192.168.123.104:0/3394144048 >> v2:192.168.123.104:6800/3326026257 conn(0x7f6a980776d0 0x7f6a98079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:57.523 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.520+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(769..769 src has 251..769) ==== 7736+0+0 (secure 0 0 0) 0x7f6aac1342e0 con 0x7f6abc102960
2026-03-10T10:47:57.523 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.520+0000 7f6ac0af4640 1 --2- 192.168.123.104:0/3394144048 >> v2:192.168.123.104:6800/3326026257 conn(0x7f6a980776d0 0x7f6a98079b90 secure :-1 s=READY pgs=4271 cs=0 l=1 rev1=1 crypto rx=0x7f6ab0005e10 tx=0x7f6ab000a600 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:47:57.523 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.520+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=770}) -- 0x7f6a98082ba0 con 0x7f6abc102960
2026-03-10T10:47:57.526 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.520+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6a84005190 con 0x7f6abc102960
2026-03-10T10:47:57.526 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.524+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f6aac016590 con 0x7f6abc102960
2026-03-10T10:47:57.615 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.612+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"} v 0) -- 0x7f6a84005480 con 0x7f6abc102960
2026-03-10T10:47:57.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:57 vm04 bash[20742]: audit 2026-03-10T10:47:56.111201+0000 mon.a (mon.0) 3864 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:47:57.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:57 vm04 bash[20742]: cluster 2026-03-10T10:47:56.120644+0000 mon.a (mon.0) 3865 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in
2026-03-10T10:47:57.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:57 vm04 bash[20742]: audit 2026-03-10T10:47:56.179938+0000 mon.a (mon.0) 3866 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:47:57.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:57 vm04 bash[28289]: audit 2026-03-10T10:47:56.111201+0000 mon.a (mon.0) 3864 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:47:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:57 vm04 bash[28289]: cluster 2026-03-10T10:47:56.120644+0000 mon.a (mon.0) 3865 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in
2026-03-10T10:47:57.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:57 vm04 bash[28289]: audit 2026-03-10T10:47:56.179938+0000 mon.a (mon.0) 3866 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:47:57.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:57 vm07 bash[23367]: audit 2026-03-10T10:47:56.111201+0000 mon.a (mon.0) 3864 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:47:57.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:57 vm07 bash[23367]: cluster 2026-03-10T10:47:56.120644+0000 mon.a (mon.0) 3865 : cluster [DBG] osdmap e768: 8 total, 8 up, 8 in
2026-03-10T10:47:57.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:57 vm07 bash[23367]: audit 2026-03-10T10:47:56.179938+0000 mon.a (mon.0) 3866 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:47:57.768 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.764+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 <== mon.2 v2:192.168.123.104:3301/0 7 ==== osd_map(770..770 src has 251..770) ==== 628+0+0 (secure 0 0 0) 0x7f6aac0f8a20 con 0x7f6abc102960
2026-03-10T10:47:57.768 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.764+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=771}) -- 0x7f6a98083740 con 0x7f6abc102960
2026-03-10T10:47:57.768 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.764+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 <== mon.2 v2:192.168.123.104:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v770) ==== 221+0+0 (secure 0 0 0) 0x7f6aac100a30 con 0x7f6abc102960
2026-03-10T10:47:57.825 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:57.820+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"} v 0) -- 0x7f6a84004ab0 con 0x7f6abc102960
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: cluster 2026-03-10T10:47:56.826258+0000 mgr.y (mgr.24422) 1266 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: audit 2026-03-10T10:47:57.425518+0000 mon.a (mon.0) 3867 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: cluster 2026-03-10T10:47:57.454777+0000 mon.a (mon.0) 3868 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: audit 2026-03-10T10:47:57.618078+0000 mon.c (mon.2) 485 : audit [INF] from='client.? 192.168.123.104:0/3394144048' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: audit 2026-03-10T10:47:57.618586+0000 mon.a (mon.0) 3869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: cluster 2026-03-10T10:47:57.760595+0000 mon.a (mon.0) 3870 : cluster [INF] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' no longer out of quota; removing NO_QUOTA flag
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: cluster 2026-03-10T10:47:57.760814+0000 mon.a (mon.0) 3871 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: audit 2026-03-10T10:47:57.765187+0000 mon.a (mon.0) 3872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: cluster 2026-03-10T10:47:57.783360+0000 mon.a (mon.0) 3873 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: audit 2026-03-10T10:47:57.827053+0000 mon.c (mon.2) 486 : audit [INF] from='client.? 192.168.123.104:0/3394144048' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:58 vm04 bash[28289]: audit 2026-03-10T10:47:57.827542+0000 mon.a (mon.0) 3874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: cluster 2026-03-10T10:47:56.826258+0000 mgr.y (mgr.24422) 1266 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: audit 2026-03-10T10:47:57.425518+0000 mon.a (mon.0) 3867 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: cluster 2026-03-10T10:47:57.454777+0000 mon.a (mon.0) 3868 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: audit 2026-03-10T10:47:57.618078+0000 mon.c (mon.2) 485 : audit [INF] from='client.? 192.168.123.104:0/3394144048' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: audit 2026-03-10T10:47:57.618586+0000 mon.a (mon.0) 3869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: cluster 2026-03-10T10:47:57.760595+0000 mon.a (mon.0) 3870 : cluster [INF] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' no longer out of quota; removing NO_QUOTA flag
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: cluster 2026-03-10T10:47:57.760814+0000 mon.a (mon.0) 3871 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T10:47:58.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: audit 2026-03-10T10:47:57.765187+0000 mon.a (mon.0) 3872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:47:58.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: cluster 2026-03-10T10:47:57.783360+0000 mon.a (mon.0) 3873 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in
2026-03-10T10:47:58.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: audit 2026-03-10T10:47:57.827053+0000 mon.c (mon.2) 486 : audit [INF] from='client.? 192.168.123.104:0/3394144048' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.704 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:58 vm04 bash[20742]: audit 2026-03-10T10:47:57.827542+0000 mon.a (mon.0) 3874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: cluster 2026-03-10T10:47:56.826258+0000 mgr.y (mgr.24422) 1266 : cluster [DBG] pgmap v1704: 176 pgs: 176 active+clean; 480 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T10:47:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: audit 2026-03-10T10:47:57.425518+0000 mon.a (mon.0) 3867 : audit [INF] from='client.? 192.168.123.104:0/37499791' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:47:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: cluster 2026-03-10T10:47:57.454777+0000 mon.a (mon.0) 3868 : cluster [DBG] osdmap e769: 8 total, 8 up, 8 in
2026-03-10T10:47:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: audit 2026-03-10T10:47:57.618078+0000 mon.c (mon.2) 485 : audit [INF] from='client.? 192.168.123.104:0/3394144048' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: audit 2026-03-10T10:47:57.618586+0000 mon.a (mon.0) 3869 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: cluster 2026-03-10T10:47:57.760595+0000 mon.a (mon.0) 3870 : cluster [INF] pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' no longer out of quota; removing NO_QUOTA flag
2026-03-10T10:47:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: cluster 2026-03-10T10:47:57.760814+0000 mon.a (mon.0) 3871 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T10:47:58.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: audit 2026-03-10T10:47:57.765187+0000 mon.a (mon.0) 3872 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:47:58.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: cluster 2026-03-10T10:47:57.783360+0000 mon.a (mon.0) 3873 : cluster [DBG] osdmap e770: 8 total, 8 up, 8 in
2026-03-10T10:47:58.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: audit 2026-03-10T10:47:57.827053+0000 mon.c (mon.2) 486 : audit [INF] from='client.? 192.168.123.104:0/3394144048' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.768 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:58 vm07 bash[23367]: audit 2026-03-10T10:47:57.827542+0000 mon.a (mon.0) 3874 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:47:58.768 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.764+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 <== mon.2 v2:192.168.123.104:3301/0 9 ==== osd_map(771..771 src has 251..771) ==== 628+0+0 (secure 0 0 0) 0x7f6aac132090 con 0x7f6abc102960
2026-03-10T10:47:58.772 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.768+0000 7f6aaa7fc640 1 -- 192.168.123.104:0/3394144048 <== mon.2 v2:192.168.123.104:3301/0 10 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v771) ==== 221+0+0 (secure 0 0 0) 0x7f6aac1058e0 con 0x7f6abc102960
2026-03-10T10:47:58.772 INFO:tasks.workunit.client.0.vm04.stderr:set-quota max_objects = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f
2026-03-10T10:47:58.775 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 >> v2:192.168.123.104:6800/3326026257 conn(0x7f6a980776d0 msgr2=0x7f6a98079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:58.775 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 --2- 192.168.123.104:0/3394144048 >> v2:192.168.123.104:6800/3326026257 conn(0x7f6a980776d0 0x7f6a98079b90 secure :-1 s=READY pgs=4271 cs=0 l=1 rev1=1 crypto rx=0x7f6ab0005e10 tx=0x7f6ab000a600 comp rx=0 tx=0).stop
2026-03-10T10:47:58.775 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6abc102960 msgr2=0x7f6abc19f160 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:58.776 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 --2- 192.168.123.104:0/3394144048 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6abc102960 0x7f6abc19f160 secure :-1 s=READY pgs=3393 cs=0 l=1 rev1=1 crypto rx=0x7f6aac009950 tx=0x7f6aac0a5f70 comp rx=0 tx=0).stop
2026-03-10T10:47:58.776 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 shutdown_connections
2026-03-10T10:47:58.776 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 --2- 192.168.123.104:0/3394144048 >> v2:192.168.123.104:6800/3326026257 conn(0x7f6a980776d0 0x7f6a98079b90 unknown :-1 s=CLOSED pgs=4271 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:58.776 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 --2- 192.168.123.104:0/3394144048 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f6abc108960 0x7f6abc1a3a30 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:58.776 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 --2- 192.168.123.104:0/3394144048 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f6abc103300 0x7f6abc19f6a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:58.776 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 --2- 192.168.123.104:0/3394144048 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f6abc102960 0x7f6abc19f160 unknown :-1 s=CLOSED pgs=3393 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:58.776 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 >> 192.168.123.104:0/3394144048 conn(0x7f6abc0fe640 msgr2=0x7f6abc10ddd0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:47:58.776 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 shutdown_connections
2026-03-10T10:47:58.776 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.772+0000 7f6ac3580640 1 -- 192.168.123.104:0/3394144048 wait complete.
2026-03-10T10:47:58.796 INFO:tasks.workunit.client.0.vm04.stderr:+ wait 131351
2026-03-10T10:47:58.796 INFO:tasks.workunit.client.0.vm04.stderr:+ [ 0 -ne 0 ]
2026-03-10T10:47:58.796 INFO:tasks.workunit.client.0.vm04.stderr:+ true
2026-03-10T10:47:58.796 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put three /etc/passwd
2026-03-10T10:47:58.833 INFO:tasks.workunit.client.0.vm04.stderr:+ uuidgen
2026-03-10T10:47:58.833 INFO:tasks.workunit.client.0.vm04.stderr:+ pp=cc1e9eec-b4d8-493e-ae89-ce289b5abc51
2026-03-10T10:47:58.833 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool create cc1e9eec-b4d8-493e-ae89-ce289b5abc51 12
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 -- 192.168.123.104:0/1705819339 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f283c106c30 msgr2=0x7f283c10e740 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 --2- 192.168.123.104:0/1705819339 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f283c106c30 0x7f283c10e740 secure :-1 s=READY pgs=3088 cs=0 l=1 rev1=1 crypto rx=0x7f283800b0a0 tx=0x7f283801caf0 comp rx=0 tx=0).stop
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 -- 192.168.123.104:0/1705819339 shutdown_connections
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 --2- 192.168.123.104:0/1705819339 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f283c106c30 0x7f283c10e740 unknown :-1 s=CLOSED pgs=3088 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 --2- 192.168.123.104:0/1705819339 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f283c103000 0x7f283c1064d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 --2- 192.168.123.104:0/1705819339 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f283c102650 0x7f283c102a30 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 -- 192.168.123.104:0/1705819339 >> 192.168.123.104:0/1705819339 conn(0x7f283c0fc820 msgr2=0x7f283c0fec40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 -- 192.168.123.104:0/1705819339 shutdown_connections
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 -- 192.168.123.104:0/1705819339 wait complete.
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 Processor -- start
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 -- start start
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f283c102650 0x7f283c117c10 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f283c103000 0x7f283c118150 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f283c106c30 0x7f283c1143d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f283c10f450 con 0x7f283c102650
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f283c10f2d0 con 0x7f283c106c30
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f28438a2640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f283c10f5d0 con 0x7f283c103000
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2840e16640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f283c103000 0x7f283c118150 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2840e16640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f283c103000 0x7f283c118150 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:33746/0 (socket says 192.168.123.104:33746)
2026-03-10T10:47:58.897 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2840e16640 1 -- 192.168.123.104:0/2501777349 learned_addr learned my addr 192.168.123.104:0/2501777349 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:47:58.898 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2841e18640 1 --2- 192.168.123.104:0/2501777349 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f283c106c30 0x7f283c1143d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:58.898 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2840e16640 1 -- 192.168.123.104:0/2501777349 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f283c106c30 msgr2=0x7f283c1143d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:58.898 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2840e16640 1 --2- 192.168.123.104:0/2501777349 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f283c106c30 0x7f283c1143d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:58.898 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2840e16640 1 -- 192.168.123.104:0/2501777349 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f283c102650 msgr2=0x7f283c117c10 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:47:58.898 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2841617640 1 --2- 192.168.123.104:0/2501777349 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f283c102650 0x7f283c117c10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:58.898 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2840e16640 1 --2- 192.168.123.104:0/2501777349 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f283c102650 0x7f283c117c10 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:58.898 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2840e16640 1 -- 192.168.123.104:0/2501777349 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f283c114c90 con 0x7f283c103000
2026-03-10T10:47:58.898 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.892+0000 7f2841617640 1 --2- 192.168.123.104:0/2501777349 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f283c102650 0x7f283c117c10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:47:58.898 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f2840e16640 1 --2- 192.168.123.104:0/2501777349 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f283c103000 0x7f283c118150 secure :-1 s=READY pgs=3395 cs=0 l=1 rev1=1 crypto rx=0x7f283000ca00 tx=0x7f283000cec0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:47:58.898 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f282a7fc640 1 -- 192.168.123.104:0/2501777349 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2830013070 con 0x7f283c103000
2026-03-10T10:47:58.899 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f283c114f80 con 0x7f283c103000
2026-03-10T10:47:58.899 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f283c1b0160 con 0x7f283c103000
2026-03-10T10:47:58.900 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f282a7fc640 1 -- 192.168.123.104:0/2501777349 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f2830004480 con 0x7f283c103000
2026-03-10T10:47:58.900 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f282a7fc640 1 -- 192.168.123.104:0/2501777349 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f2830002e30 con 0x7f283c103000
2026-03-10T10:47:58.901 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f282a7fc640 1 -- 192.168.123.104:0/2501777349 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f2830020020 con 0x7f283c103000
2026-03-10T10:47:58.901 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f282a7fc640 1 --2- 192.168.123.104:0/2501777349 >> v2:192.168.123.104:6800/3326026257 conn(0x7f2810077720 0x7f2810079be0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:58.901 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f2841617640 1 --2- 192.168.123.104:0/2501777349 >> v2:192.168.123.104:6800/3326026257 conn(0x7f2810077720 0x7f2810079be0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:58.901 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f282a7fc640 1 -- 192.168.123.104:0/2501777349 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(771..771 src has 251..771) ==== 7736+0+0 (secure 0 0 0) 0x7f283009ac30 con 0x7f283c103000
2026-03-10T10:47:58.904 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.896+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f283c100dd0 con 0x7f283c103000
2026-03-10T10:47:58.905 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.900+0000 7f2841617640 1 --2- 192.168.123.104:0/2501777349 >> v2:192.168.123.104:6800/3326026257 conn(0x7f2810077720 0x7f2810079be0 secure :-1 s=READY pgs=4273 cs=0 l=1 rev1=1 crypto rx=0x7f282c005fd0 tx=0x7f282c005ea0 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:47:58.905 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:58.900+0000 7f282a7fc640 1 -- 192.168.123.104:0/2501777349 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f2830067480 con 0x7f283c103000
2026-03-10T10:47:59.002 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.000+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12} v 0) -- 0x7f283c1153e0 con 0x7f283c103000
2026-03-10T10:47:59.787 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.784+0000 7f282a7fc640 1 -- 192.168.123.104:0/2501777349 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]=0 pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' created v772) ==== 176+0+0 (secure 0 0 0) 0x7f283006c330 con 0x7f283c103000
2026-03-10T10:47:59.841 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.836+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12} v 0) -- 0x7f283c1b0330 con 0x7f283c103000
2026-03-10T10:47:59.842 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f282a7fc640 1 -- 192.168.123.104:0/2501777349 <== mon.2 v2:192.168.123.104:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]=0 pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' already exists v772) ==== 183+0+0 (secure 0 0 0) 0x7f283005f470 con 0x7f283c103000
2026-03-10T10:47:59.842 INFO:tasks.workunit.client.0.vm04.stderr:pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' already exists
2026-03-10T10:47:59.844 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 >> v2:192.168.123.104:6800/3326026257 conn(0x7f2810077720 msgr2=0x7f2810079be0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:59.844 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 --2- 192.168.123.104:0/2501777349 >> v2:192.168.123.104:6800/3326026257 conn(0x7f2810077720 0x7f2810079be0 secure :-1 s=READY pgs=4273 cs=0 l=1 rev1=1 crypto rx=0x7f282c005fd0 tx=0x7f282c005ea0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.845 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f283c103000 msgr2=0x7f283c118150 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:59.845 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 --2- 192.168.123.104:0/2501777349 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f283c103000 0x7f283c118150 secure :-1 s=READY pgs=3395 cs=0 l=1 rev1=1 crypto rx=0x7f283000ca00 tx=0x7f283000cec0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.845 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 shutdown_connections
2026-03-10T10:47:59.845 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 --2- 192.168.123.104:0/2501777349 >> v2:192.168.123.104:6800/3326026257 conn(0x7f2810077720 0x7f2810079be0 unknown :-1 s=CLOSED pgs=4273 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.845 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 --2- 192.168.123.104:0/2501777349 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f283c106c30 0x7f283c1143d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.845 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 --2- 192.168.123.104:0/2501777349 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f283c103000 0x7f283c118150 unknown :-1 s=CLOSED pgs=3395 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.845 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 --2- 192.168.123.104:0/2501777349 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f283c102650 0x7f283c117c10 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.845 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 >> 192.168.123.104:0/2501777349 conn(0x7f283c0fc820 msgr2=0x7f283c1054b0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:47:59.845 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 shutdown_connections
2026-03-10T10:47:59.845 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.840+0000 7f28438a2640 1 -- 192.168.123.104:0/2501777349 wait complete.
2026-03-10T10:47:59.858 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool application enable cc1e9eec-b4d8-493e-ae89-ce289b5abc51 rados
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 -- 192.168.123.104:0/3632262484 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f7428109f50 msgr2=0x7f7428111ad0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 --2- 192.168.123.104:0/3632262484 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f7428109f50 0x7f7428111ad0 secure :-1 s=READY pgs=3396 cs=0 l=1 rev1=1 crypto rx=0x7f741800b0a0 tx=0x7f741801cb10 comp rx=0 tx=0).stop
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 -- 192.168.123.104:0/3632262484 shutdown_connections
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 --2- 192.168.123.104:0/3632262484 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f7428109f50 0x7f7428111ad0 unknown :-1 s=CLOSED pgs=3396 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 --2- 192.168.123.104:0/3632262484 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f74281057d0 0x7f7428109820 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 --2- 192.168.123.104:0/3632262484 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f7428104e20 0x7f7428105200 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 -- 192.168.123.104:0/3632262484 >> 192.168.123.104:0/3632262484 conn(0x7f7428100880 msgr2=0x7f7428102ca0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 -- 192.168.123.104:0/3632262484 shutdown_connections
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 -- 192.168.123.104:0/3632262484 wait complete.
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 Processor -- start
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 -- start start
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7428104e20 0x7f742819f130 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:59.920 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f74281057d0 0x7f742819f670 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f7428109f50 0x7f74281a3a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f7428116c60 con 0x7f74281057d0
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f7428116ae0 con 0x7f7428104e20
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f7428116de0 con 0x7f7428109f50
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f7426d76640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f7428109f50 0x7f74281a3a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f7426d76640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f7428109f50 0x7f74281a3a00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:33766/0 (socket says 192.168.123.104:33766)
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f7426d76640 1 -- 192.168.123.104:0/4217010425 learned_addr learned my addr 192.168.123.104:0/4217010425 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f7426d76640 1 -- 192.168.123.104:0/4217010425 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7428104e20 msgr2=0x7f742819f130 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f7426d76640 1 --2- 192.168.123.104:0/4217010425 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7428104e20 0x7f742819f130 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f7426d76640 1 -- 192.168.123.104:0/4217010425 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f74281057d0 msgr2=0x7f742819f670 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f7426d76640 1 --2- 192.168.123.104:0/4217010425 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f74281057d0 0x7f742819f670 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f7426d76640 1 -- 192.168.123.104:0/4217010425 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f74281a40e0 con 0x7f7428109f50
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f7426d76640 1 --2- 192.168.123.104:0/4217010425 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f7428109f50 0x7f74281a3a00 secure :-1 s=READY pgs=3397 cs=0 l=1 rev1=1 crypto rx=0x7f7418098640 tx=0x7f7418007870 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f740f7fe640 1 -- 192.168.123.104:0/4217010425 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7418002a70 con 0x7f7428109f50
2026-03-10T10:47:59.921 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f740f7fe640 1 -- 192.168.123.104:0/4217010425 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f7418002c10 con 0x7f7428109f50
2026-03-10T10:47:59.922 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f740f7fe640 1 -- 192.168.123.104:0/4217010425 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f7418004740 con 0x7f7428109f50
2026-03-10T10:47:59.922 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.916+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f74281a4370 con 0x7f7428109f50
2026-03-10T10:47:59.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.920+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f74281a47d0 con 0x7f7428109f50
2026-03-10T10:47:59.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.920+0000 7f740f7fe640 1 -- 192.168.123.104:0/4217010425 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f7418005ce0 con 0x7f7428109f50
2026-03-10T10:47:59.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.920+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f73ec005190 con 0x7f7428109f50
2026-03-10T10:47:59.926 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.920+0000 7f740f7fe640 1 --2- 192.168.123.104:0/4217010425 >> v2:192.168.123.104:6800/3326026257 conn(0x7f7400077640 0x7f7400079b00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:47:59.926 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.920+0000 7f740f7fe640 1 -- 192.168.123.104:0/4217010425 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(772..772 src has 251..772) ==== 8111+0+0 (secure 0 0 0) 0x7f7418137170 con 0x7f7428109f50
2026-03-10T10:47:59.926 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.924+0000 7f7426575640 1 --2- 192.168.123.104:0/4217010425 >> v2:192.168.123.104:6800/3326026257 conn(0x7f7400077640 0x7f7400079b00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:47:59.927 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.924+0000 7f7426575640 1 --2- 192.168.123.104:0/4217010425 >> v2:192.168.123.104:6800/3326026257 conn(0x7f7400077640 0x7f7400079b00 secure :-1 s=READY pgs=4274 cs=0 l=1 rev1=1 crypto rx=0x7f741c00aa30 tx=0x7f741c005d10 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:47:59.927 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:47:59.924+0000 7f740f7fe640 1 -- 192.168.123.104:0/4217010425 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f74180b2030 con 0x7f7428109f50
2026-03-10T10:48:00.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:47:59 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: audit 2026-03-10T10:47:58.678877+0000 mon.a (mon.0) 3875 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: audit 2026-03-10T10:47:58.680125+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: audit 2026-03-10T10:47:58.767980+0000 mon.a (mon.0) 3877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: audit 2026-03-10T10:47:58.767980+0000 mon.a (mon.0) 3877 : audit [INF] from='client.? 
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: cluster 2026-03-10T10:47:58.785356+0000 mon.a (mon.0) 3878 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: cluster 2026-03-10T10:47:58.785356+0000 mon.a (mon.0) 3878 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: cluster 2026-03-10T10:47:58.826646+0000 mgr.y (mgr.24422) 1267 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: cluster 2026-03-10T10:47:58.826646+0000 mgr.y (mgr.24422) 1267 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: audit 2026-03-10T10:47:59.004581+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: audit 2026-03-10T10:47:59.004581+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: audit 2026-03-10T10:47:59.005183+0000 mon.a (mon.0) 3879 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:47:59 vm07 bash[23367]: audit 2026-03-10T10:47:59.005183+0000 mon.a (mon.0) 3879 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.026 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:00.020+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"} v 0) -- 0x7f73ec005480 con 0x7f7428109f50
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: audit 2026-03-10T10:47:58.678877+0000 mon.a (mon.0) 3875 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: audit 2026-03-10T10:47:58.678877+0000 mon.a (mon.0) 3875 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: audit 2026-03-10T10:47:58.680125+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: audit 2026-03-10T10:47:58.680125+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: audit 2026-03-10T10:47:58.767980+0000 mon.a (mon.0) 3877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: audit 2026-03-10T10:47:58.767980+0000 mon.a (mon.0) 3877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: cluster 2026-03-10T10:47:58.785356+0000 mon.a (mon.0) 3878 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: cluster 2026-03-10T10:47:58.785356+0000 mon.a (mon.0) 3878 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: cluster 2026-03-10T10:47:58.826646+0000 mgr.y (mgr.24422) 1267 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: cluster 2026-03-10T10:47:58.826646+0000 mgr.y (mgr.24422) 1267 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: audit 2026-03-10T10:47:59.004581+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: audit 2026-03-10T10:47:59.004581+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: audit 2026-03-10T10:47:59.005183+0000 mon.a (mon.0) 3879 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:47:59 vm04 bash[20742]: audit 2026-03-10T10:47:59.005183+0000 mon.a (mon.0) 3879 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: audit 2026-03-10T10:47:58.678877+0000 mon.a (mon.0) 3875 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: audit 2026-03-10T10:47:58.678877+0000 mon.a (mon.0) 3875 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: audit 2026-03-10T10:47:58.680125+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: audit 2026-03-10T10:47:58.680125+0000 mon.a (mon.0) 3876 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: audit 2026-03-10T10:47:58.767980+0000 mon.a (mon.0) 3877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: audit 2026-03-10T10:47:58.767980+0000 mon.a (mon.0) 3877 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: cluster 2026-03-10T10:47:58.785356+0000 mon.a (mon.0) 3878 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: cluster 2026-03-10T10:47:58.785356+0000 mon.a (mon.0) 3878 : cluster [DBG] osdmap e771: 8 total, 8 up, 8 in
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: cluster 2026-03-10T10:47:58.826646+0000 mgr.y (mgr.24422) 1267 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: cluster 2026-03-10T10:47:58.826646+0000 mgr.y (mgr.24422) 1267 : cluster [DBG] pgmap v1708: 176 pgs: 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: audit 2026-03-10T10:47:59.004581+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: audit 2026-03-10T10:47:59.004581+0000 mon.c (mon.2) 487 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: audit 2026-03-10T10:47:59.005183+0000 mon.a (mon.0) 3879 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:47:59 vm04 bash[28289]: audit 2026-03-10T10:47:59.005183+0000 mon.a (mon.0) 3879 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:00.783 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:00.780+0000 7f740f7fe640 1 -- 192.168.123.104:0/4217010425 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]=0 enabled application 'rados' on pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' v773) ==== 213+0+0 (secure 0 0 0) 0x7f7418016820 con 0x7f7428109f50
2026-03-10T10:48:00.842 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:00.836+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"} v 0) -- 0x7f73ec004820 con 0x7f7428109f50
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:47:59.774273+0000 mon.a (mon.0) 3880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]': finished
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:47:59.774273+0000 mon.a (mon.0) 3880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]': finished
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: cluster 2026-03-10T10:47:59.779272+0000 mon.a (mon.0) 3881 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: cluster 2026-03-10T10:47:59.779272+0000 mon.a (mon.0) 3881 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:47:59.785261+0000 mgr.y (mgr.24422) 1268 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:47:59.785261+0000 mgr.y (mgr.24422) 1268 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:47:59.843302+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:47:59.843302+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:47:59.843846+0000 mon.a (mon.0) 3882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:47:59.843846+0000 mon.a (mon.0) 3882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:48:00.028338+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:48:00.028338+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:48:00.029005+0000 mon.a (mon.0) 3883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:00 vm04 bash[20742]: audit 2026-03-10T10:48:00.029005+0000 mon.a (mon.0) 3883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:47:59.774273+0000 mon.a (mon.0) 3880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]': finished
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:47:59.774273+0000 mon.a (mon.0) 3880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]': finished
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: cluster 2026-03-10T10:47:59.779272+0000 mon.a (mon.0) 3881 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: cluster 2026-03-10T10:47:59.779272+0000 mon.a (mon.0) 3881 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:47:59.785261+0000 mgr.y (mgr.24422) 1268 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:47:59.785261+0000 mgr.y (mgr.24422) 1268 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:47:59.843302+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:47:59.843302+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:47:59.843846+0000 mon.a (mon.0) 3882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:47:59.843846+0000 mon.a (mon.0) 3882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:48:00.028338+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:48:00.028338+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:48:00.029005+0000 mon.a (mon.0) 3883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:00 vm04 bash[28289]: audit 2026-03-10T10:48:00.029005+0000 mon.a (mon.0) 3883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:47:59.774273+0000 mon.a (mon.0) 3880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]': finished
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:47:59.774273+0000 mon.a (mon.0) 3880 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]': finished
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: cluster 2026-03-10T10:47:59.779272+0000 mon.a (mon.0) 3881 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: cluster 2026-03-10T10:47:59.779272+0000 mon.a (mon.0) 3881 : cluster [DBG] osdmap e772: 8 total, 8 up, 8 in
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:47:59.785261+0000 mgr.y (mgr.24422) 1268 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:47:59.785261+0000 mgr.y (mgr.24422) 1268 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:47:59.843302+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:47:59.843302+0000 mon.c (mon.2) 488 : audit [INF] from='client.? 192.168.123.104:0/2501777349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:47:59.843846+0000 mon.a (mon.0) 3882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:47:59.843846+0000 mon.a (mon.0) 3882 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pg_num": 12}]: dispatch
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:48:00.028338+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:48:00.028338+0000 mon.c (mon.2) 489 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:48:00.029005+0000 mon.a (mon.0) 3883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:00 vm07 bash[23367]: audit 2026-03-10T10:48:00.029005+0000 mon.a (mon.0) 3883 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:01.833 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.828+0000 7f740f7fe640 1 -- 192.168.123.104:0/4217010425 <== mon.2 v2:192.168.123.104:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]=0 enabled application 'rados' on pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' v774) ==== 213+0+0 (secure 0 0 0) 0x7f74180ab2d0 con 0x7f7428109f50
2026-03-10T10:48:01.834 INFO:tasks.workunit.client.0.vm04.stderr:enabled application 'rados' on pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51'
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 >> v2:192.168.123.104:6800/3326026257 conn(0x7f7400077640 msgr2=0x7f7400079b00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 --2- 192.168.123.104:0/4217010425 >> v2:192.168.123.104:6800/3326026257 conn(0x7f7400077640 0x7f7400079b00 secure :-1 s=READY pgs=4274 cs=0 l=1 rev1=1 crypto rx=0x7f741c00aa30 tx=0x7f741c005d10 comp rx=0 tx=0).stop
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f7428109f50 msgr2=0x7f74281a3a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 --2- 192.168.123.104:0/4217010425 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f7428109f50 0x7f74281a3a00 secure :-1 s=READY pgs=3397 cs=0 l=1 rev1=1 crypto rx=0x7f7418098640 tx=0x7f7418007870 comp rx=0 tx=0).stop
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 shutdown_connections
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 --2- 192.168.123.104:0/4217010425 >> v2:192.168.123.104:6800/3326026257 conn(0x7f7400077640 0x7f7400079b00 unknown :-1 s=CLOSED pgs=4274 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 --2- 192.168.123.104:0/4217010425 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f7428109f50 0x7f74281a3a00 unknown :-1 s=CLOSED pgs=3397 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 --2- 192.168.123.104:0/4217010425 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f74281057d0 0x7f742819f670 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 --2- 192.168.123.104:0/4217010425 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f7428104e20 0x7f742819f130 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 >> 192.168.123.104:0/4217010425 conn(0x7f7428100880 msgr2=0x7f7428100e70 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 shutdown_connections
2026-03-10T10:48:01.836 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.832+0000 7f742cb9b640 1 -- 192.168.123.104:0/4217010425 wait complete.
2026-03-10T10:48:01.849 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool set-quota cc1e9eec-b4d8-493e-ae89-ce289b5abc51 max_objects 10
2026-03-10T10:48:01.908 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 -- 192.168.123.104:0/607118619 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fa4f0101390 msgr2=0x7fa4f010f820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:48:01.908 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 --2- 192.168.123.104:0/607118619 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fa4f0101390 0x7fa4f010f820 secure :-1 s=READY pgs=3398 cs=0 l=1 rev1=1 crypto rx=0x7fa4ec00b0d0 tx=0x7fa4ec01c970 comp rx=0 tx=0).stop
2026-03-10T10:48:01.908 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 -- 192.168.123.104:0/607118619 shutdown_connections
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 --2- 192.168.123.104:0/607118619 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fa4f0101390 0x7fa4f010f820 unknown :-1 s=CLOSED pgs=3398 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 --2- 192.168.123.104:0/607118619 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4f01009d0 0x7fa4f0100e50 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 --2- 192.168.123.104:0/607118619 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa4f0108b40 0x7fa4f0108f20 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 -- 192.168.123.104:0/607118619 >> 192.168.123.104:0/607118619 conn(0x7fa4f00fc820 msgr2=0x7fa4f00fec40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 -- 192.168.123.104:0/607118619 shutdown_connections
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 -- 192.168.123.104:0/607118619 wait complete.
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 Processor -- start
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 -- start start
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa4f01009d0 0x7fa4f019f0d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fa4f0101390 0x7fa4f019f610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4f0108b40 0x7fa4f01a39a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fa4f0103db0 con 0x7fa4f01009d0
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fa4f0103c30 con 0x7fa4f0108b40
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f8262640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fa4f0103f30 con 0x7fa4f0101390
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f57d6640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fa4f0101390 0x7fa4f019f610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f57d6640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fa4f0101390 0x7fa4f019f610 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:33784/0 (socket says 192.168.123.104:33784)
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f57d6640 1 -- 192.168.123.104:0/2499768475 learned_addr learned my addr 192.168.123.104:0/2499768475 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f67d8640 1 --2- 192.168.123.104:0/2499768475 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4f0108b40 0x7fa4f01a39a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f57d6640 1 -- 192.168.123.104:0/2499768475 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4f0108b40 msgr2=0x7fa4f01a39a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f57d6640 1 --2- 192.168.123.104:0/2499768475 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4f0108b40 0x7fa4f01a39a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f57d6640 1 -- 192.168.123.104:0/2499768475 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa4f01009d0 msgr2=0x7fa4f019f0d0 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f57d6640 1 --2- 192.168.123.104:0/2499768475 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa4f01009d0 0x7fa4f019f0d0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:48:01.909 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f57d6640 1 -- 192.168.123.104:0/2499768475 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fa4f01a4120 con 0x7fa4f0101390
2026-03-10T10:48:01.910 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4f57d6640 1 --2- 192.168.123.104:0/2499768475 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fa4f0101390 0x7fa4f019f610 secure :-1 s=READY pgs=3399 cs=0 l=1 rev1=1 crypto rx=0x7fa4e000ea30 tx=0x7fa4e000eef0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:48:01.910 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.904+0000 7fa4deffd640 1 -- 192.168.123.104:0/2499768475 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa4e000ce40 con 0x7fa4f0101390
2026-03-10T10:48:01.910 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.908+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fa4f01a4410 con 0x7fa4f0101390
2026-03-10T10:48:01.922 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.908+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7fa4f01abc50 con 0x7fa4f0101390
2026-03-10T10:48:01.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.908+0000 7fa4deffd640 1 -- 192.168.123.104:0/2499768475 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fa4e0004540 con 0x7fa4f0101390
2026-03-10T10:48:01.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.908+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fa4b8005190 con 0x7fa4f0101390
2026-03-10T10:48:01.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.908+0000 7fa4deffd640 1 -- 192.168.123.104:0/2499768475 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fa4e0002e60 con 0x7fa4f0101390
2026-03-10T10:48:01.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.908+0000 7fa4deffd640 1 -- 192.168.123.104:0/2499768475 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fa4e0010750 con 0x7fa4f0101390
2026-03-10T10:48:01.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.908+0000 7fa4deffd640 1 --2- 192.168.123.104:0/2499768475 >> v2:192.168.123.104:6800/3326026257 conn(0x7fa4c8077700 0x7fa4c8079bc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:48:01.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.908+0000 7fa4deffd640 1 -- 192.168.123.104:0/2499768475 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(774..774 src has 251..774) ==== 8124+0+0 (secure 0 0 0) 0x7fa4e009ba00 con 0x7fa4f0101390
2026-03-10T10:48:01.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.912+0000 7fa4deffd640 1 -- 192.168.123.104:0/2499768475 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fa4e00680d0 con 0x7fa4f0101390
2026-03-10T10:48:01.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.912+0000 7fa4f5fd7640 1 --2- 192.168.123.104:0/2499768475 >> v2:192.168.123.104:6800/3326026257 conn(0x7fa4c8077700 0x7fa4c8079bc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:48:01.923 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:01.912+0000 7fa4f5fd7640 1 --2- 192.168.123.104:0/2499768475 >> v2:192.168.123.104:6800/3326026257 conn(0x7fa4c8077700 0x7fa4c8079bc0 secure :-1 s=READY pgs=4275 cs=0 l=1 rev1=1 crypto rx=0x7fa4e4009710 tx=0x7fa4e4009290 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:48:02.006 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:02.004+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"} v 0) -- 0x7fa4b8005480 con 0x7fa4f0101390
2026-03-10T10:48:02.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:01 vm04 bash[20742]: audit 2026-03-10T10:48:00.777221+0000 mon.a (mon.0) 3884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:01 vm04 bash[20742]: audit 2026-03-10T10:48:00.777221+0000 mon.a (mon.0) 3884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:01 vm04 bash[20742]: cluster 2026-03-10T10:48:00.780591+0000 mon.a (mon.0) 3885 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:01 vm04 bash[20742]: cluster 2026-03-10T10:48:00.780591+0000 mon.a (mon.0) 3885 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:01 vm04 bash[20742]: cluster 2026-03-10T10:48:00.826975+0000 mgr.y (mgr.24422) 1269 : cluster [DBG] pgmap v1711: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 903 B/s wr, 1 op/s
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:01 vm04 bash[20742]: cluster 2026-03-10T10:48:00.826975+0000 mgr.y (mgr.24422) 1269 : cluster [DBG] pgmap v1711: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 903 B/s wr, 1 op/s
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:01 vm04 bash[20742]: audit 2026-03-10T10:48:00.844294+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:01 vm04 bash[20742]: audit 2026-03-10T10:48:00.844294+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:01 vm04 bash[20742]: audit 2026-03-10T10:48:00.844977+0000 mon.a (mon.0) 3886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:01 vm04 bash[20742]: audit 2026-03-10T10:48:00.844977+0000 mon.a (mon.0) 3886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:01 vm04 bash[28289]: audit 2026-03-10T10:48:00.777221+0000 mon.a (mon.0) 3884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:01 vm04 bash[28289]: audit 2026-03-10T10:48:00.777221+0000 mon.a (mon.0) 3884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:01 vm04 bash[28289]: cluster 2026-03-10T10:48:00.780591+0000 mon.a (mon.0) 3885 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:01 vm04 bash[28289]: cluster 2026-03-10T10:48:00.780591+0000 mon.a (mon.0) 3885 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:01 vm04 bash[28289]: cluster 2026-03-10T10:48:00.826975+0000 mgr.y (mgr.24422) 1269 : cluster [DBG] pgmap v1711: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 903 B/s wr, 1 op/s
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:01 vm04 bash[28289]: cluster 2026-03-10T10:48:00.826975+0000 mgr.y (mgr.24422) 1269 : cluster [DBG] pgmap v1711: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 903 B/s wr, 1 op/s
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:01 vm04 bash[28289]: audit 2026-03-10T10:48:00.844294+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:01 vm04 bash[28289]: audit 2026-03-10T10:48:00.844294+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:01 vm04 bash[28289]: audit 2026-03-10T10:48:00.844977+0000 mon.a (mon.0) 3886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:01 vm04 bash[28289]: audit 2026-03-10T10:48:00.844977+0000 mon.a (mon.0) 3886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:01 vm07 bash[23367]: audit 2026-03-10T10:48:00.777221+0000 mon.a (mon.0) 3884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:01 vm07 bash[23367]: audit 2026-03-10T10:48:00.777221+0000 mon.a (mon.0) 3884 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:01 vm07 bash[23367]: cluster 2026-03-10T10:48:00.780591+0000 mon.a (mon.0) 3885 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in
2026-03-10T10:48:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:01 vm07 bash[23367]: cluster 2026-03-10T10:48:00.780591+0000 mon.a (mon.0) 3885 : cluster [DBG] osdmap e773: 8 total, 8 up, 8 in
2026-03-10T10:48:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:01 vm07 bash[23367]: cluster 2026-03-10T10:48:00.826975+0000 mgr.y (mgr.24422) 1269 : cluster [DBG] pgmap v1711: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 903 B/s wr, 1 op/s
2026-03-10T10:48:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:01 vm07 bash[23367]: cluster 2026-03-10T10:48:00.826975+0000 mgr.y (mgr.24422) 1269 : cluster [DBG] pgmap v1711: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.5 KiB/s rd, 903 B/s wr, 1 op/s
2026-03-10T10:48:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:01 vm07 bash[23367]: audit 2026-03-10T10:48:00.844294+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:01 vm07 bash[23367]: audit 2026-03-10T10:48:00.844294+0000 mon.c (mon.2) 490 : audit [INF] from='client.? 192.168.123.104:0/4217010425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:01 vm07 bash[23367]: audit 2026-03-10T10:48:00.844977+0000 mon.a (mon.0) 3886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:01 vm07 bash[23367]: audit 2026-03-10T10:48:00.844977+0000 mon.a (mon.0) 3886 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]: dispatch
2026-03-10T10:48:02.859 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:02.856+0000 7fa4deffd640 1 -- 192.168.123.104:0/2499768475 <== mon.2 v2:192.168.123.104:3301/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool cc1e9eec-b4d8-493e-ae89-ce289b5abc51 v775) ==== 223+0+0 (secure 0 0 0) 0x7fa4e006cf80 con 0x7fa4f0101390
2026-03-10T10:48:02.910 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:02.908+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"} v 0) -- 0x7fa4b8004910 con 0x7fa4f0101390
2026-03-10T10:48:03.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:02 vm04 bash[28289]: audit 2026-03-10T10:48:01.824897+0000 mon.a (mon.0) 3887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:02 vm04 bash[28289]: audit 2026-03-10T10:48:01.824897+0000 mon.a (mon.0) 3887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:02 vm04 bash[28289]: cluster 2026-03-10T10:48:01.830644+0000 mon.a (mon.0) 3888 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:02 vm04 bash[28289]: cluster 2026-03-10T10:48:01.830644+0000 mon.a (mon.0) 3888 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:02 vm04 bash[28289]: audit 2026-03-10T10:48:02.008749+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:02 vm04 bash[28289]: audit 2026-03-10T10:48:02.008749+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:02 vm04 bash[28289]: audit 2026-03-10T10:48:02.009175+0000 mon.a (mon.0) 3889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:02 vm04 bash[28289]: audit 2026-03-10T10:48:02.009175+0000 mon.a (mon.0) 3889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:02 vm04 bash[20742]: audit 2026-03-10T10:48:01.824897+0000 mon.a (mon.0) 3887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:02 vm04 bash[20742]: audit 2026-03-10T10:48:01.824897+0000 mon.a (mon.0) 3887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:02 vm04 bash[20742]: cluster 2026-03-10T10:48:01.830644+0000 mon.a (mon.0) 3888 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:02 vm04 bash[20742]: cluster 2026-03-10T10:48:01.830644+0000 mon.a (mon.0) 3888 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:02 vm04 bash[20742]: audit 2026-03-10T10:48:02.008749+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:02 vm04 bash[20742]: audit 2026-03-10T10:48:02.008749+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:02 vm04 bash[20742]: audit 2026-03-10T10:48:02.009175+0000 mon.a (mon.0) 3889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:02 vm04 bash[20742]: audit 2026-03-10T10:48:02.009175+0000 mon.a (mon.0) 3889 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:03.203 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:48:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:48:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:48:03.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:02 vm07 bash[23367]: audit 2026-03-10T10:48:01.824897+0000 mon.a (mon.0) 3887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:03.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:02 vm07 bash[23367]: audit 2026-03-10T10:48:01.824897+0000 mon.a (mon.0) 3887 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "app": "rados"}]': finished
2026-03-10T10:48:03.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:02 vm07 bash[23367]: cluster 2026-03-10T10:48:01.830644+0000 mon.a (mon.0) 3888 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in
2026-03-10T10:48:03.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:02 vm07 bash[23367]: cluster 2026-03-10T10:48:01.830644+0000 mon.a (mon.0) 3888 : cluster [DBG] osdmap e774: 8 total, 8 up, 8 in
2026-03-10T10:48:03.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:02 vm07 bash[23367]: audit 2026-03-10T10:48:02.008749+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:03.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:02 vm07 bash[23367]: audit 2026-03-10T10:48:02.008749+0000 mon.c (mon.2) 491 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:03.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:02 vm07 bash[23367]: audit 2026-03-10T10:48:02.009175+0000 mon.a (mon.0) 3889 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:48:03.887 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4deffd640 1 -- 192.168.123.104:0/2499768475 <== mon.2 v2:192.168.123.104:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]=0 set-quota max_objects = 10 for pool cc1e9eec-b4d8-493e-ae89-ce289b5abc51 v776) ==== 223+0+0 (secure 0 0 0) 0x7fa4e00600c0 con 0x7fa4f0101390 2026-03-10T10:48:03.887 INFO:tasks.workunit.client.0.vm04.stderr:set-quota max_objects = 10 for pool cc1e9eec-b4d8-493e-ae89-ce289b5abc51 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 >> v2:192.168.123.104:6800/3326026257 conn(0x7fa4c8077700 msgr2=0x7fa4c8079bc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 --2- 192.168.123.104:0/2499768475 >> v2:192.168.123.104:6800/3326026257 conn(0x7fa4c8077700 0x7fa4c8079bc0 secure :-1 s=READY pgs=4275 cs=0 l=1 rev1=1 crypto rx=0x7fa4e4009710 tx=0x7fa4e4009290 comp rx=0 tx=0).stop 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fa4f0101390 msgr2=0x7fa4f019f610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 --2- 192.168.123.104:0/2499768475 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fa4f0101390 0x7fa4f019f610 secure :-1 s=READY pgs=3399 cs=0 l=1 rev1=1 crypto rx=0x7fa4e000ea30 tx=0x7fa4e000eef0 comp rx=0 tx=0).stop 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 shutdown_connections 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 --2- 192.168.123.104:0/2499768475 >> v2:192.168.123.104:6800/3326026257 conn(0x7fa4c8077700 0x7fa4c8079bc0 unknown :-1 s=CLOSED pgs=4275 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 --2- 192.168.123.104:0/2499768475 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fa4f0108b40 0x7fa4f01a39a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 --2- 192.168.123.104:0/2499768475 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fa4f0101390 0x7fa4f019f610 unknown :-1 s=CLOSED pgs=3399 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 --2- 192.168.123.104:0/2499768475 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fa4f01009d0 0x7fa4f019f0d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 
>> 192.168.123.104:0/2499768475 conn(0x7fa4f00fc820 msgr2=0x7fa4f00fd8e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 shutdown_connections 2026-03-10T10:48:03.889 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:48:03.884+0000 7fa4f8262640 1 -- 192.168.123.104:0/2499768475 wait complete. 2026-03-10T10:48:03.901 INFO:tasks.workunit.client.0.vm04.stderr:+ sleep 30 2026-03-10T10:48:04.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: cluster 2026-03-10T10:48:02.827274+0000 mgr.y (mgr.24422) 1270 : cluster [DBG] pgmap v1713: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: cluster 2026-03-10T10:48:02.827274+0000 mgr.y (mgr.24422) 1270 : cluster [DBG] pgmap v1713: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:02.853190+0000 mon.a (mon.0) 3890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]': finished 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:02.853190+0000 mon.a (mon.0) 3890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]': finished 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: cluster 2026-03-10T10:48:02.855745+0000 mon.a (mon.0) 3891 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: cluster 2026-03-10T10:48:02.855745+0000 mon.a (mon.0) 3891 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:02.913017+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:02.913017+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:02.913364+0000 mon.a (mon.0) 3892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:02.913364+0000 mon.a (mon.0) 3892 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:03.281236+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:03.281236+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:03.619457+0000 mon.a (mon.0) 3894 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:03.619457+0000 mon.a (mon.0) 3894 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:03.620053+0000 mon.a (mon.0) 3895 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:03.620053+0000 mon.a (mon.0) 3895 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:03.625539+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:03 vm04 bash[20742]: audit 2026-03-10T10:48:03.625539+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: cluster 2026-03-10T10:48:02.827274+0000 mgr.y (mgr.24422) 1270 : cluster [DBG] pgmap v1713: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: cluster 2026-03-10T10:48:02.827274+0000 mgr.y (mgr.24422) 1270 : cluster [DBG] pgmap v1713: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:02.853190+0000 mon.a (mon.0) 3890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]': finished 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:02.853190+0000 mon.a (mon.0) 3890 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]': finished 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: cluster 2026-03-10T10:48:02.855745+0000 mon.a (mon.0) 3891 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: cluster 2026-03-10T10:48:02.855745+0000 mon.a (mon.0) 3891 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:02.913017+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:02.913017+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:02.913364+0000 mon.a (mon.0) 3892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:02.913364+0000 mon.a (mon.0) 3892 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:03.281236+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:03.281236+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:03.619457+0000 mon.a (mon.0) 3894 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:03.619457+0000 mon.a (mon.0) 3894 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:03.620053+0000 mon.a (mon.0) 3895 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:03.620053+0000 mon.a (mon.0) 3895 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:03.625539+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:48:04.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:03 vm04 bash[28289]: audit 2026-03-10T10:48:03.625539+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: cluster 2026-03-10T10:48:02.827274+0000 mgr.y (mgr.24422) 1270 : cluster [DBG] pgmap v1713: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: cluster 2026-03-10T10:48:02.827274+0000 mgr.y (mgr.24422) 1270 : cluster [DBG] pgmap v1713: 188 pgs: 12 unknown, 176 active+clean; 482 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail 2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: audit 2026-03-10T10:48:02.853190+0000 mon.a (mon.0) 3890 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]': finished 2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: audit 2026-03-10T10:48:02.853190+0000 mon.a (mon.0) 3890 : audit [INF] from='client.? 
2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: cluster 2026-03-10T10:48:02.855745+0000 mon.a (mon.0) 3891 : cluster [DBG] osdmap e775: 8 total, 8 up, 8 in
2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: audit 2026-03-10T10:48:02.913017+0000 mon.c (mon.2) 492 : audit [INF] from='client.? 192.168.123.104:0/2499768475' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: audit 2026-03-10T10:48:02.913364+0000 mon.a (mon.0) 3892 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]: dispatch
2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: audit 2026-03-10T10:48:03.281236+0000 mon.a (mon.0) 3893 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: audit 2026-03-10T10:48:03.619457+0000 mon.a (mon.0) 3894 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: audit 2026-03-10T10:48:03.620053+0000 mon.a (mon.0) 3895 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:48:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:03 vm07 bash[23367]: audit 2026-03-10T10:48:03.625539+0000 mon.a (mon.0) 3896 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:48:05.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:04 vm04 bash[20742]: audit 2026-03-10T10:48:03.883208+0000 mon.a (mon.0) 3897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]': finished
2026-03-10T10:48:05.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:04 vm04 bash[20742]: cluster 2026-03-10T10:48:03.890453+0000 mon.a (mon.0) 3898 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in
2026-03-10T10:48:05.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:04 vm04 bash[28289]: audit 2026-03-10T10:48:03.883208+0000 mon.a (mon.0) 3897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]': finished
2026-03-10T10:48:05.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:04 vm04 bash[28289]: cluster 2026-03-10T10:48:03.890453+0000 mon.a (mon.0) 3898 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in
2026-03-10T10:48:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:04 vm07 bash[23367]: audit 2026-03-10T10:48:03.883208+0000 mon.a (mon.0) 3897 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "field": "max_objects", "val": "10"}]': finished
2026-03-10T10:48:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:04 vm07 bash[23367]: cluster 2026-03-10T10:48:03.890453+0000 mon.a (mon.0) 3898 : cluster [DBG] osdmap e776: 8 total, 8 up, 8 in
2026-03-10T10:48:06.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:05 vm04 bash[28289]: cluster 2026-03-10T10:48:04.827618+0000 mgr.y (mgr.24422) 1271 : cluster [DBG] pgmap v1716: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 758 B/s wr, 1 op/s
2026-03-10T10:48:06.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:05 vm04 bash[20742]: cluster 2026-03-10T10:48:04.827618+0000 mgr.y (mgr.24422) 1271 : cluster [DBG] pgmap v1716: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 758 B/s wr, 1 op/s
2026-03-10T10:48:06.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:05 vm07 bash[23367]: cluster 2026-03-10T10:48:04.827618+0000 mgr.y (mgr.24422) 1271 : cluster [DBG] pgmap v1716: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 758 B/s wr, 1 op/s
2026-03-10T10:48:08.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:07 vm04 bash[20742]: cluster 2026-03-10T10:48:06.827874+0000 mgr.y (mgr.24422) 1272 : cluster [DBG] pgmap v1717: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T10:48:08.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:07 vm04 bash[28289]: cluster 2026-03-10T10:48:06.827874+0000 mgr.y (mgr.24422) 1272 : cluster [DBG] pgmap v1717: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T10:48:08.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:07 vm07 bash[23367]: cluster 2026-03-10T10:48:06.827874+0000 mgr.y (mgr.24422) 1272 : cluster [DBG] pgmap v1717: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 511 B/s wr, 0 op/s
2026-03-10T10:48:10.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:09 vm04 bash[20742]: cluster 2026-03-10T10:48:08.828469+0000 mgr.y (mgr.24422) 1273 : cluster [DBG] pgmap v1718: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 438 B/s wr, 1 op/s
2026-03-10T10:48:10.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:09 vm04 bash[28289]: cluster 2026-03-10T10:48:08.828469+0000 mgr.y (mgr.24422) 1273 : cluster [DBG] pgmap v1718: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 438 B/s wr, 1 op/s
2026-03-10T10:48:10.267 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:48:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:48:10.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:09 vm07 bash[23367]: cluster 2026-03-10T10:48:08.828469+0000 mgr.y (mgr.24422) 1273 : cluster [DBG] pgmap v1718: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 438 B/s wr, 1 op/s
2026-03-10T10:48:11.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:10 vm04 bash[20742]: audit 2026-03-10T10:48:09.797066+0000 mgr.y (mgr.24422) 1274 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:11.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:10 vm04 bash[28289]: audit 2026-03-10T10:48:09.797066+0000 mgr.y (mgr.24422) 1274 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:11.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:10 vm07 bash[23367]: audit 2026-03-10T10:48:09.797066+0000 mgr.y (mgr.24422) 1274 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:12.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:11 vm04 bash[28289]: cluster 2026-03-10T10:48:10.828826+0000 mgr.y (mgr.24422) 1275 : cluster [DBG] pgmap v1719: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s
2026-03-10T10:48:12.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:11 vm04 bash[20742]: cluster 2026-03-10T10:48:10.828826+0000 mgr.y (mgr.24422) 1275 : cluster [DBG] pgmap v1719: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s
2026-03-10T10:48:12.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:11 vm07 bash[23367]: cluster 2026-03-10T10:48:10.828826+0000 mgr.y (mgr.24422) 1275 : cluster [DBG] pgmap v1719: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s
2026-03-10T10:48:13.453 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:48:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:48:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:48:14.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:13 vm04 bash[20742]: cluster 2026-03-10T10:48:12.829158+0000 mgr.y (mgr.24422) 1276 : cluster [DBG] pgmap v1720: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 307 B/s wr, 1 op/s
2026-03-10T10:48:14.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:13 vm04 bash[20742]: audit 2026-03-10T10:48:13.686915+0000 mon.a (mon.0) 3899 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:13 vm04 bash[28289]: cluster 2026-03-10T10:48:12.829158+0000 mgr.y (mgr.24422) 1276 : cluster [DBG] pgmap v1720: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 307 B/s wr, 1 op/s
2026-03-10T10:48:14.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:13 vm04 bash[28289]: audit 2026-03-10T10:48:13.686915+0000 mon.a (mon.0) 3899 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:14.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:13 vm07 bash[23367]: cluster 2026-03-10T10:48:12.829158+0000 mgr.y (mgr.24422) 1276 : cluster [DBG] pgmap v1720: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 307 B/s wr, 1 op/s
2026-03-10T10:48:14.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:13 vm07 bash[23367]: audit 2026-03-10T10:48:13.686915+0000 mon.a (mon.0) 3899 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:16.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:15 vm04 bash[20742]: cluster 2026-03-10T10:48:14.829892+0000 mgr.y (mgr.24422) 1277 : cluster [DBG] pgmap v1721: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 935 B/s rd, 0 op/s
2026-03-10T10:48:16.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:15 vm04 bash[28289]: cluster 2026-03-10T10:48:14.829892+0000 mgr.y (mgr.24422) 1277 : cluster [DBG] pgmap v1721: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 935 B/s rd, 0 op/s
2026-03-10T10:48:16.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:15 vm07 bash[23367]: cluster 2026-03-10T10:48:14.829892+0000 mgr.y (mgr.24422) 1277 : cluster [DBG] pgmap v1721: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 935 B/s rd, 0 op/s
2026-03-10T10:48:18.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:17 vm04 bash[20742]: cluster 2026-03-10T10:48:16.830250+0000 mgr.y (mgr.24422) 1278 : cluster [DBG] pgmap v1722: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:18.203 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:17 vm04 bash[28289]: cluster 2026-03-10T10:48:16.830250+0000 mgr.y (mgr.24422) 1278 : cluster [DBG] pgmap v1722: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:18.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:17 vm07 bash[23367]: cluster 2026-03-10T10:48:16.830250+0000 mgr.y (mgr.24422) 1278 : cluster [DBG] pgmap v1722: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:20.267 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:48:19 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:48:20.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:20 vm07 bash[23367]: cluster 2026-03-10T10:48:18.830708+0000 mgr.y (mgr.24422) 1279 : cluster [DBG] pgmap v1723: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:48:20.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:19 vm04 bash[20742]: cluster 2026-03-10T10:48:18.830708+0000 mgr.y (mgr.24422) 1279 : cluster [DBG] pgmap v1723: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:48:20.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:20 vm04 bash[28289]: cluster 2026-03-10T10:48:18.830708+0000 mgr.y (mgr.24422) 1279 : cluster [DBG] pgmap v1723: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:48:21.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:21 vm07 bash[23367]: audit 2026-03-10T10:48:19.802742+0000 mgr.y (mgr.24422) 1280 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:21.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:21 vm04 bash[20742]: audit 2026-03-10T10:48:19.802742+0000 mgr.y (mgr.24422) 1280 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:21.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:21 vm04 bash[28289]: audit 2026-03-10T10:48:19.802742+0000 mgr.y (mgr.24422) 1280 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:22.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:22 vm07 bash[23367]: cluster 2026-03-10T10:48:20.830943+0000 mgr.y (mgr.24422) 1281 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:22.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:22 vm04 bash[20742]: cluster 2026-03-10T10:48:20.830943+0000 mgr.y (mgr.24422) 1281 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:22.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:22 vm04 bash[28289]: cluster 2026-03-10T10:48:20.830943+0000 mgr.y (mgr.24422) 1281 : cluster [DBG] pgmap v1724: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:23.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:48:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:48:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:48:24.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:24 vm07 bash[23367]: cluster 2026-03-10T10:48:22.831377+0000 mgr.y (mgr.24422) 1282 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:24.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:24 vm04 bash[20742]: cluster 2026-03-10T10:48:22.831377+0000 mgr.y (mgr.24422) 1282 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:24.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:24 vm04 bash[28289]: cluster 2026-03-10T10:48:22.831377+0000 mgr.y (mgr.24422) 1282 : cluster [DBG] pgmap v1725: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:26.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:26 vm04 bash[28289]: cluster 2026-03-10T10:48:24.832075+0000 mgr.y (mgr.24422) 1283 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:48:26.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:26 vm04 bash[20742]: cluster 2026-03-10T10:48:24.832075+0000 mgr.y (mgr.24422) 1283 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:48:26.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:26 vm07 bash[23367]: cluster 2026-03-10T10:48:24.832075+0000 mgr.y (mgr.24422) 1283 : cluster [DBG] pgmap v1726: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:48:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:28 vm07 bash[23367]: cluster 2026-03-10T10:48:26.832454+0000 mgr.y (mgr.24422) 1284 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:28.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:28 vm04 bash[28289]: cluster 2026-03-10T10:48:26.832454+0000 mgr.y (mgr.24422) 1284 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:28.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:28 vm04 bash[20742]: cluster 2026-03-10T10:48:26.832454+0000 mgr.y (mgr.24422) 1284 : cluster [DBG] pgmap v1727: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:29.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:29 vm07 bash[23367]: audit 2026-03-10T10:48:28.692767+0000 mon.a (mon.0) 3900 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:29.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:29 vm04 bash[28289]: audit 2026-03-10T10:48:28.692767+0000 mon.a (mon.0) 3900 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:29.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:29 vm04 bash[20742]: audit 2026-03-10T10:48:28.692767+0000 mon.a (mon.0) 3900 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:48:30.250 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:48:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:48:30.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:30 vm07 bash[23367]: cluster 2026-03-10T10:48:28.833077+0000 mgr.y (mgr.24422) 1285 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:48:30.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:30 vm04 bash[28289]: cluster 2026-03-10T10:48:28.833077+0000 mgr.y (mgr.24422) 1285 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:48:30.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:30 vm04 bash[20742]: cluster 2026-03-10T10:48:28.833077+0000 mgr.y (mgr.24422) 1285 : cluster [DBG] pgmap v1728: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:48:31.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:31 vm07 bash[23367]: audit 2026-03-10T10:48:29.813485+0000 mgr.y (mgr.24422) 1286 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:31.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:31 vm04 bash[28289]: audit 2026-03-10T10:48:29.813485+0000 mgr.y (mgr.24422) 1286 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:31.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:31 vm04 bash[20742]: audit 2026-03-10T10:48:29.813485+0000 mgr.y (mgr.24422) 1286 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:48:32.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:32 vm04 bash[28289]: cluster 2026-03-10T10:48:30.833358+0000 mgr.y (mgr.24422) 1287 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:32.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:32 vm04 bash[20742]: cluster 2026-03-10T10:48:30.833358+0000 mgr.y (mgr.24422) 1287 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:32.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:32 vm07 bash[23367]: cluster 2026-03-10T10:48:30.833358+0000 mgr.y (mgr.24422) 1287 : cluster [DBG] pgmap v1729: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:48:33.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:48:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:48:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:48:33.902 INFO:tasks.workunit.client.0.vm04.stderr:+ seq 1 10
2026-03-10T10:48:33.902 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p cc1e9eec-b4d8-493e-ae89-ce289b5abc51 put obj1 /etc/passwd
2026-03-10T10:48:33.927 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p cc1e9eec-b4d8-493e-ae89-ce289b5abc51 put obj2 /etc/passwd
2026-03-10T10:48:33.954 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p cc1e9eec-b4d8-493e-ae89-ce289b5abc51 put obj3 /etc/passwd
2026-03-10T10:48:33.978 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 
cc1e9eec-b4d8-493e-ae89-ce289b5abc51 put obj4 /etc/passwd 2026-03-10T10:48:34.002 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p cc1e9eec-b4d8-493e-ae89-ce289b5abc51 put obj5 /etc/passwd 2026-03-10T10:48:34.026 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p cc1e9eec-b4d8-493e-ae89-ce289b5abc51 put obj6 /etc/passwd 2026-03-10T10:48:34.053 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p cc1e9eec-b4d8-493e-ae89-ce289b5abc51 put obj7 /etc/passwd 2026-03-10T10:48:34.078 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p cc1e9eec-b4d8-493e-ae89-ce289b5abc51 put obj8 /etc/passwd 2026-03-10T10:48:34.102 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p cc1e9eec-b4d8-493e-ae89-ce289b5abc51 put obj9 /etc/passwd 2026-03-10T10:48:34.128 INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p cc1e9eec-b4d8-493e-ae89-ce289b5abc51 put obj10 /etc/passwd 2026-03-10T10:48:34.154 INFO:tasks.workunit.client.0.vm04.stderr:+ sleep 30 2026-03-10T10:48:34.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:34 vm04 bash[20742]: cluster 2026-03-10T10:48:32.833651+0000 mgr.y (mgr.24422) 1288 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:34.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:34 vm04 bash[20742]: cluster 2026-03-10T10:48:32.833651+0000 mgr.y (mgr.24422) 1288 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:34 vm04 bash[28289]: cluster 2026-03-10T10:48:32.833651+0000 mgr.y (mgr.24422) 1288 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:34.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:34 vm04 bash[28289]: cluster 2026-03-10T10:48:32.833651+0000 mgr.y (mgr.24422) 1288 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:34 vm07 bash[23367]: cluster 2026-03-10T10:48:32.833651+0000 mgr.y (mgr.24422) 1288 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:34.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:34 vm07 bash[23367]: cluster 2026-03-10T10:48:32.833651+0000 mgr.y (mgr.24422) 1288 : cluster [DBG] pgmap v1730: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:36.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:36 vm04 bash[20742]: cluster 2026-03-10T10:48:34.834213+0000 mgr.y (mgr.24422) 1289 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:36.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:36 vm04 bash[20742]: cluster 2026-03-10T10:48:34.834213+0000 mgr.y (mgr.24422) 1289 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:36.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:36 vm04 bash[28289]: cluster 2026-03-10T10:48:34.834213+0000 mgr.y (mgr.24422) 1289 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
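
The workunit trace above fills pool cc1e9eec-b4d8-493e-ae89-ce289b5abc51 with ten objects and then sleeps while the mons re-evaluate the pool quota. A minimal shell sketch of the same exercise, assuming an admin node with a client.admin keyring; the pool name "quota-demo" and the pg count are illustrative, not taken from this run:

  # create a throwaway pool and cap it at 10 objects
  ceph osd pool create quota-demo 8
  ceph osd pool set-quota quota-demo max_objects 10
  # fill it the same way the workunit does
  for i in $(seq 1 10); do rados -p quota-demo put obj$i /etc/passwd; done
  # the mon notices the quota asynchronously, so give it a moment
  sleep 30
  ceph health detail | grep -i pool_full
  ceph osd pool get-quota quota-demo

After the sleep, the health output should report the POOL_FULL warning that appears in the mon journals further down in this log.
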
2026-03-10T10:48:36.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:36 vm04 bash[28289]: cluster 2026-03-10T10:48:34.834213+0000 mgr.y (mgr.24422) 1289 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:36.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:36 vm07 bash[23367]: cluster 2026-03-10T10:48:34.834213+0000 mgr.y (mgr.24422) 1289 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:36.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:36 vm07 bash[23367]: cluster 2026-03-10T10:48:34.834213+0000 mgr.y (mgr.24422) 1289 : cluster [DBG] pgmap v1731: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:38.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:38 vm04 bash[20742]: cluster 2026-03-10T10:48:36.834463+0000 mgr.y (mgr.24422) 1290 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:38.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:38 vm04 bash[20742]: cluster 2026-03-10T10:48:36.834463+0000 mgr.y (mgr.24422) 1290 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:38.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:38 vm04 bash[28289]: cluster 2026-03-10T10:48:36.834463+0000 mgr.y (mgr.24422) 1290 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:38.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:38 vm04 bash[28289]: cluster 2026-03-10T10:48:36.834463+0000 mgr.y (mgr.24422) 1290 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:38.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:38 vm07 bash[23367]: cluster 2026-03-10T10:48:36.834463+0000 mgr.y (mgr.24422) 1290 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:38.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:38 vm07 bash[23367]: cluster 2026-03-10T10:48:36.834463+0000 mgr.y (mgr.24422) 1290 : cluster [DBG] pgmap v1732: 188 pgs: 188 active+clean; 484 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:40.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:48:39 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:48:40.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:40 vm04 bash[20742]: cluster 2026-03-10T10:48:38.835008+0000 mgr.y (mgr.24422) 1291 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T10:48:40.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:40 vm04 bash[20742]: cluster 2026-03-10T10:48:38.835008+0000 mgr.y (mgr.24422) 1291 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T10:48:40.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:40 vm04 bash[28289]: cluster 2026-03-10T10:48:38.835008+0000 mgr.y (mgr.24422) 1291 : cluster [DBG] pgmap v1733: 188 pgs: 
188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T10:48:40.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:40 vm04 bash[28289]: cluster 2026-03-10T10:48:38.835008+0000 mgr.y (mgr.24422) 1291 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T10:48:40.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:40 vm07 bash[23367]: cluster 2026-03-10T10:48:38.835008+0000 mgr.y (mgr.24422) 1291 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T10:48:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:40 vm07 bash[23367]: cluster 2026-03-10T10:48:38.835008+0000 mgr.y (mgr.24422) 1291 : cluster [DBG] pgmap v1733: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 2.5 KiB/s wr, 2 op/s 2026-03-10T10:48:41.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:41 vm04 bash[28289]: audit 2026-03-10T10:48:39.822772+0000 mgr.y (mgr.24422) 1292 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:41.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:41 vm04 bash[28289]: audit 2026-03-10T10:48:39.822772+0000 mgr.y (mgr.24422) 1292 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:41 vm04 bash[20742]: audit 2026-03-10T10:48:39.822772+0000 mgr.y (mgr.24422) 1292 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:41 vm04 bash[20742]: audit 2026-03-10T10:48:39.822772+0000 mgr.y (mgr.24422) 1292 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:41 vm07 bash[23367]: audit 2026-03-10T10:48:39.822772+0000 mgr.y (mgr.24422) 1292 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:41 vm07 bash[23367]: audit 2026-03-10T10:48:39.822772+0000 mgr.y (mgr.24422) 1292 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:42 vm07 bash[23367]: cluster 2026-03-10T10:48:40.835244+0000 mgr.y (mgr.24422) 1293 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T10:48:42.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:42 vm07 bash[23367]: cluster 2026-03-10T10:48:40.835244+0000 mgr.y (mgr.24422) 1293 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T10:48:42.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:42 vm04 bash[28289]: cluster 2026-03-10T10:48:40.835244+0000 mgr.y (mgr.24422) 1293 : cluster [DBG] pgmap v1734: 188 pgs: 188 
active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T10:48:42.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:42 vm04 bash[28289]: cluster 2026-03-10T10:48:40.835244+0000 mgr.y (mgr.24422) 1293 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T10:48:42.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:42 vm04 bash[20742]: cluster 2026-03-10T10:48:40.835244+0000 mgr.y (mgr.24422) 1293 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T10:48:42.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:42 vm04 bash[20742]: cluster 2026-03-10T10:48:40.835244+0000 mgr.y (mgr.24422) 1293 : cluster [DBG] pgmap v1734: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 2.5 KiB/s wr, 1 op/s 2026-03-10T10:48:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:48:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:48:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:48:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:43 vm07 bash[23367]: cluster 2026-03-10T10:48:42.768459+0000 mon.a (mon.0) 3901 : cluster [WRN] pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' is full (reached quota's max_objects: 10) 2026-03-10T10:48:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:43 vm07 bash[23367]: cluster 2026-03-10T10:48:42.768459+0000 mon.a (mon.0) 3901 : cluster [WRN] pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' is full (reached quota's max_objects: 10) 2026-03-10T10:48:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:43 vm07 bash[23367]: cluster 2026-03-10T10:48:42.768638+0000 mon.a (mon.0) 3902 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T10:48:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:43 vm07 bash[23367]: cluster 2026-03-10T10:48:42.768638+0000 mon.a (mon.0) 3902 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T10:48:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:43 vm07 bash[23367]: cluster 2026-03-10T10:48:42.781169+0000 mon.a (mon.0) 3903 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T10:48:43.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:43 vm07 bash[23367]: cluster 2026-03-10T10:48:42.781169+0000 mon.a (mon.0) 3903 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T10:48:43.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:43 vm04 bash[28289]: cluster 2026-03-10T10:48:42.768459+0000 mon.a (mon.0) 3901 : cluster [WRN] pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' is full (reached quota's max_objects: 10) 2026-03-10T10:48:43.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:43 vm04 bash[28289]: cluster 2026-03-10T10:48:42.768459+0000 mon.a (mon.0) 3901 : cluster [WRN] pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' is full (reached quota's max_objects: 10) 2026-03-10T10:48:43.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:43 vm04 bash[28289]: cluster 2026-03-10T10:48:42.768638+0000 mon.a (mon.0) 3902 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T10:48:43.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:43 vm04 bash[28289]: cluster 2026-03-10T10:48:42.768638+0000 mon.a (mon.0) 3902 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T10:48:43.952 
INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:43 vm04 bash[28289]: cluster 2026-03-10T10:48:42.781169+0000 mon.a (mon.0) 3903 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T10:48:43.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:43 vm04 bash[28289]: cluster 2026-03-10T10:48:42.781169+0000 mon.a (mon.0) 3903 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T10:48:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:43 vm04 bash[20742]: cluster 2026-03-10T10:48:42.768459+0000 mon.a (mon.0) 3901 : cluster [WRN] pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' is full (reached quota's max_objects: 10) 2026-03-10T10:48:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:43 vm04 bash[20742]: cluster 2026-03-10T10:48:42.768459+0000 mon.a (mon.0) 3901 : cluster [WRN] pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' is full (reached quota's max_objects: 10) 2026-03-10T10:48:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:43 vm04 bash[20742]: cluster 2026-03-10T10:48:42.768638+0000 mon.a (mon.0) 3902 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T10:48:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:43 vm04 bash[20742]: cluster 2026-03-10T10:48:42.768638+0000 mon.a (mon.0) 3902 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL) 2026-03-10T10:48:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:43 vm04 bash[20742]: cluster 2026-03-10T10:48:42.781169+0000 mon.a (mon.0) 3903 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T10:48:43.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:43 vm04 bash[20742]: cluster 2026-03-10T10:48:42.781169+0000 mon.a (mon.0) 3903 : cluster [DBG] osdmap e777: 8 total, 8 up, 8 in 2026-03-10T10:48:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:44 vm07 bash[23367]: cluster 2026-03-10T10:48:42.835521+0000 mgr.y (mgr.24422) 1294 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:44 vm07 bash[23367]: cluster 2026-03-10T10:48:42.835521+0000 mgr.y (mgr.24422) 1294 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:44 vm07 bash[23367]: audit 2026-03-10T10:48:43.727625+0000 mon.a (mon.0) 3904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:48:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:44 vm07 bash[23367]: audit 2026-03-10T10:48:43.727625+0000 mon.a (mon.0) 3904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:48:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:44 vm07 bash[23367]: audit 2026-03-10T10:48:43.728619+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:44 vm07 bash[23367]: audit 2026-03-10T10:48:43.728619+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:44 vm04 bash[28289]: cluster 2026-03-10T10:48:42.835521+0000 mgr.y (mgr.24422) 
1294 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:44 vm04 bash[28289]: cluster 2026-03-10T10:48:42.835521+0000 mgr.y (mgr.24422) 1294 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:44 vm04 bash[28289]: audit 2026-03-10T10:48:43.727625+0000 mon.a (mon.0) 3904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:44 vm04 bash[28289]: audit 2026-03-10T10:48:43.727625+0000 mon.a (mon.0) 3904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:44 vm04 bash[28289]: audit 2026-03-10T10:48:43.728619+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:44 vm04 bash[28289]: audit 2026-03-10T10:48:43.728619+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:44 vm04 bash[20742]: cluster 2026-03-10T10:48:42.835521+0000 mgr.y (mgr.24422) 1294 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:44 vm04 bash[20742]: cluster 2026-03-10T10:48:42.835521+0000 mgr.y (mgr.24422) 1294 : cluster [DBG] pgmap v1736: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:44 vm04 bash[20742]: audit 2026-03-10T10:48:43.727625+0000 mon.a (mon.0) 3904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:44 vm04 bash[20742]: audit 2026-03-10T10:48:43.727625+0000 mon.a (mon.0) 3904 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:44 vm04 bash[20742]: audit 2026-03-10T10:48:43.728619+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:44.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:44 vm04 bash[20742]: audit 2026-03-10T10:48:43.728619+0000 mon.a (mon.0) 3905 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:46 vm07 bash[23367]: cluster 2026-03-10T10:48:44.836058+0000 mgr.y (mgr.24422) 1295 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:46.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:46 vm07 bash[23367]: cluster 
2026-03-10T10:48:44.836058+0000 mgr.y (mgr.24422) 1295 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:46.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:46 vm04 bash[28289]: cluster 2026-03-10T10:48:44.836058+0000 mgr.y (mgr.24422) 1295 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:46.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:46 vm04 bash[28289]: cluster 2026-03-10T10:48:44.836058+0000 mgr.y (mgr.24422) 1295 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:46.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:46 vm04 bash[20742]: cluster 2026-03-10T10:48:44.836058+0000 mgr.y (mgr.24422) 1295 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:46.953 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:46 vm04 bash[20742]: cluster 2026-03-10T10:48:44.836058+0000 mgr.y (mgr.24422) 1295 : cluster [DBG] pgmap v1737: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:48.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:48 vm07 bash[23367]: cluster 2026-03-10T10:48:46.836393+0000 mgr.y (mgr.24422) 1296 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:48.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:48 vm07 bash[23367]: cluster 2026-03-10T10:48:46.836393+0000 mgr.y (mgr.24422) 1296 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:48.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:48 vm04 bash[28289]: cluster 2026-03-10T10:48:46.836393+0000 mgr.y (mgr.24422) 1296 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:48.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:48 vm04 bash[28289]: cluster 2026-03-10T10:48:46.836393+0000 mgr.y (mgr.24422) 1296 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:48.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:48 vm04 bash[20742]: cluster 2026-03-10T10:48:46.836393+0000 mgr.y (mgr.24422) 1296 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:48.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:48 vm04 bash[20742]: cluster 2026-03-10T10:48:46.836393+0000 mgr.y (mgr.24422) 1296 : cluster [DBG] pgmap v1738: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s 2026-03-10T10:48:50.266 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:48:49 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:48:50.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:50 vm07 bash[23367]: cluster 2026-03-10T10:48:48.836890+0000 mgr.y (mgr.24422) 1297 : cluster [DBG] 
pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:50.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:50 vm07 bash[23367]: cluster 2026-03-10T10:48:48.836890+0000 mgr.y (mgr.24422) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:50.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:50 vm04 bash[28289]: cluster 2026-03-10T10:48:48.836890+0000 mgr.y (mgr.24422) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:50.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:50 vm04 bash[28289]: cluster 2026-03-10T10:48:48.836890+0000 mgr.y (mgr.24422) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:50.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:50 vm04 bash[20742]: cluster 2026-03-10T10:48:48.836890+0000 mgr.y (mgr.24422) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:50.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:50 vm04 bash[20742]: cluster 2026-03-10T10:48:48.836890+0000 mgr.y (mgr.24422) 1297 : cluster [DBG] pgmap v1739: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:51.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:51 vm07 bash[23367]: audit 2026-03-10T10:48:49.827617+0000 mgr.y (mgr.24422) 1298 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:51.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:51 vm07 bash[23367]: audit 2026-03-10T10:48:49.827617+0000 mgr.y (mgr.24422) 1298 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:51.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:51 vm04 bash[28289]: audit 2026-03-10T10:48:49.827617+0000 mgr.y (mgr.24422) 1298 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:51.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:51 vm04 bash[28289]: audit 2026-03-10T10:48:49.827617+0000 mgr.y (mgr.24422) 1298 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:51.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:51 vm04 bash[20742]: audit 2026-03-10T10:48:49.827617+0000 mgr.y (mgr.24422) 1298 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:51.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:51 vm04 bash[20742]: audit 2026-03-10T10:48:49.827617+0000 mgr.y (mgr.24422) 1298 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:48:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:52 vm07 bash[23367]: cluster 2026-03-10T10:48:50.837133+0000 mgr.y (mgr.24422) 1299 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 
B/s rd, 0 op/s 2026-03-10T10:48:52.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:52 vm07 bash[23367]: cluster 2026-03-10T10:48:50.837133+0000 mgr.y (mgr.24422) 1299 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:52.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:52 vm04 bash[28289]: cluster 2026-03-10T10:48:50.837133+0000 mgr.y (mgr.24422) 1299 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:52.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:52 vm04 bash[28289]: cluster 2026-03-10T10:48:50.837133+0000 mgr.y (mgr.24422) 1299 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:52.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:52 vm04 bash[20742]: cluster 2026-03-10T10:48:50.837133+0000 mgr.y (mgr.24422) 1299 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:52.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:52 vm04 bash[20742]: cluster 2026-03-10T10:48:50.837133+0000 mgr.y (mgr.24422) 1299 : cluster [DBG] pgmap v1740: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T10:48:53.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:48:53 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:48:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:48:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:54 vm07 bash[23367]: cluster 2026-03-10T10:48:52.837433+0000 mgr.y (mgr.24422) 1300 : cluster [DBG] pgmap v1741: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-10T10:48:54.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:54 vm07 bash[23367]: cluster 2026-03-10T10:48:52.837433+0000 mgr.y (mgr.24422) 1300 : cluster [DBG] pgmap v1741: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-10T10:48:54.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:54 vm04 bash[28289]: cluster 2026-03-10T10:48:52.837433+0000 mgr.y (mgr.24422) 1300 : cluster [DBG] pgmap v1741: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-10T10:48:54.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:54 vm04 bash[28289]: cluster 2026-03-10T10:48:52.837433+0000 mgr.y (mgr.24422) 1300 : cluster [DBG] pgmap v1741: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-10T10:48:54.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:54 vm04 bash[20742]: cluster 2026-03-10T10:48:52.837433+0000 mgr.y (mgr.24422) 1300 : cluster [DBG] pgmap v1741: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-10T10:48:54.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:54 vm04 bash[20742]: cluster 2026-03-10T10:48:52.837433+0000 mgr.y (mgr.24422) 1300 : cluster [DBG] pgmap v1741: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1018 B/s rd, 0 op/s 2026-03-10T10:48:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:56 vm07 bash[23367]: cluster 2026-03-10T10:48:54.837954+0000 mgr.y (mgr.24422) 
1301 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:56.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:56 vm07 bash[23367]: cluster 2026-03-10T10:48:54.837954+0000 mgr.y (mgr.24422) 1301 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:56.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:56 vm04 bash[28289]: cluster 2026-03-10T10:48:54.837954+0000 mgr.y (mgr.24422) 1301 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:56.952 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:56 vm04 bash[28289]: cluster 2026-03-10T10:48:54.837954+0000 mgr.y (mgr.24422) 1301 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:56.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:56 vm04 bash[20742]: cluster 2026-03-10T10:48:54.837954+0000 mgr.y (mgr.24422) 1301 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:56.952 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:56 vm04 bash[20742]: cluster 2026-03-10T10:48:54.837954+0000 mgr.y (mgr.24422) 1301 : cluster [DBG] pgmap v1742: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:48:58.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:57 vm07 bash[23367]: cluster 2026-03-10T10:48:56.838210+0000 mgr.y (mgr.24422) 1302 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:58.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:57 vm07 bash[23367]: cluster 2026-03-10T10:48:56.838210+0000 mgr.y (mgr.24422) 1302 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:58.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:57 vm04 bash[28289]: cluster 2026-03-10T10:48:56.838210+0000 mgr.y (mgr.24422) 1302 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:58.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:57 vm04 bash[28289]: cluster 2026-03-10T10:48:56.838210+0000 mgr.y (mgr.24422) 1302 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:58.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:57 vm04 bash[20742]: cluster 2026-03-10T10:48:56.838210+0000 mgr.y (mgr.24422) 1302 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:58.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:57 vm04 bash[20742]: cluster 2026-03-10T10:48:56.838210+0000 mgr.y (mgr.24422) 1302 : cluster [DBG] pgmap v1743: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:48:59.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:58 vm07 bash[23367]: audit 2026-03-10T10:48:58.736263+0000 mon.a (mon.0) 3906 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:59.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:58 vm07 bash[23367]: audit 2026-03-10T10:48:58.736263+0000 mon.a (mon.0) 3906 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:59.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:58 vm04 bash[28289]: audit 2026-03-10T10:48:58.736263+0000 mon.a (mon.0) 3906 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:59.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:58 vm04 bash[28289]: audit 2026-03-10T10:48:58.736263+0000 mon.a (mon.0) 3906 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:59.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:58 vm04 bash[20742]: audit 2026-03-10T10:48:58.736263+0000 mon.a (mon.0) 3906 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:48:59.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:58 vm04 bash[20742]: audit 2026-03-10T10:48:58.736263+0000 mon.a (mon.0) 3906 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:49:00.017 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:48:59 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:49:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:59 vm07 bash[23367]: cluster 2026-03-10T10:48:58.838694+0000 mgr.y (mgr.24422) 1303 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:00.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:48:59 vm07 bash[23367]: cluster 2026-03-10T10:48:58.838694+0000 mgr.y (mgr.24422) 1303 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:00.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:59 vm04 bash[28289]: cluster 2026-03-10T10:48:58.838694+0000 mgr.y (mgr.24422) 1303 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:00.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:48:59 vm04 bash[28289]: cluster 2026-03-10T10:48:58.838694+0000 mgr.y (mgr.24422) 1303 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:00.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:59 vm04 bash[20742]: cluster 2026-03-10T10:48:58.838694+0000 mgr.y (mgr.24422) 1303 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:00.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:48:59 vm04 bash[20742]: cluster 2026-03-10T10:48:58.838694+0000 mgr.y (mgr.24422) 1303 : cluster [DBG] pgmap v1744: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:01.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:00 vm07 bash[23367]: audit 2026-03-10T10:48:59.835373+0000 
mgr.y (mgr.24422) 1304 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:01.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:00 vm07 bash[23367]: audit 2026-03-10T10:48:59.835373+0000 mgr.y (mgr.24422) 1304 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:01.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:00 vm04 bash[28289]: audit 2026-03-10T10:48:59.835373+0000 mgr.y (mgr.24422) 1304 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:01.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:00 vm04 bash[28289]: audit 2026-03-10T10:48:59.835373+0000 mgr.y (mgr.24422) 1304 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:01.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:00 vm04 bash[20742]: audit 2026-03-10T10:48:59.835373+0000 mgr.y (mgr.24422) 1304 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:01.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:00 vm04 bash[20742]: audit 2026-03-10T10:48:59.835373+0000 mgr.y (mgr.24422) 1304 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:02.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:01 vm04 bash[28289]: cluster 2026-03-10T10:49:00.839114+0000 mgr.y (mgr.24422) 1305 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:02.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:01 vm04 bash[28289]: cluster 2026-03-10T10:49:00.839114+0000 mgr.y (mgr.24422) 1305 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:01 vm04 bash[20742]: cluster 2026-03-10T10:49:00.839114+0000 mgr.y (mgr.24422) 1305 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:02.203 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:01 vm04 bash[20742]: cluster 2026-03-10T10:49:00.839114+0000 mgr.y (mgr.24422) 1305 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:01 vm07 bash[23367]: cluster 2026-03-10T10:49:00.839114+0000 mgr.y (mgr.24422) 1305 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:02.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:01 vm07 bash[23367]: cluster 2026-03-10T10:49:00.839114+0000 mgr.y (mgr.24422) 1305 : cluster [DBG] pgmap v1745: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:03.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:03 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:49:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:49:04.155 
INFO:tasks.workunit.client.0.vm04.stderr:+ rados -p 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f put threemore /etc/passwd 2026-03-10T10:49:04.180 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool set-quota 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f max_bytes 0 2026-03-10T10:49:04.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:03 vm04 bash[20742]: cluster 2026-03-10T10:49:02.839440+0000 mgr.y (mgr.24422) 1306 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:04.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:03 vm04 bash[20742]: cluster 2026-03-10T10:49:02.839440+0000 mgr.y (mgr.24422) 1306 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:04.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:03 vm04 bash[20742]: audit 2026-03-10T10:49:03.665137+0000 mon.a (mon.0) 3907 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:49:04.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:03 vm04 bash[20742]: audit 2026-03-10T10:49:03.665137+0000 mon.a (mon.0) 3907 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:49:04.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:03 vm04 bash[28289]: cluster 2026-03-10T10:49:02.839440+0000 mgr.y (mgr.24422) 1306 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:04.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:03 vm04 bash[28289]: cluster 2026-03-10T10:49:02.839440+0000 mgr.y (mgr.24422) 1306 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:04.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:03 vm04 bash[28289]: audit 2026-03-10T10:49:03.665137+0000 mon.a (mon.0) 3907 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:49:04.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:03 vm04 bash[28289]: audit 2026-03-10T10:49:03.665137+0000 mon.a (mon.0) 3907 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 -- 192.168.123.104:0/1301691139 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7faaac101ed0 msgr2=0x7faaac105bb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 --2- 192.168.123.104:0/1301691139 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7faaac101ed0 0x7faaac105bb0 secure :-1 s=READY pgs=3418 cs=0 l=1 rev1=1 crypto rx=0x7faaa000ab80 tx=0x7faaa001dd30 comp rx=0 tx=0).stop 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 -- 192.168.123.104:0/1301691139 shutdown_connections 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 --2- 192.168.123.104:0/1301691139 >> 
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faaac1062e0 0x7faaac10de60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 --2- 192.168.123.104:0/1301691139 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7faaac101ed0 0x7faaac105bb0 unknown :-1 s=CLOSED pgs=3418 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 --2- 192.168.123.104:0/1301691139 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faaac101520 0x7faaac101900 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 -- 192.168.123.104:0/1301691139 >> 192.168.123.104:0/1301691139 conn(0x7faaac0fc820 msgr2=0x7faaac0fec40 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 -- 192.168.123.104:0/1301691139 shutdown_connections 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 -- 192.168.123.104:0/1301691139 wait complete. 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 Processor -- start 2026-03-10T10:49:04.241 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 -- start start 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7faaac101520 0x7faaac1a56f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faaac101ed0 0x7faaac1a5c30 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faaac1062e0 0x7faaac19f8b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7faaac113040 con 0x7faaac101ed0 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7faaac112ec0 con 0x7faaac1062e0 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.236+0000 7faab14fe640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7faaac1131c0 con 0x7faaac101520 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaaffd640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7faaac101520 0x7faaac1a56f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:49:04.242 
INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaaffd640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7faaac101520 0x7faaac1a56f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:41952/0 (socket says 192.168.123.104:41952) 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaaffd640 1 -- 192.168.123.104:0/4088895356 learned_addr learned my addr 192.168.123.104:0/4088895356 (peer_addr_for_me v2:192.168.123.104:0/0) 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaa7fc640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faaac101ed0 0x7faaac1a5c30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaa7fc640 1 -- 192.168.123.104:0/4088895356 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7faaac101520 msgr2=0x7faaac1a56f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaa7fc640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7faaac101520 0x7faaac1a56f0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaa7fc640 1 -- 192.168.123.104:0/4088895356 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faaac1062e0 msgr2=0x7faaac19f8b0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaab7fe640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faaac1062e0 0x7faaac19f8b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaa7fc640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faaac1062e0 0x7faaac19f8b0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaa7fc640 1 -- 192.168.123.104:0/4088895356 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7faaac19ffe0 con 0x7faaac101ed0 2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaaffd640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7faaac101520 0x7faaac1a56f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed! 
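
The client stderr above walks through a complete msgr2 session rebuild: mark_down of the old mon connections, a fresh banner, hello, auth, ready handshake against the mons' v2 endpoints (3300 and 3301 on vm04), and mon_subscribe calls for the monmap, config, and osdmap. To see the same v2/v1 address pairs from the cluster side, a hedged sketch (any node with a client.admin keyring will do):

  # each mon is listed with its msgr2 (v2:...:3300) and legacy (v1:...:6789) address
  ceph mon dump
  # confirm msgr2 binding is enabled for the mons
  ceph config get mon ms_bind_msgr2
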
2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaab7fe640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faaac1062e0 0x7faaac19f8b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:49:04.242 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faaaa7fc640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faaac101ed0 0x7faaac1a5c30 secure :-1 s=READY pgs=3101 cs=0 l=1 rev1=1 crypto rx=0x7faaa000aaa0 tx=0x7faaa00a8ec0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:49:04.244 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7faaa00bd070 con 0x7faaac101ed0
2026-03-10T10:49:04.244 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7faaa00a6d40 con 0x7faaac101ed0
2026-03-10T10:49:04.244 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7faaac1a0270 con 0x7faaac101ed0
2026-03-10T10:49:04.244 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7faaa0003f70 con 0x7faaac101ed0
2026-03-10T10:49:04.244 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7faaac1ac5c0 con 0x7faaac101ed0
2026-03-10T10:49:04.244 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7faaa0004110 con 0x7faaac101ed0
2026-03-10T10:49:04.244 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7faa70005190 con 0x7faaac101ed0
2026-03-10T10:49:04.245 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faa8bfff640 1 --2- 192.168.123.104:0/4088895356 >> v2:192.168.123.104:6800/3326026257 conn(0x7faa800776d0 0x7faa80079b90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:04.245 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(777..777 src has 251..777) ==== 8124+0+0 (secure 0 0 0) 0x7faaa0134760 con 0x7faaac101ed0
2026-03-10T10:49:04.245 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.240+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=778}) -- 0x7faa80083340 con 0x7faaac101ed0
2026-03-10T10:49:04.246 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.244+0000 7faaaaffd640 1 --2- 192.168.123.104:0/4088895356 >> v2:192.168.123.104:6800/3326026257 conn(0x7faa800776d0 0x7faa80079b90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:04.246 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.244+0000 7faaaaffd640 1 --2- 192.168.123.104:0/4088895356 >> v2:192.168.123.104:6800/3326026257 conn(0x7faa800776d0 0x7faa80079b90 secure :-1 s=READY pgs=4287 cs=0 l=1 rev1=1 crypto rx=0x7faaac1a09f0 tx=0x7faa9400a2d0 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:49:04.247 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.244+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7faaa0017a40 con 0x7faaac101ed0
2026-03-10T10:49:04.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:03 vm07 bash[23367]: cluster 2026-03-10T10:49:02.839440+0000 mgr.y (mgr.24422) 1306 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:03 vm07 bash[23367]: cluster 2026-03-10T10:49:02.839440+0000 mgr.y (mgr.24422) 1306 : cluster [DBG] pgmap v1746: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:03 vm07 bash[23367]: audit 2026-03-10T10:49:03.665137+0000 mon.a (mon.0) 3907 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:49:04.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:03 vm07 bash[23367]: audit 2026-03-10T10:49:03.665137+0000 mon.a (mon.0) 3907 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T10:49:04.338 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.336+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"} v 0) -- 0x7faa70005480 con 0x7faaac101ed0
2026-03-10T10:49:04.998 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:04.992+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v778) ==== 217+0+0 (secure 0 0 0) 0x7faaa0101b70 con 0x7faaac101ed0
2026-03-10T10:49:05.021 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:05.016+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 <== mon.0 v2:192.168.123.104:3300/0 8 ==== osd_map(778..778 src has 251..778) ==== 628+0+0 (secure 0 0 0) 0x7faaa016a9f0 con 0x7faaac101ed0
2026-03-10T10:49:05.021 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:05.016+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=779}) -- 0x7faa800842a0 con 0x7faaac101ed0
2026-03-10T10:49:05.058 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:05.056+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"} v 0) -- 0x7faa700047d0 con 0x7faaac101ed0
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:04 vm04 bash[28289]: audit 2026-03-10T10:49:03.982623+0000 mon.a (mon.0) 3908 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:04 vm04 bash[28289]: audit 2026-03-10T10:49:03.982623+0000 mon.a (mon.0) 3908 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:04 vm04 bash[28289]: audit 2026-03-10T10:49:03.983222+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:04 vm04 bash[28289]: audit 2026-03-10T10:49:03.983222+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:04 vm04 bash[28289]: audit 2026-03-10T10:49:03.992771+0000 mon.a (mon.0) 3910 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:04 vm04 bash[28289]: audit 2026-03-10T10:49:03.992771+0000 mon.a (mon.0) 3910 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:04 vm04 bash[28289]: audit 2026-03-10T10:49:04.340568+0000 mon.a (mon.0) 3911 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:04 vm04 bash[28289]: audit 2026-03-10T10:49:04.340568+0000 mon.a (mon.0) 3911 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:04 vm04 bash[20742]: audit 2026-03-10T10:49:03.982623+0000 mon.a (mon.0) 3908 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:04 vm04 bash[20742]: audit 2026-03-10T10:49:03.982623+0000 mon.a (mon.0) 3908 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:04 vm04 bash[20742]: audit 2026-03-10T10:49:03.983222+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:04 vm04 bash[20742]: audit 2026-03-10T10:49:03.983222+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:04 vm04 bash[20742]: audit 2026-03-10T10:49:03.992771+0000 mon.a (mon.0) 3910 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:04 vm04 bash[20742]: audit 2026-03-10T10:49:03.992771+0000 mon.a (mon.0) 3910 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:04 vm04 bash[20742]: audit 2026-03-10T10:49:04.340568+0000 mon.a (mon.0) 3911 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:05.202 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:04 vm04 bash[20742]: audit 2026-03-10T10:49:04.340568+0000 mon.a (mon.0) 3911 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:05.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:04 vm07 bash[23367]: audit 2026-03-10T10:49:03.982623+0000 mon.a (mon.0) 3908 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:49:05.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:04 vm07 bash[23367]: audit 2026-03-10T10:49:03.982623+0000 mon.a (mon.0) 3908 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T10:49:05.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:04 vm07 bash[23367]: audit 2026-03-10T10:49:03.983222+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:49:05.266 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:04 vm07 bash[23367]: audit 2026-03-10T10:49:03.983222+0000 mon.a (mon.0) 3909 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T10:49:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:04 vm07 bash[23367]: audit 2026-03-10T10:49:03.992771+0000 mon.a (mon.0) 3910 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:49:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:04 vm07 bash[23367]: audit 2026-03-10T10:49:03.992771+0000 mon.a (mon.0) 3910 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:49:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:04 vm07 bash[23367]: audit 2026-03-10T10:49:04.340568+0000 mon.a (mon.0) 3911 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:05.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:04 vm07 bash[23367]: audit 2026-03-10T10:49:04.340568+0000 mon.a (mon.0) 3911 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:06.003 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.000+0000 7faa8bfff640 1 -- 192.168.123.104:0/4088895356 <== mon.0 v2:192.168.123.104:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]=0 set-quota max_bytes = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v779) ==== 217+0+0 (secure 0 0 0) 0x7faaa0106a20 con 0x7faaac101ed0
2026-03-10T10:49:06.003 INFO:tasks.workunit.client.0.vm04.stderr:set-quota max_bytes = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f
2026-03-10T10:49:06.009 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.004+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 >> v2:192.168.123.104:6800/3326026257 conn(0x7faa800776d0 msgr2=0x7faa80079b90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:06.009 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.004+0000 7faab14fe640 1 --2- 192.168.123.104:0/4088895356 >> v2:192.168.123.104:6800/3326026257 conn(0x7faa800776d0 0x7faa80079b90 secure :-1 s=READY pgs=4287 cs=0 l=1 rev1=1 crypto rx=0x7faaac1a09f0 tx=0x7faa9400a2d0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.009 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.004+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faaac101ed0 msgr2=0x7faaac1a5c30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:06.009 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.004+0000 7faab14fe640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faaac101ed0 0x7faaac1a5c30 secure :-1 s=READY pgs=3101 cs=0 l=1 rev1=1 crypto rx=0x7faaa000aaa0 tx=0x7faaa00a8ec0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.010 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.008+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 shutdown_connections
2026-03-10T10:49:06.010 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.008+0000 7faab14fe640 1 --2- 192.168.123.104:0/4088895356 >> v2:192.168.123.104:6800/3326026257 conn(0x7faa800776d0 0x7faa80079b90 unknown :-1 s=CLOSED pgs=4287 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.010 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.008+0000 7faab14fe640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7faaac1062e0 0x7faaac19f8b0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.010 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.008+0000 7faab14fe640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7faaac101ed0 0x7faaac1a5c30 unknown :-1 s=CLOSED pgs=3101 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.010 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.008+0000 7faab14fe640 1 --2- 192.168.123.104:0/4088895356 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7faaac101520 0x7faaac1a56f0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.010 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.008+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 >> 192.168.123.104:0/4088895356 conn(0x7faaac0fc820 msgr2=0x7faaac0fdb40 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:49:06.010 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.008+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 shutdown_connections
2026-03-10T10:49:06.010 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.008+0000 7faab14fe640 1 -- 192.168.123.104:0/4088895356 wait complete.
2026-03-10T10:49:06.022 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool set-quota 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f max_objects 0
2026-03-10T10:49:06.083 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1764879094 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb0f41057d0 msgr2=0x7fb0f4109820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:06.083 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 --2- 192.168.123.104:0/1764879094 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb0f41057d0 0x7fb0f4109820 secure :-1 s=READY pgs=3419 cs=0 l=1 rev1=1 crypto rx=0x7fb0e8009a30 tx=0x7fb0e801c940 comp rx=0 tx=0).stop
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1764879094 shutdown_connections
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 --2- 192.168.123.104:0/1764879094 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb0f4109f50 0x7fb0f4111ad0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 --2- 192.168.123.104:0/1764879094 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb0f41057d0 0x7fb0f4109820 unknown :-1 s=CLOSED pgs=3419 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 --2- 192.168.123.104:0/1764879094 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb0f4104e20 0x7fb0f4105200 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1764879094 >> 192.168.123.104:0/1764879094 conn(0x7fb0f4100880 msgr2=0x7fb0f4102ca0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1764879094 shutdown_connections
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1764879094 wait complete.
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 Processor -- start
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- start start
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb0f4104e20 0x7fb0f419f2d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb0f41057d0 0x7fb0f419f810 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb0f4109f50 0x7fb0f41a3ba0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fb0f4116cc0 con 0x7fb0f4109f50
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fb0f4116b40 con 0x7fb0f41057d0
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fb0f4116e40 con 0x7fb0f4104e20
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f3577640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb0f4109f50 0x7fb0f41a3ba0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f3577640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb0f4109f50 0x7fb0f41a3ba0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:37900/0 (socket says 192.168.123.104:37900)
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f3577640 1 -- 192.168.123.104:0/1886937679 learned_addr learned my addr 192.168.123.104:0/1886937679 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:49:06.084 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f2575640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb0f41057d0 0x7fb0f419f810 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f2d76640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb0f4104e20 0x7fb0f419f2d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f3577640 1 -- 192.168.123.104:0/1886937679 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb0f4104e20 msgr2=0x7fb0f419f2d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f3577640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb0f4104e20 0x7fb0f419f2d0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f3577640 1 -- 192.168.123.104:0/1886937679 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb0f41057d0 msgr2=0x7fb0f419f810 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f3577640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb0f41057d0 0x7fb0f419f810 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f3577640 1 -- 192.168.123.104:0/1886937679 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb0f41a4320 con 0x7fb0f4109f50
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f2575640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb0f41057d0 0x7fb0f419f810 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f2d76640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb0f4104e20 0x7fb0f419f2d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_reply_more state changed!
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f3577640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb0f4109f50 0x7fb0f41a3ba0 secure :-1 s=READY pgs=3102 cs=0 l=1 rev1=1 crypto rx=0x7fb0e400b970 tx=0x7fb0e400be30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb0e400c730 con 0x7fb0f4109f50
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb0f41a4610 con 0x7fb0f4109f50
2026-03-10T10:49:06.085 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.080+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fb0f41abe50 con 0x7fb0f4109f50
2026-03-10T10:49:06.086 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.084+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb0f4106300 con 0x7fb0f4109f50
2026-03-10T10:49:06.089 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.084+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fb0e4010070 con 0x7fb0f4109f50
2026-03-10T10:49:06.089 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.084+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb0e4004830 con 0x7fb0f4109f50
2026-03-10T10:49:06.089 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.084+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fb0e401d070 con 0x7fb0f4109f50
2026-03-10T10:49:06.089 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.084+0000 7fb0d3fff640 1 --2- 192.168.123.104:0/1886937679 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb0c40777d0 0x7fb0c4079c90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:06.089 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.084+0000 7fb0f2d76640 1 --2- 192.168.123.104:0/1886937679 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb0c40777d0 0x7fb0c4079c90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:06.089 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.084+0000 7fb0f2d76640 1 --2- 192.168.123.104:0/1886937679 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb0c40777d0 0x7fb0c4079c90 secure :-1 s=READY pgs=4288 cs=0 l=1 rev1=1 crypto rx=0x7fb0dc009a80 tx=0x7fb0dc008040 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:49:06.089 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.084+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(779..779 src has 251..779) ==== 8124+0+0 (secure 0 0 0) 0x7fb0e409abe0 con 0x7fb0f4109f50
2026-03-10T10:49:06.089 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.084+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=780}) -- 0x7fb0c4083540 con 0x7fb0f4109f50
2026-03-10T10:49:06.089 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.084+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fb0e40caa90 con 0x7fb0f4109f50
2026-03-10T10:49:06.187 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:06.184+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"} v 0) -- 0x7fb0f41a0630 con 0x7fb0f4109f50
2026-03-10T10:49:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:06 vm04 bash[28289]: cluster 2026-03-10T10:49:04.840184+0000 mgr.y (mgr.24422) 1307 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:06 vm04 bash[28289]: cluster 2026-03-10T10:49:04.840184+0000 mgr.y (mgr.24422) 1307 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:06 vm04 bash[28289]: audit 2026-03-10T10:49:04.999731+0000 mon.a (mon.0) 3912 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:06 vm04 bash[28289]: audit 2026-03-10T10:49:04.999731+0000 mon.a (mon.0) 3912 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:06 vm04 bash[28289]: cluster 2026-03-10T10:49:05.010448+0000 mon.a (mon.0) 3913 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T10:49:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:06 vm04 bash[28289]: cluster 2026-03-10T10:49:05.010448+0000 mon.a (mon.0) 3913 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T10:49:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:06 vm04 bash[28289]: audit 2026-03-10T10:49:05.060387+0000 mon.a (mon.0) 3914 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:06.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:06 vm04 bash[28289]: audit 2026-03-10T10:49:05.060387+0000 mon.a (mon.0) 3914 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:06 vm04 bash[20742]: cluster 2026-03-10T10:49:04.840184+0000 mgr.y (mgr.24422) 1307 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:06 vm04 bash[20742]: cluster 2026-03-10T10:49:04.840184+0000 mgr.y (mgr.24422) 1307 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:06 vm04 bash[20742]: audit 2026-03-10T10:49:04.999731+0000 mon.a (mon.0) 3912 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:06 vm04 bash[20742]: audit 2026-03-10T10:49:04.999731+0000 mon.a (mon.0) 3912 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:06 vm04 bash[20742]: cluster 2026-03-10T10:49:05.010448+0000 mon.a (mon.0) 3913 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T10:49:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:06 vm04 bash[20742]: cluster 2026-03-10T10:49:05.010448+0000 mon.a (mon.0) 3913 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T10:49:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:06 vm04 bash[20742]: audit 2026-03-10T10:49:05.060387+0000 mon.a (mon.0) 3914 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:06.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:06 vm04 bash[20742]: audit 2026-03-10T10:49:05.060387+0000 mon.a (mon.0) 3914 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:06 vm07 bash[23367]: cluster 2026-03-10T10:49:04.840184+0000 mgr.y (mgr.24422) 1307 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:06 vm07 bash[23367]: cluster 2026-03-10T10:49:04.840184+0000 mgr.y (mgr.24422) 1307 : cluster [DBG] pgmap v1747: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:06 vm07 bash[23367]: audit 2026-03-10T10:49:04.999731+0000 mon.a (mon.0) 3912 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:06 vm07 bash[23367]: audit 2026-03-10T10:49:04.999731+0000 mon.a (mon.0) 3912 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:06 vm07 bash[23367]: cluster 2026-03-10T10:49:05.010448+0000 mon.a (mon.0) 3913 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T10:49:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:06 vm07 bash[23367]: cluster 2026-03-10T10:49:05.010448+0000 mon.a (mon.0) 3913 : cluster [DBG] osdmap e778: 8 total, 8 up, 8 in
2026-03-10T10:49:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:06 vm07 bash[23367]: audit 2026-03-10T10:49:05.060387+0000 mon.a (mon.0) 3914 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:06.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:06 vm07 bash[23367]: audit 2026-03-10T10:49:05.060387+0000 mon.a (mon.0) 3914 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]: dispatch
2026-03-10T10:49:07.035 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:07.032+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v780) ==== 221+0+0 (secure 0 0 0) 0x7fb0e40672b0 con 0x7fb0f4109f50
2026-03-10T10:49:07.048 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:07.044+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 <== mon.0 v2:192.168.123.104:3300/0 8 ==== osd_map(780..780 src has 251..780) ==== 628+0+0 (secure 0 0 0) 0x7fb0e405f2a0 con 0x7fb0f4109f50
2026-03-10T10:49:07.048 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:07.044+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=781}) -- 0x7fb0c4084590 con 0x7fb0f4109f50
2026-03-10T10:49:07.099 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:07.096+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"} v 0) -- 0x7fb0f41ac140 con 0x7fb0f4109f50
2026-03-10T10:49:07.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:07 vm04 bash[28289]: audit 2026-03-10T10:49:06.004966+0000 mon.a (mon.0) 3915 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:07 vm04 bash[28289]: audit 2026-03-10T10:49:06.004966+0000 mon.a (mon.0) 3915 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:07 vm04 bash[28289]: cluster 2026-03-10T10:49:06.024641+0000 mon.a (mon.0) 3916 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:07 vm04 bash[28289]: cluster 2026-03-10T10:49:06.024641+0000 mon.a (mon.0) 3916 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:07 vm04 bash[28289]: audit 2026-03-10T10:49:06.189800+0000 mon.a (mon.0) 3917 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:07 vm04 bash[28289]: audit 2026-03-10T10:49:06.189800+0000 mon.a (mon.0) 3917 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:07 vm04 bash[20742]: audit 2026-03-10T10:49:06.004966+0000 mon.a (mon.0) 3915 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:07 vm04 bash[20742]: audit 2026-03-10T10:49:06.004966+0000 mon.a (mon.0) 3915 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:07 vm04 bash[20742]: cluster 2026-03-10T10:49:06.024641+0000 mon.a (mon.0) 3916 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:07 vm04 bash[20742]: cluster 2026-03-10T10:49:06.024641+0000 mon.a (mon.0) 3916 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:07 vm04 bash[20742]: audit 2026-03-10T10:49:06.189800+0000 mon.a (mon.0) 3917 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:07.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:07 vm04 bash[20742]: audit 2026-03-10T10:49:06.189800+0000 mon.a (mon.0) 3917 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:07 vm07 bash[23367]: audit 2026-03-10T10:49:06.004966+0000 mon.a (mon.0) 3915 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:07 vm07 bash[23367]: audit 2026-03-10T10:49:06.004966+0000 mon.a (mon.0) 3915 : audit [INF] from='client.? 192.168.123.104:0/4088895356' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_bytes", "val": "0"}]': finished
2026-03-10T10:49:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:07 vm07 bash[23367]: cluster 2026-03-10T10:49:06.024641+0000 mon.a (mon.0) 3916 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T10:49:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:07 vm07 bash[23367]: cluster 2026-03-10T10:49:06.024641+0000 mon.a (mon.0) 3916 : cluster [DBG] osdmap e779: 8 total, 8 up, 8 in
2026-03-10T10:49:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:07 vm07 bash[23367]: audit 2026-03-10T10:49:06.189800+0000 mon.a (mon.0) 3917 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:07.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:07 vm07 bash[23367]: audit 2026-03-10T10:49:06.189800+0000 mon.a (mon.0) 3917 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:08.045 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.040+0000 7fb0d3fff640 1 -- 192.168.123.104:0/1886937679 <== mon.0 v2:192.168.123.104:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]=0 set-quota max_objects = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f v781) ==== 221+0+0 (secure 0 0 0) 0x7fb0e406c160 con 0x7fb0f4109f50
2026-03-10T10:49:08.045 INFO:tasks.workunit.client.0.vm04.stderr:set-quota max_objects = 0 for pool 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f
2026-03-10T10:49:08.048 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.044+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb0c40777d0 msgr2=0x7fb0c4079c90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:08.048 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.044+0000 7fb0f91c7640 1 --2- 192.168.123.104:0/1886937679 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb0c40777d0 0x7fb0c4079c90 secure :-1 s=READY pgs=4288 cs=0 l=1 rev1=1 crypto rx=0x7fb0dc009a80 tx=0x7fb0dc008040 comp rx=0 tx=0).stop
2026-03-10T10:49:08.048 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.044+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb0f4109f50 msgr2=0x7fb0f41a3ba0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:08.048 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.044+0000 7fb0f91c7640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb0f4109f50 0x7fb0f41a3ba0 secure :-1 s=READY pgs=3102 cs=0 l=1 rev1=1 crypto rx=0x7fb0e400b970 tx=0x7fb0e400be30 comp rx=0 tx=0).stop
2026-03-10T10:49:08.050 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.048+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 shutdown_connections
2026-03-10T10:49:08.050 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.048+0000 7fb0f91c7640 1 --2- 192.168.123.104:0/1886937679 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb0c40777d0 0x7fb0c4079c90 unknown :-1 s=CLOSED pgs=4288 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:08.050 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.048+0000 7fb0f91c7640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb0f4109f50 0x7fb0f41a3ba0 unknown :-1 s=CLOSED pgs=3102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:08.050 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.048+0000 7fb0f91c7640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb0f41057d0 0x7fb0f419f810 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:08.050 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.048+0000 7fb0f91c7640 1 --2- 192.168.123.104:0/1886937679 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb0f4104e20 0x7fb0f419f2d0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:08.050 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.048+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 >> 192.168.123.104:0/1886937679 conn(0x7fb0f4100880 msgr2=0x7fb0f4100f20 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:49:08.050 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.048+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 shutdown_connections
2026-03-10T10:49:08.050 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:08.048+0000 7fb0f91c7640 1 -- 192.168.123.104:0/1886937679 wait complete.
2026-03-10T10:49:08.069 INFO:tasks.workunit.client.0.vm04.stderr:+ sleep 30
2026-03-10T10:49:08.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:08 vm04 bash[28289]: cluster 2026-03-10T10:49:06.840476+0000 mgr.y (mgr.24422) 1308 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:49:08.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:08 vm04 bash[28289]: cluster 2026-03-10T10:49:06.840476+0000 mgr.y (mgr.24422) 1308 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:08 vm04 bash[28289]: audit 2026-03-10T10:49:07.036448+0000 mon.a (mon.0) 3918 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:08 vm04 bash[28289]: audit 2026-03-10T10:49:07.036448+0000 mon.a (mon.0) 3918 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:08 vm04 bash[28289]: cluster 2026-03-10T10:49:07.066197+0000 mon.a (mon.0) 3919 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:08 vm04 bash[28289]: cluster 2026-03-10T10:49:07.066197+0000 mon.a (mon.0) 3919 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:08 vm04 bash[28289]: audit 2026-03-10T10:49:07.101519+0000 mon.a (mon.0) 3920 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:08 vm04 bash[28289]: audit 2026-03-10T10:49:07.101519+0000 mon.a (mon.0) 3920 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:08 vm04 bash[20742]: cluster 2026-03-10T10:49:06.840476+0000 mgr.y (mgr.24422) 1308 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:08 vm04 bash[20742]: cluster 2026-03-10T10:49:06.840476+0000 mgr.y (mgr.24422) 1308 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:08 vm04 bash[20742]: audit 2026-03-10T10:49:07.036448+0000 mon.a (mon.0) 3918 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:08 vm04 bash[20742]: audit 2026-03-10T10:49:07.036448+0000 mon.a (mon.0) 3918 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:08 vm04 bash[20742]: cluster 2026-03-10T10:49:07.066197+0000 mon.a (mon.0) 3919 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:08 vm04 bash[20742]: cluster 2026-03-10T10:49:07.066197+0000 mon.a (mon.0) 3919 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:08 vm04 bash[20742]: audit 2026-03-10T10:49:07.101519+0000 mon.a (mon.0) 3920 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:08.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:08 vm04 bash[20742]: audit 2026-03-10T10:49:07.101519+0000 mon.a (mon.0) 3920 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:08 vm07 bash[23367]: cluster 2026-03-10T10:49:06.840476+0000 mgr.y (mgr.24422) 1308 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:49:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:08 vm07 bash[23367]: cluster 2026-03-10T10:49:06.840476+0000 mgr.y (mgr.24422) 1308 : cluster [DBG] pgmap v1750: 188 pgs: 188 active+clean; 505 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:49:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:08 vm07 bash[23367]: audit 2026-03-10T10:49:07.036448+0000 mon.a (mon.0) 3918 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:08 vm07 bash[23367]: audit 2026-03-10T10:49:07.036448+0000 mon.a (mon.0) 3918 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:08 vm07 bash[23367]: cluster 2026-03-10T10:49:07.066197+0000 mon.a (mon.0) 3919 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T10:49:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:08 vm07 bash[23367]: cluster 2026-03-10T10:49:07.066197+0000 mon.a (mon.0) 3919 : cluster [DBG] osdmap e780: 8 total, 8 up, 8 in
2026-03-10T10:49:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:08 vm07 bash[23367]: audit 2026-03-10T10:49:07.101519+0000 mon.a (mon.0) 3920 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:08.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:08 vm07 bash[23367]: audit 2026-03-10T10:49:07.101519+0000 mon.a (mon.0) 3920 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd=[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]: dispatch
2026-03-10T10:49:09.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:09 vm04 bash[28289]: audit 2026-03-10T10:49:08.047049+0000 mon.a (mon.0) 3921 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:09.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:09 vm04 bash[28289]: audit 2026-03-10T10:49:08.047049+0000 mon.a (mon.0) 3921 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:09.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:09 vm04 bash[28289]: cluster 2026-03-10T10:49:08.061433+0000 mon.a (mon.0) 3922 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T10:49:09.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:09 vm04 bash[28289]: cluster 2026-03-10T10:49:08.061433+0000 mon.a (mon.0) 3922 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T10:49:09.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:09 vm04 bash[20742]: audit 2026-03-10T10:49:08.047049+0000 mon.a (mon.0) 3921 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:09.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:09 vm04 bash[20742]: audit 2026-03-10T10:49:08.047049+0000 mon.a (mon.0) 3921 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:09.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:09 vm04 bash[20742]: cluster 2026-03-10T10:49:08.061433+0000 mon.a (mon.0) 3922 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T10:49:09.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:09 vm04 bash[20742]: cluster 2026-03-10T10:49:08.061433+0000 mon.a (mon.0) 3922 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T10:49:09.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:09 vm07 bash[23367]: audit 2026-03-10T10:49:08.047049+0000 mon.a (mon.0) 3921 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:09 vm07 bash[23367]: audit 2026-03-10T10:49:08.047049+0000 mon.a (mon.0) 3921 : audit [INF] from='client.? 192.168.123.104:0/1886937679' entity='client.admin' cmd='[{"prefix": "osd pool set-quota", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "field": "max_objects", "val": "0"}]': finished
2026-03-10T10:49:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:09 vm07 bash[23367]: cluster 2026-03-10T10:49:08.061433+0000 mon.a (mon.0) 3922 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T10:49:09.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:09 vm07 bash[23367]: cluster 2026-03-10T10:49:08.061433+0000 mon.a (mon.0) 3922 : cluster [DBG] osdmap e781: 8 total, 8 up, 8 in
2026-03-10T10:49:10.267 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:49:09 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:49:10.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:10 vm07 bash[23367]: cluster 2026-03-10T10:49:08.841034+0000 mgr.y (mgr.24422) 1309 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:49:10.267 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:10 vm07 bash[23367]: cluster 2026-03-10T10:49:08.841034+0000 mgr.y (mgr.24422) 1309 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:49:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:10 vm04 bash[28289]: cluster 2026-03-10T10:49:08.841034+0000 mgr.y (mgr.24422) 1309 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:49:10.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:10 vm04 bash[28289]: cluster 2026-03-10T10:49:08.841034+0000 mgr.y (mgr.24422) 1309 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:49:10.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:10 vm04 bash[20742]: cluster 2026-03-10T10:49:08.841034+0000 mgr.y (mgr.24422) 1309 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:49:10.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:10 vm04 bash[20742]: cluster 2026-03-10T10:49:08.841034+0000 mgr.y (mgr.24422) 1309 : cluster [DBG] pgmap v1753: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 767 B/s wr, 1 op/s
2026-03-10T10:49:11.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:11 vm04 bash[28289]: audit 2026-03-10T10:49:09.845411+0000 mgr.y (mgr.24422) 1310 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:11.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:11 vm04 bash[28289]: audit 2026-03-10T10:49:09.845411+0000 mgr.y (mgr.24422) 1310 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:11.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:11 vm04 bash[20742]: audit 2026-03-10T10:49:09.845411+0000 mgr.y (mgr.24422) 1310 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:11.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:11 vm04 bash[20742]: audit 2026-03-10T10:49:09.845411+0000 mgr.y (mgr.24422) 1310 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:11 vm07 bash[23367]: audit 2026-03-10T10:49:09.845411+0000 mgr.y (mgr.24422) 1310 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:11.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:11 vm07 bash[23367]: audit 2026-03-10T10:49:09.845411+0000 mgr.y (mgr.24422) 1310 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:12.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:12 vm04 bash[28289]: cluster 2026-03-10T10:49:10.841312+0000 mgr.y (mgr.24422) 1311 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 879 B/s rd, 527 B/s wr, 1 op/s
2026-03-10T10:49:12.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:12 vm04 bash[28289]: cluster 2026-03-10T10:49:10.841312+0000 mgr.y (mgr.24422) 1311 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 879 B/s rd, 527 B/s wr, 1 op/s
2026-03-10T10:49:12.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:12 vm04 bash[20742]: cluster 2026-03-10T10:49:10.841312+0000 mgr.y (mgr.24422) 1311 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 879 B/s rd, 527 B/s wr, 1 op/s
2026-03-10T10:49:12.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:12 vm04 bash[20742]: cluster 2026-03-10T10:49:10.841312+0000 mgr.y (mgr.24422) 1311 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 879 B/s rd, 527 B/s wr, 1 op/s
2026-03-10T10:49:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:12 vm07 bash[23367]: cluster 2026-03-10T10:49:10.841312+0000 mgr.y (mgr.24422) 1311 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 879 B/s rd, 527 B/s wr, 1 op/s
2026-03-10T10:49:12.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:12 vm07 bash[23367]: cluster 2026-03-10T10:49:10.841312+0000 mgr.y (mgr.24422) 1311 : cluster [DBG] pgmap v1754: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 879 B/s rd, 527 B/s wr, 1 op/s
2026-03-10T10:49:13.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:13 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:49:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:49:14.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:14 vm04 bash[28289]: cluster 2026-03-10T10:49:12.841549+0000 mgr.y (mgr.24422) 1312 : cluster [DBG] pgmap v1755: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 749 B/s rd, 449 B/s wr, 0 op/s
2026-03-10T10:49:14.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:14 vm04 bash[28289]: cluster 2026-03-10T10:49:12.841549+0000 mgr.y (mgr.24422) 1312 : cluster [DBG] pgmap v1755: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 749 B/s rd, 449 B/s wr, 0 op/s
2026-03-10T10:49:14.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:14 vm04 bash[28289]: audit 2026-03-10T10:49:13.747484+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:49:14.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:14 vm04 bash[28289]: audit 2026-03-10T10:49:13.747484+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:49:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:14 vm04 bash[20742]: cluster 2026-03-10T10:49:12.841549+0000 mgr.y (mgr.24422) 1312 : cluster [DBG] pgmap v1755: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 749 B/s rd, 449 B/s wr, 0 op/s
2026-03-10T10:49:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:14 vm04 bash[20742]: cluster 2026-03-10T10:49:12.841549+0000 mgr.y (mgr.24422) 1312 : cluster [DBG] pgmap v1755: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 749 B/s rd, 449 B/s wr, 0 op/s
2026-03-10T10:49:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:14 vm04 bash[20742]: audit 2026-03-10T10:49:13.747484+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:49:14.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:14 vm04 bash[20742]: audit 2026-03-10T10:49:13.747484+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:49:14.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:14 vm07 bash[23367]: cluster 2026-03-10T10:49:12.841549+0000 mgr.y (mgr.24422) 1312 : cluster [DBG] pgmap v1755: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 749 B/s rd, 449 B/s wr, 0 op/s
2026-03-10T10:49:14.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:14 vm07 bash[23367]: cluster 2026-03-10T10:49:12.841549+0000 mgr.y (mgr.24422) 1312 : cluster [DBG] pgmap v1755: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 749 B/s rd, 449 B/s wr, 0 op/s
2026-03-10T10:49:14.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:14 vm07 bash[23367]: audit 2026-03-10T10:49:13.747484+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:49:14.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:14 vm07 bash[23367]: audit 2026-03-10T10:49:13.747484+0000 mon.a (mon.0) 3923 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T10:49:16.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:16 vm04 bash[28289]: cluster 2026-03-10T10:49:14.842060+0000 mgr.y (mgr.24422) 1313 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-10T10:49:16.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:16 vm04 bash[28289]: cluster 2026-03-10T10:49:14.842060+0000 mgr.y (mgr.24422) 1313 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-10T10:49:16.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:16 vm04 bash[20742]: cluster 2026-03-10T10:49:14.842060+0000 mgr.y (mgr.24422) 1313 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-10T10:49:16.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:16 vm04 bash[20742]: cluster 2026-03-10T10:49:14.842060+0000 mgr.y (mgr.24422) 1313 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-10T10:49:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:16 vm07 bash[23367]: cluster 2026-03-10T10:49:14.842060+0000 mgr.y (mgr.24422) 1313 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-10T10:49:16.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:16 vm07 bash[23367]: cluster 2026-03-10T10:49:14.842060+0000 mgr.y (mgr.24422) 1313 : cluster [DBG] pgmap v1756: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 383 B/s wr, 1 op/s 2026-03-10T10:49:18.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:18 vm04 bash[28289]: cluster 2026-03-10T10:49:16.842298+0000 mgr.y (mgr.24422) 1314 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-10T10:49:18.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:18 vm04 bash[28289]: cluster 2026-03-10T10:49:16.842298+0000 mgr.y (mgr.24422) 1314 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-10T10:49:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:18 vm04 bash[20742]: cluster 2026-03-10T10:49:16.842298+0000 mgr.y (mgr.24422) 1314 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-10T10:49:18.453 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:18 vm04 bash[20742]: cluster 2026-03-10T10:49:16.842298+0000 mgr.y (mgr.24422) 1314 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-10T10:49:18.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:18 vm07 bash[23367]: cluster 2026-03-10T10:49:16.842298+0000 mgr.y (mgr.24422) 1314 : cluster [DBG] pgmap v1757: 188 pgs: 
188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-10T10:49:18.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:18 vm07 bash[23367]: cluster 2026-03-10T10:49:16.842298+0000 mgr.y (mgr.24422) 1314 : cluster [DBG] pgmap v1757: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.0 KiB/s rd, 313 B/s wr, 1 op/s 2026-03-10T10:49:20.119 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:49:19 vm07 bash[48477]: debug there is no tcmu-runner data available 2026-03-10T10:49:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:20 vm04 bash[28289]: cluster 2026-03-10T10:49:18.842943+0000 mgr.y (mgr.24422) 1315 : cluster [DBG] pgmap v1758: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-10T10:49:20.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:20 vm04 bash[28289]: cluster 2026-03-10T10:49:18.842943+0000 mgr.y (mgr.24422) 1315 : cluster [DBG] pgmap v1758: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-10T10:49:20.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:20 vm04 bash[20742]: cluster 2026-03-10T10:49:18.842943+0000 mgr.y (mgr.24422) 1315 : cluster [DBG] pgmap v1758: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-10T10:49:20.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:20 vm04 bash[20742]: cluster 2026-03-10T10:49:18.842943+0000 mgr.y (mgr.24422) 1315 : cluster [DBG] pgmap v1758: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-10T10:49:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:20 vm07 bash[23367]: cluster 2026-03-10T10:49:18.842943+0000 mgr.y (mgr.24422) 1315 : cluster [DBG] pgmap v1758: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-10T10:49:20.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:20 vm07 bash[23367]: cluster 2026-03-10T10:49:18.842943+0000 mgr.y (mgr.24422) 1315 : cluster [DBG] pgmap v1758: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.4 KiB/s rd, 284 B/s wr, 1 op/s 2026-03-10T10:49:21.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:21 vm04 bash[28289]: audit 2026-03-10T10:49:19.846696+0000 mgr.y (mgr.24422) 1316 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:21.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:21 vm04 bash[28289]: audit 2026-03-10T10:49:19.846696+0000 mgr.y (mgr.24422) 1316 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:21.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:21 vm04 bash[20742]: audit 2026-03-10T10:49:19.846696+0000 mgr.y (mgr.24422) 1316 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:21.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:21 vm04 bash[20742]: audit 2026-03-10T10:49:19.846696+0000 mgr.y (mgr.24422) 1316 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:21.517 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:21 vm07 bash[23367]: audit 2026-03-10T10:49:19.846696+0000 mgr.y (mgr.24422) 1316 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:21.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:21 vm07 bash[23367]: audit 2026-03-10T10:49:19.846696+0000 mgr.y (mgr.24422) 1316 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T10:49:22.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:22 vm04 bash[28289]: cluster 2026-03-10T10:49:20.843229+0000 mgr.y (mgr.24422) 1317 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:22.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:22 vm04 bash[28289]: cluster 2026-03-10T10:49:20.843229+0000 mgr.y (mgr.24422) 1317 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:22.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:22 vm04 bash[20742]: cluster 2026-03-10T10:49:20.843229+0000 mgr.y (mgr.24422) 1317 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:22.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:22 vm04 bash[20742]: cluster 2026-03-10T10:49:20.843229+0000 mgr.y (mgr.24422) 1317 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:22 vm07 bash[23367]: cluster 2026-03-10T10:49:20.843229+0000 mgr.y (mgr.24422) 1317 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:22.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:22 vm07 bash[23367]: cluster 2026-03-10T10:49:20.843229+0000 mgr.y (mgr.24422) 1317 : cluster [DBG] pgmap v1759: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:23.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:23 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:49:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-10T10:49:24.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:24 vm04 bash[28289]: cluster 2026-03-10T10:49:22.843483+0000 mgr.y (mgr.24422) 1318 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:24.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:24 vm04 bash[28289]: cluster 2026-03-10T10:49:22.843483+0000 mgr.y (mgr.24422) 1318 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:24.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:24 vm04 bash[20742]: cluster 2026-03-10T10:49:22.843483+0000 mgr.y (mgr.24422) 1318 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:24.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:24 vm04 bash[20742]: cluster 2026-03-10T10:49:22.843483+0000 mgr.y (mgr.24422) 1318 : cluster [DBG] pgmap v1760: 188 pgs: 188 
active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:24 vm07 bash[23367]: cluster 2026-03-10T10:49:22.843483+0000 mgr.y (mgr.24422) 1318 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:24.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:24 vm07 bash[23367]: cluster 2026-03-10T10:49:22.843483+0000 mgr.y (mgr.24422) 1318 : cluster [DBG] pgmap v1760: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:26.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:26 vm04 bash[28289]: cluster 2026-03-10T10:49:24.844045+0000 mgr.y (mgr.24422) 1319 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:26.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:26 vm04 bash[28289]: cluster 2026-03-10T10:49:24.844045+0000 mgr.y (mgr.24422) 1319 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:26.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:26 vm04 bash[20742]: cluster 2026-03-10T10:49:24.844045+0000 mgr.y (mgr.24422) 1319 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:26.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:26 vm04 bash[20742]: cluster 2026-03-10T10:49:24.844045+0000 mgr.y (mgr.24422) 1319 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:26 vm07 bash[23367]: cluster 2026-03-10T10:49:24.844045+0000 mgr.y (mgr.24422) 1319 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:26.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:26 vm07 bash[23367]: cluster 2026-03-10T10:49:24.844045+0000 mgr.y (mgr.24422) 1319 : cluster [DBG] pgmap v1761: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T10:49:28.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:28 vm04 bash[28289]: cluster 2026-03-10T10:49:26.844424+0000 mgr.y (mgr.24422) 1320 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:28.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:28 vm04 bash[28289]: cluster 2026-03-10T10:49:26.844424+0000 mgr.y (mgr.24422) 1320 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:28.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:28 vm04 bash[20742]: cluster 2026-03-10T10:49:26.844424+0000 mgr.y (mgr.24422) 1320 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T10:49:28.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:28 vm04 bash[20742]: cluster 2026-03-10T10:49:26.844424+0000 mgr.y (mgr.24422) 1320 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T10:49:28.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:28 vm07 bash[23367]: cluster 2026-03-10T10:49:26.844424+0000 mgr.y (mgr.24422) 1320 : cluster [DBG] pgmap v1762: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:29.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:29 vm04 bash[28289]: audit 2026-03-10T10:49:28.753607+0000 mon.a (mon.0) 3924 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:49:29.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:29 vm04 bash[20742]: audit 2026-03-10T10:49:28.753607+0000 mon.a (mon.0) 3924 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:49:29.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:29 vm07 bash[23367]: audit 2026-03-10T10:49:28.753607+0000 mon.a (mon.0) 3924 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:49:30.178 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:49:29 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:49:30.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:30 vm04 bash[28289]: cluster 2026-03-10T10:49:28.845081+0000 mgr.y (mgr.24422) 1321 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:30.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:30 vm04 bash[20742]: cluster 2026-03-10T10:49:28.845081+0000 mgr.y (mgr.24422) 1321 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:30.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:30 vm07 bash[23367]: cluster 2026-03-10T10:49:28.845081+0000 mgr.y (mgr.24422) 1321 : cluster [DBG] pgmap v1763: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:31.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:31 vm04 bash[28289]: audit 2026-03-10T10:49:29.857299+0000 mgr.y (mgr.24422) 1322 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:31.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:31 vm04 bash[20742]: audit 2026-03-10T10:49:29.857299+0000 mgr.y (mgr.24422) 1322 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:31.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:31 vm07 bash[23367]: audit 2026-03-10T10:49:29.857299+0000 mgr.y (mgr.24422) 1322 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:32.452 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:32 vm04 bash[28289]: cluster 2026-03-10T10:49:30.845402+0000 mgr.y (mgr.24422) 1323 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:32.452 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:32 vm04 bash[20742]: cluster 2026-03-10T10:49:30.845402+0000 mgr.y (mgr.24422) 1323 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:32.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:32 vm07 bash[23367]: cluster 2026-03-10T10:49:30.845402+0000 mgr.y (mgr.24422) 1323 : cluster [DBG] pgmap v1764: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:33.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:33 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:49:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:49:34.516 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:34 vm07 bash[23367]: cluster 2026-03-10T10:49:32.845759+0000 mgr.y (mgr.24422) 1324 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:34.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:34 vm04 bash[28289]: cluster 2026-03-10T10:49:32.845759+0000 mgr.y (mgr.24422) 1324 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:34.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:34 vm04 bash[20742]: cluster 2026-03-10T10:49:32.845759+0000 mgr.y (mgr.24422) 1324 : cluster [DBG] pgmap v1765: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:36.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:36 vm07 bash[23367]: cluster 2026-03-10T10:49:34.846468+0000 mgr.y (mgr.24422) 1325 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:36.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:36 vm04 bash[28289]: cluster 2026-03-10T10:49:34.846468+0000 mgr.y (mgr.24422) 1325 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:36.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:36 vm04 bash[20742]: cluster 2026-03-10T10:49:34.846468+0000 mgr.y (mgr.24422) 1325 : cluster [DBG] pgmap v1766: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:38.070 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool delete 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f 3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f --yes-i-really-really-mean-it
2026-03-10T10:49:38.133 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.128+0000 7f411e298640 1 -- 192.168.123.104:0/3247326547 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4118074270 msgr2=0x7f41180746b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.128+0000 7f411e298640 1 --2- 192.168.123.104:0/3247326547 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4118074270 0x7f41180746b0 secure :-1 s=READY pgs=3420 cs=0 l=1 rev1=1 crypto rx=0x7f4110009f90 tx=0x7f411001ca20 comp rx=0 tx=0).stop
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.128+0000 7f411e298640 1 -- 192.168.123.104:0/3247326547 shutdown_connections
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.128+0000 7f411e298640 1 --2- 192.168.123.104:0/3247326547 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f4118074bf0 0x7f411811e7a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.128+0000 7f411e298640 1 --2- 192.168.123.104:0/3247326547 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4118074270 0x7f41180746b0 unknown :-1 s=CLOSED pgs=3420 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.128+0000 7f411e298640 1 --2- 192.168.123.104:0/3247326547 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f41181112c0 0x7f41181116a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.128+0000 7f411e298640 1 -- 192.168.123.104:0/3247326547 >> 192.168.123.104:0/3247326547 conn(0x7f411806eb60 msgr2=0x7f411806ef70 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.128+0000 7f411e298640 1 -- 192.168.123.104:0/3247326547 shutdown_connections
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.128+0000 7f411e298640 1 -- 192.168.123.104:0/3247326547 wait complete.
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411e298640 1 Processor -- start
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411e298640 1 -- start start
2026-03-10T10:49:38.134 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411e298640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4118074270 0x7f41181ac110 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411e298640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4118074bf0 0x7f41181ac650 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411e298640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f41181112c0 0x7f41181b09e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411e298640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7f4118123b60 con 0x7f4118074270
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411e298640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7f41181239e0 con 0x7f41181112c0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411e298640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7f4118123ce0 con 0x7f4118074bf0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41177fe640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4118074bf0 0x7f41181ac650 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41177fe640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4118074bf0 0x7f41181ac650 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3301/0 says I am v2:192.168.123.104:54268/0 (socket says 192.168.123.104:54268)
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41177fe640 1 -- 192.168.123.104:0/1184560020 learned_addr learned my addr 192.168.123.104:0/1184560020 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41177fe640 1 -- 192.168.123.104:0/1184560020 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f41181112c0 msgr2=0x7f41181b09e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f4117fff640 1 --2- 192.168.123.104:0/1184560020 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4118074270 0x7f41181ac110 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411c80e640 1 --2- 192.168.123.104:0/1184560020 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f41181112c0 0x7f41181b09e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41177fe640 1 --2- 192.168.123.104:0/1184560020 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f41181112c0 0x7f41181b09e0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41177fe640 1 -- 192.168.123.104:0/1184560020 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4118074270 msgr2=0x7f41181ac110 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41177fe640 1 --2- 192.168.123.104:0/1184560020 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4118074270 0x7f41181ac110 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41177fe640 1 -- 192.168.123.104:0/1184560020 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f41181b1160 con 0x7f4118074bf0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411c80e640 1 --2- 192.168.123.104:0/1184560020 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f41181112c0 0x7f41181b09e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41177fe640 1 --2- 192.168.123.104:0/1184560020 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4118074bf0 0x7f41181ac650 secure :-1 s=READY pgs=3421 cs=0 l=1 rev1=1 crypto rx=0x7f4110098520 tx=0x7f41100077b0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 <== mon.2 v2:192.168.123.104:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f41100079a0 con 0x7f4118074bf0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411e298640 1 -- 192.168.123.104:0/1184560020 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f41181b13f0 con 0x7f4118074bf0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 <== mon.2 v2:192.168.123.104:3301/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7f4110004070 con 0x7f4118074bf0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f411e298640 1 -- 192.168.123.104:0/1184560020 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f41181b8c90 con 0x7f4118074bf0
2026-03-10T10:49:38.135 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 <== mon.2 v2:192.168.123.104:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f41100b6690 con 0x7f4118074bf0
2026-03-10T10:49:38.136 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f410effd640 1 -- 192.168.123.104:0/1184560020 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f40d8005190 con 0x7f4118074bf0
2026-03-10T10:49:38.139 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 <== mon.2 v2:192.168.123.104:3301/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f4110005ce0 con 0x7f4118074bf0
2026-03-10T10:49:38.140 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41157fa640 1 --2- 192.168.123.104:0/1184560020 >> v2:192.168.123.104:6800/3326026257 conn(0x7f40e00777a0 0x7f40e0079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:38.140 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 <== mon.2 v2:192.168.123.104:3301/0 5 ==== osd_map(781..781 src has 251..781) ==== 8124+0+0 (secure 0 0 0) 0x7f4110133dd0 con 0x7f4118074bf0
2026-03-10T10:49:38.140 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.132+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=782}) -- 0x7f40e0083510 con 0x7f4118074bf0
2026-03-10T10:49:38.140 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.136+0000 7f4117fff640 1 --2- 192.168.123.104:0/1184560020 >> v2:192.168.123.104:6800/3326026257 conn(0x7f40e00777a0 0x7f40e0079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:38.140 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.136+0000 7f4117fff640 1 --2- 192.168.123.104:0/1184560020 >> v2:192.168.123.104:6800/3326026257 conn(0x7f40e00777a0 0x7f40e0079c60 secure :-1 s=READY pgs=4289 cs=0 l=1 rev1=1 crypto rx=0x7f4104004350 tx=0x7f4104009340 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:49:38.140 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.136+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 <== mon.2 v2:192.168.123.104:3301/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7f4110137030 con 0x7f4118074bf0
2026-03-10T10:49:38.239 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:38.236+0000 7f410effd640 1 -- 192.168.123.104:0/1184560020 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true} v 0) -- 0x7f40d8005480 con 0x7f4118074bf0
2026-03-10T10:49:38.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:38 vm07 bash[23367]: cluster 2026-03-10T10:49:36.846821+0000 mgr.y (mgr.24422) 1326 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:38.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:38 vm04 bash[28289]: cluster 2026-03-10T10:49:36.846821+0000 mgr.y (mgr.24422) 1326 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:38.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:38 vm04 bash[20742]: cluster 2026-03-10T10:49:36.846821+0000 mgr.y (mgr.24422) 1326 : cluster [DBG] pgmap v1767: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T10:49:39.245 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.240+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 <== mon.2 v2:192.168.123.104:3301/0 7 ==== osd_map(782..782 src has 251..782) ==== 296+0+0 (secure 0 0 0) 0x7f41100f8490 con 0x7f4118074bf0
2026-03-10T10:49:39.245 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.240+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_subscribe({osdmap=783}) -- 0x7f40e0083d80 con 0x7f4118074bf0
2026-03-10T10:49:39.257 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.252+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 <== mon.2 v2:192.168.123.104:3301/0 8 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]=0 pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' removed v782) ==== 248+0+0 (secure 0 0 0) 0x7f41100166a0 con 0x7f4118074bf0
2026-03-10T10:49:39.305 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.300+0000 7f410effd640 1 -- 192.168.123.104:0/1184560020 --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_command({"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true} v 0) -- 0x7f40d80034e0 con 0x7f4118074bf0
2026-03-10T10:49:39.306 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f41157fa640 1 -- 192.168.123.104:0/1184560020 <== mon.2 v2:192.168.123.104:3301/0 9 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]=0 pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' does not exist v782) ==== 255+0+0 (secure 0 0 0) 0x7f41100ab2d0 con 0x7f4118074bf0
2026-03-10T10:49:39.306 INFO:tasks.workunit.client.0.vm04.stderr:pool '3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f' does not exist
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 -- 192.168.123.104:0/1184560020 >> v2:192.168.123.104:6800/3326026257 conn(0x7f40e00777a0 msgr2=0x7f40e0079c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 --2- 192.168.123.104:0/1184560020 >> v2:192.168.123.104:6800/3326026257 conn(0x7f40e00777a0 0x7f40e0079c60 secure :-1 s=READY pgs=4289 cs=0 l=1 rev1=1 crypto rx=0x7f4104004350 tx=0x7f4104009340 comp rx=0 tx=0).stop
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 -- 192.168.123.104:0/1184560020 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4118074bf0 msgr2=0x7f41181ac650 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 --2- 192.168.123.104:0/1184560020 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4118074bf0 0x7f41181ac650 secure :-1 s=READY pgs=3421 cs=0 l=1 rev1=1 crypto rx=0x7f4110098520 tx=0x7f41100077b0 comp rx=0 tx=0).stop
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 -- 192.168.123.104:0/1184560020 shutdown_connections
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 --2- 192.168.123.104:0/1184560020 >> v2:192.168.123.104:6800/3326026257 conn(0x7f40e00777a0 0x7f40e0079c60 unknown :-1 s=CLOSED pgs=4289 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 --2- 192.168.123.104:0/1184560020 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7f41181112c0 0x7f41181b09e0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 --2- 192.168.123.104:0/1184560020 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7f4118074bf0 0x7f41181ac650 unknown :-1 s=CLOSED pgs=3421 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 --2- 192.168.123.104:0/1184560020 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7f4118074270 0x7f41181ac110 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 -- 192.168.123.104:0/1184560020 >> 192.168.123.104:0/1184560020 conn(0x7f411806eb60 msgr2=0x7f411810ace0 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 -- 192.168.123.104:0/1184560020 shutdown_connections
2026-03-10T10:49:39.308 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.304+0000 7f410effd640 1 -- 192.168.123.104:0/1184560020 wait complete.
2026-03-10T10:49:39.319 INFO:tasks.workunit.client.0.vm04.stderr:+ ceph osd pool delete cc1e9eec-b4d8-493e-ae89-ce289b5abc51 cc1e9eec-b4d8-493e-ae89-ce289b5abc51 --yes-i-really-really-mean-it
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- 192.168.123.104:0/2929177246 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb250104e50 msgr2=0x7fb250105230 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 --2- 192.168.123.104:0/2929177246 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb250104e50 0x7fb250105230 secure :-1 s=READY pgs=3103 cs=0 l=1 rev1=1 crypto rx=0x7fb238009a30 tx=0x7fb23801c900 comp rx=0 tx=0).stop
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- 192.168.123.104:0/2929177246 shutdown_connections
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 --2- 192.168.123.104:0/2929177246 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb250109f80 0x7fb250111b00 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 --2- 192.168.123.104:0/2929177246 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb250105800 0x7fb250109850 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 --2- 192.168.123.104:0/2929177246 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb250104e50 0x7fb250105230 unknown :-1 s=CLOSED pgs=3103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- 192.168.123.104:0/2929177246 >> 192.168.123.104:0/2929177246 conn(0x7fb2501008f0 msgr2=0x7fb250102d10 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- 192.168.123.104:0/2929177246 shutdown_connections
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- 192.168.123.104:0/2929177246 wait complete.
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 Processor -- start
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- start start
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb250104e50 0x7fb25019f1c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 --2- >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb250105800 0x7fb25019f700 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 --2- >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb250109f80 0x7fb2501a3a90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_getmap magic: 0 -- 0x7fb250116c30 con 0x7fb250104e50
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- --> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] -- mon_getmap magic: 0 -- 0x7fb250116ab0 con 0x7fb250105800
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- --> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] -- mon_getmap magic: 0 -- 0x7fb250116db0 con 0x7fb250109f80
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb254e75640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb250104e50 0x7fb25019f1c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb254e75640 1 --2- >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb250104e50 0x7fb25019f1c0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.104:3300/0 says I am v2:192.168.123.104:50490/0 (socket says 192.168.123.104:50490)
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb254e75640 1 -- 192.168.123.104:0/2617738470 learned_addr learned my addr 192.168.123.104:0/2617738470 (peer_addr_for_me v2:192.168.123.104:0/0)
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb247fff640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb250105800 0x7fb25019f700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:39.380 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb254e75640 1 -- 192.168.123.104:0/2617738470 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb250109f80 msgr2=0x7fb2501a3a90 unknown :-1 s=STATE_CONNECTING l=1).mark_down
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb255676640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb250109f80 0x7fb2501a3a90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb254e75640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb250109f80 0x7fb2501a3a90 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb254e75640 1 -- 192.168.123.104:0/2617738470 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb250105800 msgr2=0x7fb25019f700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb254e75640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb250105800 0x7fb25019f700 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb254e75640 1 -- 192.168.123.104:0/2617738470 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb2501a4210 con 0x7fb250104e50
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb247fff640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb250105800 0x7fb25019f700 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed!
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb254e75640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb250104e50 0x7fb25019f1c0 secure :-1 s=READY pgs=3104 cs=0 l=1 rev1=1 crypto rx=0x7fb23801cde0 tx=0x7fb238002730 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb255676640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb250109f80 0x7fb2501a3a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request state changed!
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 <== mon.0 v2:192.168.123.104:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb238004280 con 0x7fb250104e50
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 <== mon.0 v2:192.168.123.104:3300/0 2 ==== config(25 keys) ==== 1029+0+0 (secure 0 0 0) 0x7fb238004420 con 0x7fb250104e50
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb2501a44a0 con 0x7fb250104e50
2026-03-10T10:49:39.381 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.376+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fb250077540 con 0x7fb250104e50
2026-03-10T10:49:39.383 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.380+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 <== mon.0 v2:192.168.123.104:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7fb2380c09b0 con 0x7fb250104e50
2026-03-10T10:49:39.383 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.380+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 <== mon.0 v2:192.168.123.104:3300/0 4 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7fb2380c8070 con 0x7fb250104e50
2026-03-10T10:49:39.384 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.380+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb250106330 con 0x7fb250104e50
2026-03-10T10:49:39.388 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.384+0000 7fb245ffb640 1 --2- 192.168.123.104:0/2617738470 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb2280777a0 0x7fb228079c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect
2026-03-10T10:49:39.388 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.384+0000 7fb247fff640 1 --2- 192.168.123.104:0/2617738470 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb2280777a0 0x7fb228079c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0
2026-03-10T10:49:39.388 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.384+0000 7fb247fff640 1 --2- 192.168.123.104:0/2617738470 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb2280777a0 0x7fb228079c60 secure :-1 s=READY pgs=4290 cs=0 l=1 rev1=1 crypto rx=0x7fb250102a70 tx=0x7fb240007400 comp rx=0 tx=0).ready entity=mgr.24422 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2026-03-10T10:49:39.388 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.384+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 <== mon.0 v2:192.168.123.104:3300/0 5 ==== osd_map(782..782 src has 251..782) ==== 7736+0+0 (secure 0 0 0) 0x7fb23813c780 con 0x7fb250104e50
2026-03-10T10:49:39.388 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.384+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=783}) -- 0x7fb228082cf0 con 0x7fb250104e50
2026-03-10T10:49:39.388
INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.384+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 <== mon.0 v2:192.168.123.104:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+195034 (secure 0 0 0) 0x7fb238016610 con 0x7fb250104e50
2026-03-10T10:49:39.477 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:39.472+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true} v 0) -- 0x7fb250105230 con 0x7fb250104e50
2026-03-10T10:49:39.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:39 vm07 bash[23367]: audit 2026-03-10T10:49:38.241460+0000 mon.c (mon.2) 493 : audit [INF] from='client.? 192.168.123.104:0/1184560020' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:39.517 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:39 vm07 bash[23367]: audit 2026-03-10T10:49:38.241691+0000 mon.a (mon.0) 3925 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:39.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:39 vm04 bash[28289]: audit 2026-03-10T10:49:38.241460+0000 mon.c (mon.2) 493 : audit [INF] from='client.? 192.168.123.104:0/1184560020' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:39.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:39 vm04 bash[28289]: audit 2026-03-10T10:49:38.241691+0000 mon.a (mon.0) 3925 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:39.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:39 vm04 bash[20742]: audit 2026-03-10T10:49:38.241460+0000 mon.c (mon.2) 493 : audit [INF] from='client.? 192.168.123.104:0/1184560020' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:39.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:39 vm04 bash[20742]: audit 2026-03-10T10:49:38.241691+0000 mon.a (mon.0) 3925 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:40.267 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:49:39 vm07 bash[48477]: debug there is no tcmu-runner data available
2026-03-10T10:49:40.295 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.292+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 <== mon.0 v2:192.168.123.104:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]=0 pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' removed v783) ==== 248+0+0 (secure 0 0 0) 0x7fb2381094d0 con 0x7fb250104e50
2026-03-10T10:49:40.307 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.304+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 <== mon.0 v2:192.168.123.104:3300/0 8 ==== osd_map(783..783 src has 251..783) ==== 296+0+0 (secure 0 0 0) 0x7fb2381729f0 con 0x7fb250104e50
2026-03-10T10:49:40.307 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.304+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_subscribe({osdmap=784}) -- 0x7fb2280839d0 con 0x7fb250104e50
2026-03-10T10:49:40.359 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 --> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] -- mon_command({"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true} v 0) -- 0x7fb2501a0520 con 0x7fb250104e50
2026-03-10T10:49:40.359 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb245ffb640 1 -- 192.168.123.104:0/2617738470 <== mon.0 v2:192.168.123.104:3300/0 9 ==== mon_command_ack([{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]=0 pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' does not exist v783) ==== 255+0+0 (secure 0 0 0) 0x7fb23810e380 con 0x7fb250104e50
2026-03-10T10:49:40.359 INFO:tasks.workunit.client.0.vm04.stderr:pool 'cc1e9eec-b4d8-493e-ae89-ce289b5abc51' does not exist
2026-03-10T10:49:40.361 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb2280777a0 msgr2=0x7fb228079c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:40.361 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 --2- 192.168.123.104:0/2617738470 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb2280777a0 0x7fb228079c60 secure :-1 s=READY pgs=4290 cs=0 l=1 rev1=1 crypto rx=0x7fb250102a70 tx=0x7fb240007400 comp rx=0 tx=0).stop
2026-03-10T10:49:40.361 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb250104e50 msgr2=0x7fb25019f1c0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2026-03-10T10:49:40.361 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb250104e50 0x7fb25019f1c0 secure :-1 s=READY pgs=3104 cs=0 l=1 rev1=1 crypto rx=0x7fb23801cde0 tx=0x7fb238002730 comp rx=0 tx=0).stop
2026-03-10T10:49:40.362 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 shutdown_connections
2026-03-10T10:49:40.362 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 --2- 192.168.123.104:0/2617738470 >> v2:192.168.123.104:6800/3326026257 conn(0x7fb2280777a0 0x7fb228079c60 unknown :-1 s=CLOSED pgs=4290 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:40.362 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.104:3301/0,v1:192.168.123.104:6790/0] conn(0x7fb250109f80 0x7fb2501a3a90 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:40.362 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] conn(0x7fb250105800 0x7fb25019f700 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:40.362 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 --2- 192.168.123.104:0/2617738470 >> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] conn(0x7fb250104e50 0x7fb25019f1c0 unknown :-1 s=CLOSED pgs=3104 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop
2026-03-10T10:49:40.362 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.356+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 >> 192.168.123.104:0/2617738470 conn(0x7fb2501008f0 msgr2=0x7fb250100f20 unknown :-1 s=STATE_NONE l=0).mark_down
2026-03-10T10:49:40.362 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.360+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 shutdown_connections
2026-03-10T10:49:40.362 INFO:tasks.workunit.client.0.vm04.stderr:2026-03-10T10:49:40.360+0000 7fb257100640 1 -- 192.168.123.104:0/2617738470 wait complete.
2026-03-10T10:49:40.373 INFO:tasks.workunit.client.0.vm04.stdout:OK
2026-03-10T10:49:40.374 INFO:tasks.workunit.client.0.vm04.stderr:+ echo OK
2026-03-10T10:49:40.374 INFO:teuthology.orchestra.run:Running command with timeout 3600
2026-03-10T10:49:40.374 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-10T10:49:40.382 INFO:tasks.workunit:Stopping ['rados/test.sh', 'rados/test_pool_quota.sh'] on client.0...
2026-03-10T10:49:40.382 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2026-03-10T10:49:40.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:40 vm04 bash[28289]: cluster 2026-03-10T10:49:38.847552+0000 mgr.y (mgr.24422) 1327 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:40 vm04 bash[28289]: audit 2026-03-10T10:49:39.244494+0000 mon.a (mon.0) 3926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:40 vm04 bash[28289]: cluster 2026-03-10T10:49:39.250430+0000 mon.a (mon.0) 3927 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:40 vm04 bash[28289]: audit 2026-03-10T10:49:39.307415+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.104:0/1184560020' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:40 vm04 bash[28289]: audit 2026-03-10T10:49:39.307829+0000 mon.a (mon.0) 3928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:40 vm04 bash[28289]: audit 2026-03-10T10:49:39.479289+0000 mon.a (mon.0) 3929 : audit [INF] from='client.? 192.168.123.104:0/2617738470' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:40 vm04 bash[20742]: cluster 2026-03-10T10:49:38.847552+0000 mgr.y (mgr.24422) 1327 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:40 vm04 bash[20742]: audit 2026-03-10T10:49:39.244494+0000 mon.a (mon.0) 3926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:40 vm04 bash[20742]: cluster 2026-03-10T10:49:39.250430+0000 mon.a (mon.0) 3927 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:40 vm04 bash[20742]: audit 2026-03-10T10:49:39.307415+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.104:0/1184560020' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:40 vm04 bash[20742]: audit 2026-03-10T10:49:39.307829+0000 mon.a (mon.0) 3928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:40.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:40 vm04 bash[20742]: audit 2026-03-10T10:49:39.479289+0000 mon.a (mon.0) 3929 : audit [INF] from='client.? 192.168.123.104:0/2617738470' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:40 vm07 bash[23367]: cluster 2026-03-10T10:49:38.847552+0000 mgr.y (mgr.24422) 1327 : cluster [DBG] pgmap v1768: 188 pgs: 188 active+clean; 507 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:40 vm07 bash[23367]: audit 2026-03-10T10:49:39.244494+0000 mon.a (mon.0) 3926 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T10:49:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:40 vm07 bash[23367]: cluster 2026-03-10T10:49:39.250430+0000 mon.a (mon.0) 3927 : cluster [DBG] osdmap e782: 8 total, 8 up, 8 in
2026-03-10T10:49:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:40 vm07 bash[23367]: audit 2026-03-10T10:49:39.307415+0000 mon.c (mon.2) 494 : audit [INF] from='client.? 192.168.123.104:0/1184560020' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:40 vm07 bash[23367]: audit 2026-03-10T10:49:39.307829+0000 mon.a (mon.0) 3928 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "pool2": "3d08dfcd-dbfb-4abd-b8a6-c5dc390b132f", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:40.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:40 vm07 bash[23367]: audit 2026-03-10T10:49:39.479289+0000 mon.a (mon.0) 3929 : audit [INF] from='client.? 192.168.123.104:0/2617738470' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:40.823 DEBUG:teuthology.parallel:result is None
2026-03-10T10:49:40.823 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0
2026-03-10T10:49:40.831 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0
2026-03-10T10:49:40.831 DEBUG:teuthology.orchestra.run.vm04:> rmdir -- /home/ubuntu/cephtest/mnt.0
2026-03-10T10:49:40.877 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0
2026-03-10T10:49:40.877 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-10T10:49:40.883 INFO:tasks.cephadm:Teardown begin
2026-03-10T10:49:40.883 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T10:49:40.929 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T10:49:40.944 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-10T10:49:40.944 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 -- ceph mgr module disable cephadm
2026-03-10T10:49:41.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:41 vm04 bash[28289]: audit 2026-03-10T10:49:39.867300+0000 mgr.y (mgr.24422) 1328 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:41.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:41 vm04 bash[28289]: cluster 2026-03-10T10:49:40.252982+0000 mon.a (mon.0) 3930 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T10:49:41.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:41 vm04 bash[28289]: audit 2026-03-10T10:49:40.296272+0000 mon.a (mon.0) 3931 : audit [INF] from='client.? 192.168.123.104:0/2617738470' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T10:49:41.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:41 vm04 bash[28289]: cluster 2026-03-10T10:49:40.316487+0000 mon.a (mon.0) 3932 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in
2026-03-10T10:49:41.703 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:41 vm04 bash[28289]: audit 2026-03-10T10:49:40.361342+0000 mon.a (mon.0) 3933 : audit [INF] from='client.? 192.168.123.104:0/2617738470' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:41 vm04 bash[20742]: audit 2026-03-10T10:49:39.867300+0000 mgr.y (mgr.24422) 1328 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:41 vm04 bash[20742]: cluster 2026-03-10T10:49:40.252982+0000 mon.a (mon.0) 3930 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T10:49:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:41 vm04 bash[20742]: audit 2026-03-10T10:49:40.296272+0000 mon.a (mon.0) 3931 : audit [INF] from='client.? 192.168.123.104:0/2617738470' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T10:49:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:41 vm04 bash[20742]: cluster 2026-03-10T10:49:40.316487+0000 mon.a (mon.0) 3932 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in
2026-03-10T10:49:41.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:41 vm04 bash[20742]: audit 2026-03-10T10:49:40.361342+0000 mon.a (mon.0) 3933 : audit [INF] from='client.? 192.168.123.104:0/2617738470' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:41 vm07 bash[23367]: audit 2026-03-10T10:49:39.867300+0000 mgr.y (mgr.24422) 1328 : audit [DBG] from='client.24406 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T10:49:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:41 vm07 bash[23367]: cluster 2026-03-10T10:49:40.252982+0000 mon.a (mon.0) 3930 : cluster [INF] Health check cleared: POOL_FULL (was: 1 pool(s) full)
2026-03-10T10:49:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:41 vm07 bash[23367]: audit 2026-03-10T10:49:40.296272+0000 mon.a (mon.0) 3931 : audit [INF] from='client.? 192.168.123.104:0/2617738470' entity='client.admin' cmd='[{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]': finished
2026-03-10T10:49:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:41 vm07 bash[23367]: cluster 2026-03-10T10:49:40.316487+0000 mon.a (mon.0) 3932 : cluster [DBG] osdmap e783: 8 total, 8 up, 8 in
2026-03-10T10:49:41.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:41 vm07 bash[23367]: audit 2026-03-10T10:49:40.361342+0000 mon.a (mon.0) 3933 : audit [INF] from='client.? 192.168.123.104:0/2617738470' entity='client.admin' cmd=[{"prefix": "osd pool delete", "pool": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "pool2": "cc1e9eec-b4d8-493e-ae89-ce289b5abc51", "yes_i_really_really_mean_it": true}]: dispatch
2026-03-10T10:49:42.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:42 vm04 bash[28289]: cluster 2026-03-10T10:49:40.848067+0000 mgr.y (mgr.24422) 1329 : cluster [DBG] pgmap v1771: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:42.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:42 vm04 bash[20742]: cluster 2026-03-10T10:49:40.848067+0000 mgr.y (mgr.24422) 1329 : cluster [DBG] pgmap v1771: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:42.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:42 vm07 bash[23367]: cluster 2026-03-10T10:49:40.848067+0000 mgr.y (mgr.24422) 1329 : cluster [DBG] pgmap v1771: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T10:49:43.452 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:43 vm04 bash[20997]: ::ffff:192.168.123.107 - - [10/Mar/2026:10:49:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-10T10:49:44.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:44 vm04 bash[28289]: cluster 2026-03-10T10:49:42.848367+0000 mgr.y (mgr.24422) 1330 : cluster [DBG] pgmap v1772: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:49:44.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:44 vm04 bash[28289]: audit 2026-03-10T10:49:43.765568+0000 mon.a (mon.0) 3934 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:49:44.702 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:44 vm04 bash[28289]: audit 2026-03-10T10:49:43.766708+0000 mon.a (mon.0) 3935 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:49:44.702 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:44 vm04 bash[20742]: cluster 2026-03-10T10:49:42.848367+0000 mgr.y (mgr.24422) 1330 : cluster [DBG] pgmap v1772: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:49:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:44 vm04 bash[20742]: audit 2026-03-10T10:49:43.765568+0000 mon.a (mon.0) 3934 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:49:44.703 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:44 vm04 bash[20742]: audit 2026-03-10T10:49:43.766708+0000 mon.a (mon.0) 3935 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:49:44.766 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:44 vm07 bash[23367]: cluster 2026-03-10T10:49:42.848367+0000 mgr.y (mgr.24422) 1330 : cluster [DBG] pgmap v1772: 164 pgs: 164 active+clean; 455 KiB data, 1.1 GiB used, 159 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T10:49:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:44 vm07 bash[23367]: audit 2026-03-10T10:49:43.765568+0000 mon.a (mon.0) 3934 : audit [INF] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y'
2026-03-10T10:49:44.767 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:44 vm07 bash[23367]: audit 2026-03-10T10:49:43.766708+0000 mon.a (mon.0) 3935 : audit [DBG] from='mgr.24422 192.168.123.104:0/709660610' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T10:49:45.612
INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/mon.c/config 2026-03-10T10:49:45.781 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:49:45.776+0000 7fb3e50f5640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T10:49:45.781 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:49:45.776+0000 7fb3e50f5640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-10T10:49:45.781 INFO:teuthology.orchestra.run.vm04.stderr:2026-03-10T10:49:45.776+0000 7fb3e50f5640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T10:49:45.781 INFO:teuthology.orchestra.run.vm04.stderr:[errno 21] error connecting to the cluster 2026-03-10T10:49:45.824 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T10:49:45.824 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-10T10:49:45.824 DEBUG:teuthology.orchestra.run.vm04:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T10:49:45.827 DEBUG:teuthology.orchestra.run.vm07:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T10:49:45.830 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T10:49:45.830 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-10T10:49:45.830 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.a 2026-03-10T10:49:45.879 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:45 vm04 systemd[1]: Stopping Ceph mon.a for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:49:46.051 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.a.service' 2026-03-10T10:49:46.104 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:45 vm04 bash[20742]: debug 2026-03-10T10:49:45.908+0000 7fe7dfccc640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T10:49:46.104 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:45 vm04 bash[20742]: debug 2026-03-10T10:49:45.908+0000 7fe7dfccc640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-10T10:49:46.104 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:46 vm04 bash[131980]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-mon-a 2026-03-10T10:49:46.104 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:46 vm04 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.a.service: Deactivated successfully. 2026-03-10T10:49:46.104 INFO:journalctl@ceph.mon.a.vm04.stdout:Mar 10 10:49:46 vm04 systemd[1]: Stopped Ceph mon.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 
2026-03-10T10:49:46.104 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:45 vm04 bash[20997]: [10/Mar/2026:10:49:45] ENGINE Bus STOPPING 2026-03-10T10:49:46.104 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:46 vm04 bash[20997]: [10/Mar/2026:10:49:46] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T10:49:46.104 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:46 vm04 bash[20997]: [10/Mar/2026:10:49:46] ENGINE Bus STOPPED 2026-03-10T10:49:46.104 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:46 vm04 bash[20997]: [10/Mar/2026:10:49:46] ENGINE Bus STARTING 2026-03-10T10:49:46.109 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:49:46.109 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T10:49:46.109 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-10T10:49:46.109 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.c 2026-03-10T10:49:46.157 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:46 vm04 bash[20997]: [10/Mar/2026:10:49:46] ENGINE Serving on http://:::9283 2026-03-10T10:49:46.157 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:46 vm04 bash[20997]: [10/Mar/2026:10:49:46] ENGINE Bus STARTED 2026-03-10T10:49:46.327 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.c.service' 2026-03-10T10:49:46.375 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:46 vm04 systemd[1]: Stopping Ceph mon.c for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:49:46.375 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:46 vm04 bash[28289]: debug 2026-03-10T10:49:46.220+0000 7fba29210640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T10:49:46.375 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:46 vm04 bash[28289]: debug 2026-03-10T10:49:46.220+0000 7fba29210640 -1 mon.c@2(peon) e3 *** Got Signal Terminated *** 2026-03-10T10:49:46.375 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:46 vm04 bash[132083]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-mon-c 2026-03-10T10:49:46.375 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:46 vm04 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.c.service: Deactivated successfully. 2026-03-10T10:49:46.375 INFO:journalctl@ceph.mon.c.vm04.stdout:Mar 10 10:49:46 vm04 systemd[1]: Stopped Ceph mon.c for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:49:46.380 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:49:46.380 INFO:tasks.cephadm.mon.b:Stopped mon.c 2026-03-10T10:49:46.380 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-10T10:49:46.380 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.b 2026-03-10T10:49:46.641 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:46 vm07 systemd[1]: Stopping Ceph mon.b for e4c1c9d6-1c68-11f1-a9bd-116050875839... 
2026-03-10T10:49:46.641 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:46 vm07 bash[23367]: debug 2026-03-10T10:49:46.414+0000 7f55b183a640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T10:49:46.641 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 10:49:46 vm07 bash[23367]: debug 2026-03-10T10:49:46.414+0000 7f55b183a640 -1 mon.b@1(peon) e3 *** Got Signal Terminated *** 2026-03-10T10:49:46.716 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mon.b.service' 2026-03-10T10:49:46.730 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:49:46.730 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-10T10:49:46.730 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 2026-03-10T10:49:46.730 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mgr.y 2026-03-10T10:49:46.880 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mgr.y.service' 2026-03-10T10:49:46.881 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:46 vm04 systemd[1]: Stopping Ceph mgr.y for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:49:46.881 INFO:journalctl@ceph.mgr.y.vm04.stdout:Mar 10 10:49:46 vm04 bash[132191]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-mgr-y 2026-03-10T10:49:46.890 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:49:46.890 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-10T10:49:46.890 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 2026-03-10T10:49:46.890 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mgr.x 2026-03-10T10:49:46.971 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 10:49:46 vm07 systemd[1]: Stopping Ceph mgr.x for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:49:47.044 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@mgr.x.service' 2026-03-10T10:49:47.055 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:49:47.055 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-10T10:49:47.055 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T10:49:47.055 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.0 2026-03-10T10:49:47.202 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:49:47 vm04 systemd[1]: Stopping Ceph osd.0 for e4c1c9d6-1c68-11f1-a9bd-116050875839... 
2026-03-10T10:49:47.202 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:49:47 vm04 bash[31174]: debug 2026-03-10T10:49:47.104+0000 7f7b6b38f640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T10:49:47.202 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:49:47 vm04 bash[31174]: debug 2026-03-10T10:49:47.104+0000 7f7b6b38f640 -1 osd.0 783 *** Got signal Terminated *** 2026-03-10T10:49:47.202 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:49:47 vm04 bash[31174]: debug 2026-03-10T10:49:47.104+0000 7f7b6b38f640 -1 osd.0 783 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:49:52.452 INFO:journalctl@ceph.osd.0.vm04.stdout:Mar 10 10:49:52 vm04 bash[132278]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-osd-0 2026-03-10T10:49:52.520 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.0.service' 2026-03-10T10:49:52.531 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:49:52.531 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T10:49:52.531 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-10T10:49:52.531 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.1 2026-03-10T10:49:52.952 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:49:52 vm04 systemd[1]: Stopping Ceph osd.1 for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:49:52.952 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:49:52 vm04 bash[37366]: debug 2026-03-10T10:49:52.616+0000 7f2d75314640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T10:49:52.952 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:49:52 vm04 bash[37366]: debug 2026-03-10T10:49:52.616+0000 7f2d75314640 -1 osd.1 783 *** Got signal Terminated *** 2026-03-10T10:49:52.952 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:49:52 vm04 bash[37366]: debug 2026-03-10T10:49:52.616+0000 7f2d75314640 -1 osd.1 783 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:49:57.926 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 10 10:49:57 vm04 bash[132458]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-osd-1 2026-03-10T10:49:57.965 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.1.service' 2026-03-10T10:49:57.979 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:49:57.979 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T10:49:57.979 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-10T10:49:57.979 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.2 2026-03-10T10:49:58.202 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:49:58 vm04 systemd[1]: Stopping Ceph osd.2 for e4c1c9d6-1c68-11f1-a9bd-116050875839... 
2026-03-10T10:49:58.202 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:49:58 vm04 bash[43416]: debug 2026-03-10T10:49:58.060+0000 7f03ba62c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T10:49:58.202 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:49:58 vm04 bash[43416]: debug 2026-03-10T10:49:58.060+0000 7f03ba62c640 -1 osd.2 783 *** Got signal Terminated *** 2026-03-10T10:49:58.202 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:49:58 vm04 bash[43416]: debug 2026-03-10T10:49:58.060+0000 7f03ba62c640 -1 osd.2 783 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:50:03.423 INFO:journalctl@ceph.osd.2.vm04.stdout:Mar 10 10:50:03 vm04 bash[132638]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-osd-2 2026-03-10T10:50:03.447 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.2.service' 2026-03-10T10:50:03.458 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:50:03.458 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-10T10:50:03.458 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-10T10:50:03.458 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.3 2026-03-10T10:50:03.702 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:50:03 vm04 systemd[1]: Stopping Ceph osd.3 for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:50:03.702 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:50:03 vm04 bash[49304]: debug 2026-03-10T10:50:03.540+0000 7f60dedb9640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T10:50:03.702 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:50:03 vm04 bash[49304]: debug 2026-03-10T10:50:03.540+0000 7f60dedb9640 -1 osd.3 783 *** Got signal Terminated *** 2026-03-10T10:50:03.702 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:50:03 vm04 bash[49304]: debug 2026-03-10T10:50:03.540+0000 7f60dedb9640 -1 osd.3 783 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:50:08.840 INFO:journalctl@ceph.osd.3.vm04.stdout:Mar 10 10:50:08 vm04 bash[132825]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-osd-3 2026-03-10T10:50:08.861 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.3.service' 2026-03-10T10:50:08.871 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:50:08.871 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-10T10:50:08.871 INFO:tasks.cephadm.osd.4:Stopping osd.4... 2026-03-10T10:50:08.871 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.4 2026-03-10T10:50:09.266 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:50:08 vm07 systemd[1]: Stopping Ceph osd.4 for e4c1c9d6-1c68-11f1-a9bd-116050875839... 
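The "received signal: Terminated from /sbin/docker-init ... (PID: 1)" lines show the delivery chain inside each container: systemctl stop asks the container runtime to stop the container, the runtime signals docker-init (PID 1 inside the container), and docker-init forwards SIGTERM to the actual ceph daemon. A toy forwarder showing the same idea, assuming Linux and Python 3:

    import signal
    import subprocess
    import sys

    # toy version of what an init shim like /sbin/docker-init does: run the
    # real daemon as a child and forward termination signals to it
    # usage: python forwarder.py <daemon> [args...]
    child = subprocess.Popen(sys.argv[1:])
    signal.signal(signal.SIGTERM, lambda *_: child.terminate())
    sys.exit(child.wait())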
2026-03-10T10:50:09.267 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:50:08 vm07 bash[26644]: debug 2026-03-10T10:50:08.906+0000 7f4621b84640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T10:50:09.267 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:50:08 vm07 bash[26644]: debug 2026-03-10T10:50:08.906+0000 7f4621b84640 -1 osd.4 783 *** Got signal Terminated *** 2026-03-10T10:50:09.267 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:50:08 vm07 bash[26644]: debug 2026-03-10T10:50:08.906+0000 7f4621b84640 -1 osd.4 783 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:50:11.266 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:50:10 vm07 bash[51324]: ts=2026-03-10T10:50:10.876Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-10T10:50:11.266 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:50:10 vm07 bash[51324]: ts=2026-03-10T10:50:10.878Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-10T10:50:11.267 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:50:10 vm07 bash[51324]: ts=2026-03-10T10:50:10.878Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-10T10:50:11.267 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:50:10 vm07 bash[51324]: ts=2026-03-10T10:50:10.878Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-10T10:50:11.267 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:50:10 vm07 bash[51324]: ts=2026-03-10T10:50:10.878Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-10T10:50:11.267 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 10:50:10 vm07 bash[51324]: ts=2026-03-10T10:50:10.879Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.104:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.104:8765: connect: connection refused" 2026-03-10T10:50:13.516 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:13 vm07 bash[32409]: debug 2026-03-10T10:50:13.238+0000 7f46d1c44640 -1 osd.5 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:48.014878+0000 front 2026-03-10T10:49:48.014917+0000 
(oldest deadline 2026-03-10T10:50:12.714614+0000) 2026-03-10T10:50:13.517 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:50:13 vm07 bash[26644]: debug 2026-03-10T10:50:13.450+0000 7f461d99c640 -1 osd.4 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:49.171900+0000 front 2026-03-10T10:49:49.171937+0000 (oldest deadline 2026-03-10T10:50:13.271471+0000) 2026-03-10T10:50:14.244 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 10:50:14 vm07 bash[56624]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-osd-4 2026-03-10T10:50:14.279 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.4.service' 2026-03-10T10:50:14.289 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:50:14.289 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-10T10:50:14.289 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-10T10:50:14.289 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.5 2026-03-10T10:50:14.516 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:14 vm07 bash[32409]: debug 2026-03-10T10:50:14.246+0000 7f46d1c44640 -1 osd.5 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:48.014878+0000 front 2026-03-10T10:49:48.014917+0000 (oldest deadline 2026-03-10T10:50:12.714614+0000) 2026-03-10T10:50:14.516 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:14 vm07 systemd[1]: Stopping Ceph osd.5 for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:50:14.516 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:14 vm07 bash[32409]: debug 2026-03-10T10:50:14.366+0000 7f46d5e2c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T10:50:14.516 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:14 vm07 bash[32409]: debug 2026-03-10T10:50:14.366+0000 7f46d5e2c640 -1 osd.5 783 *** Got signal Terminated *** 2026-03-10T10:50:14.516 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:14 vm07 bash[32409]: debug 2026-03-10T10:50:14.366+0000 7f46d5e2c640 -1 osd.5 783 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:50:15.516 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:15 vm07 bash[32409]: debug 2026-03-10T10:50:15.222+0000 7f46d1c44640 -1 osd.5 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:48.014878+0000 front 2026-03-10T10:49:48.014917+0000 (oldest deadline 2026-03-10T10:50:12.714614+0000) 2026-03-10T10:50:16.016 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:15 vm07 bash[43999]: debug 2026-03-10T10:50:15.754+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:16.516 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:16 vm07 bash[32409]: debug 2026-03-10T10:50:16.218+0000 7f46d1c44640 -1 osd.5 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:48.014878+0000 front 2026-03-10T10:49:48.014917+0000 (oldest deadline 2026-03-10T10:50:12.714614+0000) 2026-03-10T10:50:17.016 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:16 vm07 bash[43999]: debug 2026-03-10T10:50:16.706+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 
192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:17.516 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:17 vm07 bash[32409]: debug 2026-03-10T10:50:17.186+0000 7f46d1c44640 -1 osd.5 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:48.014878+0000 front 2026-03-10T10:49:48.014917+0000 (oldest deadline 2026-03-10T10:50:12.714614+0000) 2026-03-10T10:50:18.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:17 vm07 bash[38167]: debug 2026-03-10T10:50:17.778+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:51.088053+0000 front 2026-03-10T10:49:51.088302+0000 (oldest deadline 2026-03-10T10:50:16.987886+0000) 2026-03-10T10:50:18.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:17 vm07 bash[43999]: debug 2026-03-10T10:50:17.682+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:18.516 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:18 vm07 bash[32409]: debug 2026-03-10T10:50:18.210+0000 7f46d1c44640 -1 osd.5 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:48.014878+0000 front 2026-03-10T10:49:48.014917+0000 (oldest deadline 2026-03-10T10:50:12.714614+0000) 2026-03-10T10:50:19.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:18 vm07 bash[38167]: debug 2026-03-10T10:50:18.766+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:51.088053+0000 front 2026-03-10T10:49:51.088302+0000 (oldest deadline 2026-03-10T10:50:16.987886+0000) 2026-03-10T10:50:19.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:18 vm07 bash[38167]: debug 2026-03-10T10:50:18.766+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:56.988297+0000 front 2026-03-10T10:49:56.988373+0000 (oldest deadline 2026-03-10T10:50:18.688089+0000) 2026-03-10T10:50:19.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:18 vm07 bash[43999]: debug 2026-03-10T10:50:18.658+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:19.510 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:19 vm07 bash[32409]: debug 2026-03-10T10:50:19.214+0000 7f46d1c44640 -1 osd.5 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:48.014878+0000 front 2026-03-10T10:49:48.014917+0000 (oldest deadline 2026-03-10T10:50:12.714614+0000) 2026-03-10T10:50:19.510 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:19 vm07 bash[32409]: debug 2026-03-10T10:50:19.214+0000 7f46d1c44640 -1 osd.5 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:55.015020+0000 front 2026-03-10T10:49:55.015377+0000 (oldest deadline 2026-03-10T10:50:18.514975+0000) 2026-03-10T10:50:19.510 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 10:50:19 vm07 bash[56799]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-osd-5 2026-03-10T10:50:19.736 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u 
ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.5.service' 2026-03-10T10:50:19.746 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:50:19.746 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-10T10:50:19.746 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-10T10:50:19.746 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.6 2026-03-10T10:50:19.767 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:19 vm07 bash[43999]: debug 2026-03-10T10:50:19.622+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:20.016 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:19 vm07 bash[38167]: debug 2026-03-10T10:50:19.762+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:51.088053+0000 front 2026-03-10T10:49:51.088302+0000 (oldest deadline 2026-03-10T10:50:16.987886+0000) 2026-03-10T10:50:20.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:19 vm07 bash[38167]: debug 2026-03-10T10:50:19.762+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:56.988297+0000 front 2026-03-10T10:49:56.988373+0000 (oldest deadline 2026-03-10T10:50:18.688089+0000) 2026-03-10T10:50:20.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:19 vm07 systemd[1]: Stopping Ceph osd.6 for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:50:20.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:19 vm07 bash[38167]: debug 2026-03-10T10:50:19.898+0000 7fe83cb12640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T10:50:20.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:19 vm07 bash[38167]: debug 2026-03-10T10:50:19.898+0000 7fe83cb12640 -1 osd.6 783 *** Got signal Terminated *** 2026-03-10T10:50:20.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:19 vm07 bash[38167]: debug 2026-03-10T10:50:19.898+0000 7fe83cb12640 -1 osd.6 783 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:50:21.016 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:20 vm07 bash[38167]: debug 2026-03-10T10:50:20.758+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:51.088053+0000 front 2026-03-10T10:49:51.088302+0000 (oldest deadline 2026-03-10T10:50:16.987886+0000) 2026-03-10T10:50:21.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:20 vm07 bash[38167]: debug 2026-03-10T10:50:20.758+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:56.988297+0000 front 2026-03-10T10:49:56.988373+0000 (oldest deadline 2026-03-10T10:50:18.688089+0000) 2026-03-10T10:50:21.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:20 vm07 bash[43999]: debug 2026-03-10T10:50:20.638+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:21.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:20 vm07 bash[43999]: debug 2026-03-10T10:50:20.638+0000 
7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:54.869136+0000 front 2026-03-10T10:49:54.869048+0000 (oldest deadline 2026-03-10T10:50:20.168947+0000) 2026-03-10T10:50:22.016 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:21 vm07 bash[38167]: debug 2026-03-10T10:50:21.790+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:51.088053+0000 front 2026-03-10T10:49:51.088302+0000 (oldest deadline 2026-03-10T10:50:16.987886+0000) 2026-03-10T10:50:22.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:21 vm07 bash[38167]: debug 2026-03-10T10:50:21.790+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:56.988297+0000 front 2026-03-10T10:49:56.988373+0000 (oldest deadline 2026-03-10T10:50:18.688089+0000) 2026-03-10T10:50:22.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:21 vm07 bash[43999]: debug 2026-03-10T10:50:21.638+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:22.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:21 vm07 bash[43999]: debug 2026-03-10T10:50:21.638+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:54.869136+0000 front 2026-03-10T10:49:54.869048+0000 (oldest deadline 2026-03-10T10:50:20.168947+0000) 2026-03-10T10:50:23.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:22 vm07 bash[38167]: debug 2026-03-10T10:50:22.834+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:51.088053+0000 front 2026-03-10T10:49:51.088302+0000 (oldest deadline 2026-03-10T10:50:16.987886+0000) 2026-03-10T10:50:23.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:22 vm07 bash[38167]: debug 2026-03-10T10:50:22.834+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:56.988297+0000 front 2026-03-10T10:49:56.988373+0000 (oldest deadline 2026-03-10T10:50:18.688089+0000) 2026-03-10T10:50:23.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:22 vm07 bash[43999]: debug 2026-03-10T10:50:22.626+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:23.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:22 vm07 bash[43999]: debug 2026-03-10T10:50:22.626+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:54.869136+0000 front 2026-03-10T10:49:54.869048+0000 (oldest deadline 2026-03-10T10:50:20.168947+0000) 2026-03-10T10:50:24.016 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:23 vm07 bash[38167]: debug 2026-03-10T10:50:23.834+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:51.088053+0000 front 2026-03-10T10:49:51.088302+0000 (oldest deadline 2026-03-10T10:50:16.987886+0000) 2026-03-10T10:50:24.017 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:23 vm07 bash[38167]: debug 2026-03-10T10:50:23.834+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 
since back 2026-03-10T10:49:56.988297+0000 front 2026-03-10T10:49:56.988373+0000 (oldest deadline 2026-03-10T10:50:18.688089+0000) 2026-03-10T10:50:24.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:23 vm07 bash[43999]: debug 2026-03-10T10:50:23.590+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:24.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:23 vm07 bash[43999]: debug 2026-03-10T10:50:23.590+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:54.869136+0000 front 2026-03-10T10:49:54.869048+0000 (oldest deadline 2026-03-10T10:50:20.168947+0000) 2026-03-10T10:50:24.852 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:24 vm07 bash[43999]: debug 2026-03-10T10:50:24.542+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:24.852 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:24 vm07 bash[43999]: debug 2026-03-10T10:50:24.542+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:54.869136+0000 front 2026-03-10T10:49:54.869048+0000 (oldest deadline 2026-03-10T10:50:20.168947+0000) 2026-03-10T10:50:25.216 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:24 vm07 bash[38167]: debug 2026-03-10T10:50:24.846+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:51.088053+0000 front 2026-03-10T10:49:51.088302+0000 (oldest deadline 2026-03-10T10:50:16.987886+0000) 2026-03-10T10:50:25.216 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:24 vm07 bash[38167]: debug 2026-03-10T10:50:24.846+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:56.988297+0000 front 2026-03-10T10:49:56.988373+0000 (oldest deadline 2026-03-10T10:50:18.688089+0000) 2026-03-10T10:50:25.216 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:24 vm07 bash[38167]: debug 2026-03-10T10:50:24.846+0000 7fe83912b640 -1 osd.6 783 heartbeat_check: no reply from 192.168.123.104:6811 osd.2 since back 2026-03-10T10:49:58.688340+0000 front 2026-03-10T10:49:58.688383+0000 (oldest deadline 2026-03-10T10:50:24.588279+0000) 2026-03-10T10:50:25.216 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 10:50:24 vm07 bash[56986]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-osd-6 2026-03-10T10:50:25.260 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.6.service' 2026-03-10T10:50:25.270 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:50:25.271 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-10T10:50:25.271 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-10T10:50:25.271 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.7 2026-03-10T10:50:25.516 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:25 vm07 systemd[1]: Stopping Ceph osd.7 for e4c1c9d6-1c68-11f1-a9bd-116050875839... 
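The heartbeat_check warnings flooding this part of the log are expected teardown noise: osd.5, osd.6 and osd.7 on vm07 are still running while their peers on vm04 (osd.0 through osd.3, at 192.168.123.104) have already been stopped, and with the monitors already down nothing is left to mark the dead OSDs down, so the survivors keep warning until they are stopped themselves (note the osdmap epoch stays at 783 throughout). Each entry names the unreachable peer, the last reply seen on the back and front heartbeat networks, and the oldest missed deadline. A small parser for these lines, with a regex derived from the format seen here:

    import re
    from datetime import datetime

    TS = "%Y-%m-%dT%H:%M:%S.%f%z"
    # pattern derived from the heartbeat_check lines in this log
    PAT = re.compile(
        r"heartbeat_check: no reply from (?P<addr>\S+) (?P<peer>osd\.\d+) "
        r"since back (?P<back>\S+) front (?P<front>\S+) "
        r"\(oldest deadline (?P<deadline>\S+)\)"
    )

    def seconds_past_deadline(line, now):
        # how long the named peer has been past its heartbeat deadline
        m = PAT.search(line)
        if not m:
            return None
        return m["peer"], (now - datetime.strptime(m["deadline"], TS)).total_seconds()

    line = ("osd.5 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 "
            "since back 2026-03-10T10:49:48.014878+0000 front 2026-03-10T10:49:48.014917+0000 "
            "(oldest deadline 2026-03-10T10:50:12.714614+0000)")
    now = datetime.strptime("2026-03-10T10:50:25.000000+0000", TS)
    print(seconds_past_deadline(line, now))  # ('osd.0', 12.285386)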
2026-03-10T10:50:25.517 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:25 vm07 bash[43999]: debug 2026-03-10T10:50:25.350+0000 7faa1a93c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T10:50:25.517 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:25 vm07 bash[43999]: debug 2026-03-10T10:50:25.350+0000 7faa1a93c640 -1 osd.7 783 *** Got signal Terminated *** 2026-03-10T10:50:25.517 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:25 vm07 bash[43999]: debug 2026-03-10T10:50:25.350+0000 7faa1a93c640 -1 osd.7 783 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T10:50:26.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:25 vm07 bash[43999]: debug 2026-03-10T10:50:25.586+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:26.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:25 vm07 bash[43999]: debug 2026-03-10T10:50:25.586+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:54.869136+0000 front 2026-03-10T10:49:54.869048+0000 (oldest deadline 2026-03-10T10:50:20.168947+0000) 2026-03-10T10:50:26.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:25 vm07 bash[43999]: debug 2026-03-10T10:50:25.586+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6811 osd.2 since back 2026-03-10T10:50:00.169337+0000 front 2026-03-10T10:50:00.169294+0000 (oldest deadline 2026-03-10T10:50:25.469149+0000) 2026-03-10T10:50:27.016 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:26 vm07 bash[43999]: debug 2026-03-10T10:50:26.562+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:27.016 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:26 vm07 bash[43999]: debug 2026-03-10T10:50:26.562+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:54.869136+0000 front 2026-03-10T10:49:54.869048+0000 (oldest deadline 2026-03-10T10:50:20.168947+0000) 2026-03-10T10:50:27.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:26 vm07 bash[43999]: debug 2026-03-10T10:50:26.562+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6811 osd.2 since back 2026-03-10T10:50:00.169337+0000 front 2026-03-10T10:50:00.169294+0000 (oldest deadline 2026-03-10T10:50:25.469149+0000) 2026-03-10T10:50:28.016 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:27 vm07 bash[43999]: debug 2026-03-10T10:50:27.610+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:28.016 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:27 vm07 bash[43999]: debug 2026-03-10T10:50:27.610+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:54.869136+0000 front 2026-03-10T10:49:54.869048+0000 (oldest deadline 2026-03-10T10:50:20.168947+0000) 
2026-03-10T10:50:28.016 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:27 vm07 bash[43999]: debug 2026-03-10T10:50:27.610+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6811 osd.2 since back 2026-03-10T10:50:00.169337+0000 front 2026-03-10T10:50:00.169294+0000 (oldest deadline 2026-03-10T10:50:25.469149+0000) 2026-03-10T10:50:29.016 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:28 vm07 bash[43999]: debug 2026-03-10T10:50:28.638+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:29.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:28 vm07 bash[43999]: debug 2026-03-10T10:50:28.638+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:54.869136+0000 front 2026-03-10T10:49:54.869048+0000 (oldest deadline 2026-03-10T10:50:20.168947+0000) 2026-03-10T10:50:29.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:28 vm07 bash[43999]: debug 2026-03-10T10:50:28.638+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6811 osd.2 since back 2026-03-10T10:50:00.169337+0000 front 2026-03-10T10:50:00.169294+0000 (oldest deadline 2026-03-10T10:50:25.469149+0000) 2026-03-10T10:50:30.016 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:29 vm07 bash[43999]: debug 2026-03-10T10:50:29.602+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6803 osd.0 since back 2026-03-10T10:49:50.769081+0000 front 2026-03-10T10:49:50.768910+0000 (oldest deadline 2026-03-10T10:50:14.868756+0000) 2026-03-10T10:50:30.016 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:29 vm07 bash[43999]: debug 2026-03-10T10:50:29.602+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6807 osd.1 since back 2026-03-10T10:49:54.869136+0000 front 2026-03-10T10:49:54.869048+0000 (oldest deadline 2026-03-10T10:50:20.168947+0000) 2026-03-10T10:50:30.017 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:29 vm07 bash[43999]: debug 2026-03-10T10:50:29.602+0000 7faa16f55640 -1 osd.7 783 heartbeat_check: no reply from 192.168.123.104:6811 osd.2 since back 2026-03-10T10:50:00.169337+0000 front 2026-03-10T10:50:00.169294+0000 (oldest deadline 2026-03-10T10:50:25.469149+0000) 2026-03-10T10:50:30.700 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 10:50:30 vm07 bash[57163]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-osd-7 2026-03-10T10:50:30.729 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@osd.7.service' 2026-03-10T10:50:30.739 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:50:30.739 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-10T10:50:30.739 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a... 2026-03-10T10:50:30.739 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@rgw.foo.a 2026-03-10T10:50:31.202 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:50:30 vm04 systemd[1]: Stopping Ceph rgw.foo.a for e4c1c9d6-1c68-11f1-a9bd-116050875839... 
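prometheus.a, stopped shortly below, had been logging "Unable to refresh target groups" ever since the mgrs went down: port 8765 on 192.168.123.104 is the cephadm service-discovery endpoint served by the active mgr, so once both mgrs are stopped every http_sd refresh (nvmeof, ceph-exporter, nfs, node-exporter, mgr-prometheus, alertmanager) fails with connection refused until Prometheus itself is torn down. A sketch of probing that endpoint, assuming the host and port from this run:

    import json
    import urllib.request

    # the cephadm mgr serves Prometheus http_sd target lists here while it runs
    URL = "http://192.168.123.104:8765/sd/prometheus/sd-config?service=node-exporter"

    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            # when healthy: a list of {"targets": [...], "labels": {...}} groups
            print(json.load(resp))
    except OSError as exc:
        # "connection refused" once the active mgr has been stopped
        print(f"service discovery unavailable: {exc}")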
2026-03-10T10:50:31.202 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:50:30 vm04 bash[53425]: debug 2026-03-10T10:50:30.780+0000 7fbe8ee73640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T10:50:31.203 INFO:journalctl@ceph.rgw.foo.a.vm04.stdout:Mar 10 10:50:30 vm04 bash[53425]: debug 2026-03-10T10:50:30.780+0000 7fbe926e2980 -1 shutting down 2026-03-10T10:50:40.853 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@rgw.foo.a.service' 2026-03-10T10:50:40.863 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:50:40.863 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a 2026-03-10T10:50:40.863 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 2026-03-10T10:50:40.863 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@prometheus.a 2026-03-10T10:50:40.968 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@prometheus.a.service' 2026-03-10T10:50:40.979 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T10:50:40.979 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-10T10:50:40.979 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 --force --keep-logs 2026-03-10T10:50:41.069 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:50:45.952 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:50:45 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:45.953 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:50:45 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:46.217 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:50:45 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:46.217 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:50:45 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:46.512 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:46.513 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:46.513 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: Stopping Ceph alertmanager.a for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:50:46.513 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:50:46 vm04 bash[56561]: ts=2026-03-10T10:50:46.342Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-10T10:50:46.513 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:50:46 vm04 bash[133241]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-alertmanager-a 2026-03-10T10:50:46.513 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@alertmanager.a.service: Deactivated successfully. 2026-03-10T10:50:46.513 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: Stopped Ceph alertmanager.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:50:46.834 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:46.835 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: Stopping Ceph node-exporter.a for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:50:46.835 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:50:46 vm04 bash[133386]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-node-exporter-a 2026-03-10T10:50:46.835 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-10T10:50:46.835 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-10T10:50:46.835 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: Stopped Ceph node-exporter.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 
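With the core daemons down, the task removes the cluster from each host in turn: cephadm rm-cluster --fsid <fsid> --force --keep-logs deletes the cluster's systemd units and its state under /var/lib/ceph/<fsid>, while --keep-logs leaves /var/log/ceph in place for the archive and compress steps below. The remaining monitoring daemons (alertmanager, node-exporter, grafana, iscsi) are stopped by rm-cluster itself rather than one by one, which is why their stop messages appear after the command rather than under a tasks.cephadm "Stopping ..." line. The same call as a sketch, assuming the cephadm binary teuthology staged under /home/ubuntu/cephtest:

    import subprocess

    FSID = "e4c1c9d6-1c68-11f1-a9bd-116050875839"

    for host in ("vm04", "vm07"):
        # --force skips confirmation; --keep-logs preserves /var/log/ceph so
        # the job can archive and gzip the logs afterwards
        subprocess.run(
            ["ssh", host, "sudo", "/home/ubuntu/cephtest/cephadm",
             "rm-cluster", "--fsid", FSID, "--force", "--keep-logs"],
            check=True,
        )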
2026-03-10T10:50:46.835 INFO:journalctl@ceph.alertmanager.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:47.093 INFO:journalctl@ceph.node-exporter.a.vm04.stdout:Mar 10 10:50:46 vm04 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:48.304 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 --force --keep-logs 2026-03-10T10:50:48.400 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:50:53.245 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:53.245 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:53.245 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:53.498 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:53.498 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
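The repeated KillMode=none warning is cosmetic: the unit template cephadm writes for every daemon disables systemd's own process cleanup because the daemon is a container whose stop is handled by the unit's own stop command (the bare container-name lines such as "ceph-<fsid>-alertmanager-a" are the container runtime echoing the name of the container it stopped). systemd prints the deprecation warning every time it re-reads the unit, once per daemon per state change, which is why it floods the teardown. A sketch that lists the offending templates on a host, assuming cephadm's usual unit location:

    from pathlib import Path

    def units_with_killmode_none(unit_dir="/etc/systemd/system"):
        # list cephadm unit templates still carrying deprecated KillMode=none
        return [unit.name
                for unit in Path(unit_dir).glob("ceph-*@.service")
                if "KillMode=none" in unit.read_text()]

    print(units_with_killmode_none())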
2026-03-10T10:50:53.498 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:53.498 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:53.498 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:53.498 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:53.766 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:53.766 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:53.766 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:54.046 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:54.046 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:50:54 vm07 systemd[1]: Stopping Ceph iscsi.iscsi.a for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:50:54.046 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:54.046 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:50:53 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:50:54.516 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:50:54 vm07 bash[48477]: debug Shutdown received 2026-03-10T10:51:04.317 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:51:04 vm07 bash[57656]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-iscsi-iscsi-a 2026-03-10T10:51:04.317 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a 2026-03-10T10:51:04.317 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@iscsi.iscsi.a.service: Failed with result 'exit-code'. 2026-03-10T10:51:04.317 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: Stopped Ceph iscsi.iscsi.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:51:04.317 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:04.317 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:04.597 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:04.598 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:04.598 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:04.598 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:04.598 INFO:journalctl@ceph.iscsi.iscsi.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:04.598 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:04.598 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: Stopping Ceph grafana.a for e4c1c9d6-1c68-11f1-a9bd-116050875839... 
2026-03-10T10:51:04.867 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 bash[50688]: logger=server t=2026-03-10T10:51:04.668973216Z level=info msg="Shutdown started" reason="System signal: terminated" 2026-03-10T10:51:04.867 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 bash[50688]: logger=tracing t=2026-03-10T10:51:04.669070479Z level=info msg="Closing tracing" 2026-03-10T10:51:04.867 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 bash[50688]: logger=ticker t=2026-03-10T10:51:04.669571758Z level=info msg=stopped last_tick=2026-03-10T10:51:00Z 2026-03-10T10:51:04.867 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 bash[50688]: logger=grafana-apiserver t=2026-03-10T10:51:04.669726387Z level=info msg="StorageObjectCountTracker pruner is exiting" 2026-03-10T10:51:04.867 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 bash[57824]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-grafana-a 2026-03-10T10:51:04.867 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@grafana.a.service: Deactivated successfully. 2026-03-10T10:51:04.867 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: Stopped Ceph grafana.a for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:51:05.143 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:05.143 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:51:05 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:05.143 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:51:05 vm07 systemd[1]: Stopping Ceph node-exporter.b for e4c1c9d6-1c68-11f1-a9bd-116050875839... 2026-03-10T10:51:05.143 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 10:51:04 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:05.411 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:51:05 vm07 bash[57989]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839-node-exporter-b 2026-03-10T10:51:05.411 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:51:05 vm07 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-10T10:51:05.411 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:51:05 vm07 systemd[1]: ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@node-exporter.b.service: Failed with result 'exit-code'. 
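The "Failed with result 'exit-code'" lines for node-exporter and (earlier) iscsi are equally benign: by shell convention a process killed by signal N exits with status 128+N, so status=143 is SIGTERM (128+15) and status=137 is SIGKILL (128+9). Daemons that trap SIGTERM, like grafana and alertmanager above, exit 0 and systemd reports "Deactivated successfully"; the exporters simply die on the signal. The arithmetic:

    import signal

    # shell convention: exit status 128 + N means "killed by signal N"
    assert 128 + signal.SIGTERM == 143  # node-exporter.a/.b: status=143/n/a
    assert 128 + signal.SIGKILL == 137  # iscsi.iscsi.a: status=137/n/a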
2026-03-10T10:51:05.411 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:51:05 vm07 systemd[1]: Stopped Ceph node-exporter.b for e4c1c9d6-1c68-11f1-a9bd-116050875839. 2026-03-10T10:51:05.411 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 10:51:05 vm07 systemd[1]: /etc/systemd/system/ceph-e4c1c9d6-1c68-11f1-a9bd-116050875839@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T10:51:06.133 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T10:51:06.139 INFO:teuthology.orchestra.run.vm04.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-10T10:51:06.140 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T10:51:06.140 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T10:51:06.146 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T10:51:06.147 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/995/remote/vm04/crash 2026-03-10T10:51:06.147 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/crash -- . 2026-03-10T10:51:06.189 INFO:teuthology.orchestra.run.vm04.stderr:tar: /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/crash: Cannot open: No such file or directory 2026-03-10T10:51:06.189 INFO:teuthology.orchestra.run.vm04.stderr:tar: Error is not recoverable: exiting now 2026-03-10T10:51:06.190 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/995/remote/vm07/crash 2026-03-10T10:51:06.190 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/crash -- . 2026-03-10T10:51:06.196 INFO:teuthology.orchestra.run.vm07.stderr:tar: /var/lib/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/crash: Cannot open: No such file or directory 2026-03-10T10:51:06.196 INFO:teuthology.orchestra.run.vm07.stderr:tar: Error is not recoverable: exiting now 2026-03-10T10:51:06.197 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-10T10:51:06.197 DEBUG:teuthology.orchestra.run.vm04:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'reached quota' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(POOL_FULL\)' | egrep -v '\(SMALLER_PGP_NUM\)' | egrep -v '\(CACHE_POOL_NO_HIT_SET\)' | egrep -v '\(CACHE_POOL_NEAR_FULL\)' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(PG_AVAILABILITY\)' | egrep -v '\(PG_DEGRADED\)' | egrep -v CEPHADM_STRAY_DAEMON | head -n 1 2026-03-10T10:51:06.248 INFO:tasks.cephadm:Compressing logs... 
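"Checking cluster log for badness" is the job's pass/fail gate on the cluster log. (The tar errors just above it are harmless: /var/lib/ceph/<fsid>/crash does not exist because no daemon crashed, so there is nothing to archive; likewise the earlier rm -f exits 1 because /etc/ceph/ceph.client.admin.keyring is a directory on vm04, and the run simply continues.) The egrep chain selects [ERR]/[WRN]/[SEC] entries, keeps only those matching the job's log-only-match pattern (CEPHADM_), drops everything on the ignorelist, and takes the first survivor; any survivor fails the run. The same filter as a Python sketch, with the patterns copied from the command above:

    import re

    ONLY_MATCH = [r"CEPHADM_"]
    IGNORELIST = [
        r"\(MDS_ALL_DOWN\)", r"\(MDS_UP_LESS_THAN_MAX\)", r"reached quota",
        r"but it is still running", r"overall HEALTH_", r"\(POOL_FULL\)",
        r"\(SMALLER_PGP_NUM\)", r"\(CACHE_POOL_NO_HIT_SET\)",
        r"\(CACHE_POOL_NEAR_FULL\)", r"\(POOL_APP_NOT_ENABLED\)",
        r"\(PG_AVAILABILITY\)", r"\(PG_DEGRADED\)", r"CEPHADM_STRAY_DAEMON",
    ]

    def first_badness(log_lines):
        # return the first unignorable [ERR]/[WRN]/[SEC] entry, or None
        for line in log_lines:
            if not re.search(r"\[ERR\]|\[WRN\]|\[SEC\]", line):
                continue
            if not any(re.search(p, line) for p in ONLY_MATCH):
                continue
            if any(re.search(p, line) for p in IGNORELIST):
                continue
            return line
        return None

The compression step that follows explains its own interleaved output: find ... -print0 | xargs --max-args=1 --max-procs=0 ... gzip -5 --verbose starts one gzip per log file with no cap on parallelism, so the --verbose reports from concurrent gzips mix freely on stderr (fragments like "92.5% -- replaced with" landing inside another file's line are interleaving, not corruption). An equivalent sketch with a bounded pool instead of --max-procs=0:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    def compress_logs(root="/var/log/ceph", level=5, workers=8):
        # gzip every *.log under root a few files at a time; unlike
        # xargs --max-procs=0 this bounds the number of parallel gzips
        files = [str(p) for p in Path(root).rglob("*.log")]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(
                lambda f: subprocess.run(["gzip", f"-{level}", "--", f],
                                         check=True),
                files,
            ))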
2026-03-10T10:51:06.248 DEBUG:teuthology.orchestra.run.vm04:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T10:51:06.290 DEBUG:teuthology.orchestra.run.vm07:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T10:51:06.297 INFO:teuthology.orchestra.run.vm07.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T10:51:06.297 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T10:51:06.297 INFO:teuthology.orchestra.run.vm04.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T10:51:06.298 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.3.log 2026-03-10T10:51:06.298 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.log 2026-03-10T10:51:06.299 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T10:51:06.299 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mgr.x.log 2026-03-10T10:51:06.299 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.log 2026-03-10T10:51:06.300 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mon.b.log 2026-03-10T10:51:06.302 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mon.c.log 2026-03-10T10:51:06.302 INFO:teuthology.orchestra.run.vm04.stderr: 92.5% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T10:51:06.303 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.log: /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.1.log 2026-03-10T10:51:06.303 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mgr.y.log 2026-03-10T10:51:06.306 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mgr.x.log: 93.1%gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.5.log 2026-03-10T10:51:06.306 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mon.b.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.7.log 2026-03-10T10:51:06.310 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.log: 88.1% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.log.gz 2026-03-10T10:51:06.310 INFO:teuthology.orchestra.run.vm07.stderr: 90.7% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T10:51:06.310 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.5.log: -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mgr.x.log.gz 2026-03-10T10:51:06.311 
INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.6.log 2026-03-10T10:51:06.315 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.7.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.audit.log 2026-03-10T10:51:06.322 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.1.log: 93.4% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.log.gz 2026-03-10T10:51:06.322 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mon.a.log 2026-03-10T10:51:06.327 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-volume.log 2026-03-10T10:51:06.330 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.2.log 2026-03-10T10:51:06.330 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.audit.log 2026-03-10T10:51:06.334 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-volume.log 2026-03-10T10:51:06.335 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.audit.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.cephadm.log 2026-03-10T10:51:06.338 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.audit.log: /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-client.rgw.foo.a.log 2026-03-10T10:51:06.339 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-volume.log: 92.3% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.audit.log.gz 2026-03-10T10:51:06.347 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.4.log 2026-03-10T10:51:06.350 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.cephadm.log 2026-03-10T10:51:06.351 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.cephadm.log: 80.1% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.cephadm.log.gz 2026-03-10T10:51:06.351 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/tcmu-runner.log 2026-03-10T10:51:06.358 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-client.rgw.foo.a.log: gzip -5 --verbose -- /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.0.log 2026-03-10T10:51:06.362 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.cephadm.log: 88.6% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.cephadm.log.gz 2026-03-10T10:51:06.362 
INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.4.log: /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/tcmu-runner.log: 73.5% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/tcmu-runner.log.gz 2026-03-10T10:51:06.371 INFO:teuthology.orchestra.run.vm07.stderr: 95.8% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-volume.log.gz 2026-03-10T10:51:06.382 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.0.log: 95.2% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph.audit.log.gz 2026-03-10T10:51:06.387 INFO:teuthology.orchestra.run.vm04.stderr: 95.8% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-volume.log.gz 2026-03-10T10:51:06.486 INFO:teuthology.orchestra.run.vm04.stderr: 94.4% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-client.rgw.foo.a.log.gz 2026-03-10T10:51:07.258 INFO:teuthology.orchestra.run.vm04.stderr: 91.2% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mgr.y.log.gz 2026-03-10T10:51:07.921 INFO:teuthology.orchestra.run.vm07.stderr: 92.8% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mon.b.log.gz 2026-03-10T10:51:08.766 INFO:teuthology.orchestra.run.vm04.stderr: 93.2% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mon.c.log.gz 2026-03-10T10:51:10.736 INFO:teuthology.orchestra.run.vm04.stderr: 92.0% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-mon.a.log.gz 2026-03-10T10:51:16.830 INFO:teuthology.orchestra.run.vm07.stderr: 94.7% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.6.log.gz 2026-03-10T10:51:16.931 INFO:teuthology.orchestra.run.vm07.stderr: 94.7% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.4.log.gz 2026-03-10T10:51:16.937 INFO:teuthology.orchestra.run.vm07.stderr: 94.7% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.5.log.gz 2026-03-10T10:51:17.325 INFO:teuthology.orchestra.run.vm07.stderr: 94.7% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.7.log.gz 2026-03-10T10:51:17.327 INFO:teuthology.orchestra.run.vm07.stderr: 2026-03-10T10:51:17.327 INFO:teuthology.orchestra.run.vm07.stderr:real 0m11.034s 2026-03-10T10:51:17.327 INFO:teuthology.orchestra.run.vm07.stderr:user 0m20.311s 2026-03-10T10:51:17.327 INFO:teuthology.orchestra.run.vm07.stderr:sys 0m1.303s 2026-03-10T10:51:17.550 INFO:teuthology.orchestra.run.vm04.stderr: 94.7% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.2.log.gz 2026-03-10T10:51:17.797 INFO:teuthology.orchestra.run.vm04.stderr: 94.8% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.0.log.gz 2026-03-10T10:51:17.807 INFO:teuthology.orchestra.run.vm04.stderr: 94.6% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.3.log.gz 2026-03-10T10:51:17.887 INFO:teuthology.orchestra.run.vm04.stderr: 94.7% -- replaced with /var/log/ceph/e4c1c9d6-1c68-11f1-a9bd-116050875839/ceph-osd.1.log.gz 2026-03-10T10:51:17.888 INFO:teuthology.orchestra.run.vm04.stderr: 2026-03-10T10:51:17.888 INFO:teuthology.orchestra.run.vm04.stderr:real 0m11.596s 2026-03-10T10:51:17.888 INFO:teuthology.orchestra.run.vm04.stderr:user 0m21.709s 2026-03-10T10:51:17.888 INFO:teuthology.orchestra.run.vm04.stderr:sys 0m1.372s 2026-03-10T10:51:17.888 
INFO:tasks.cephadm:Archiving logs... 2026-03-10T10:51:17.888 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/995/remote/vm04/log 2026-03-10T10:51:17.888 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T10:51:18.768 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/995/remote/vm07/log 2026-03-10T10:51:18.768 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T10:51:19.555 INFO:tasks.cephadm:Removing cluster... 2026-03-10T10:51:19.555 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 --force 2026-03-10T10:51:19.645 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:51:20.919 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid e4c1c9d6-1c68-11f1-a9bd-116050875839 --force 2026-03-10T10:51:21.008 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: e4c1c9d6-1c68-11f1-a9bd-116050875839 2026-03-10T10:51:22.277 INFO:tasks.cephadm:Removing cephadm ... 2026-03-10T10:51:22.277 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T10:51:22.281 DEBUG:teuthology.orchestra.run.vm07:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T10:51:22.284 INFO:tasks.cephadm:Teardown complete 2026-03-10T10:51:22.284 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-10T10:51:22.287 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-10T10:51:22.287 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T10:51:22.322 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T10:51:22.340 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 2026-03-10T10:51:22.340 DEBUG:teuthology.orchestra.run.vm04:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-10T10:51:22.345 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 
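The interleaved gzip output above is expected: xargs --max-procs=0 launches as many single-file gzip processes as it can, and their --verbose messages from both hosts land in the log as they arrive. The log archiving that follows never writes a tarball on the remote host; tar streams to stdout and teuthology unpacks the stream under the job's archive directory. A rough shell equivalent of both steps, with the ssh invocation and local path as illustrative stand-ins for teuthology's own connections:

    # 1. Parallel compression, one gzip per log file, as run on each remote.
    sudo find /var/log/ceph -name '*.log' -print0 |
        sudo xargs -0 --max-args=1 --max-procs=0 --no-run-if-empty gzip -5
    # 2. Streamed transfer: remote tar writes to stdout, local tar extracts.
    ssh vm04 'sudo tar c -f - -C /var/log/ceph -- .' | tar x -C remote/vm04/log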
2026-03-10T10:51:22.345 DEBUG:teuthology.orchestra.run.vm07:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-10T10:51:22.405 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:22.414 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T10:51:22.596 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T10:51:22.597 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T10:51:22.598 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-10T10:51:22.599 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:22.759 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:22.759 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T10:51:22.760 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T10:51:22.760 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:22.772 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-10T10:51:22.772 INFO:teuthology.orchestra.run.vm07.stdout: ceph* 2026-03-10T10:51:22.776 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:22.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T10:51:22.777 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T10:51:22.777 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:22.792 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-10T10:51:22.793 INFO:teuthology.orchestra.run.vm04.stdout: ceph* 2026-03-10T10:51:23.016 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T10:51:23.016 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-10T10:51:23.017 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T10:51:23.017 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-10T10:51:23.057 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 
2026-03-10T10:51:23.057 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-10T10:51:23.059 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:23.059 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:24.192 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:24.203 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:24.227 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T10:51:24.237 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:24.412 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T10:51:24.413 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T10:51:24.416 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-10T10:51:24.417 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:24.571 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:24.571 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T10:51:24.572 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-10T10:51:24.572 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:24.584 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:24.584 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T10:51:24.584 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-10T10:51:24.584 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:24.585 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-10T10:51:24.586 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm* cephadm* 2026-03-10T10:51:24.597 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-10T10:51:24.598 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm* cephadm* 2026-03-10T10:51:24.757 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-10T10:51:24.757 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 1775 kB disk space will be freed. 
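Every apt-get round in this teardown prints "W: --force-yes is deprecated" because the removal loop still passes that pre-apt-1.1 option, which has since been split into the --allow-* family; for removals, plain -y is normally sufficient. A sketch of the same per-package purge loop with the deprecated flag dropped and the package list abridged:

    # Purge each package on its own, tolerating ones that are not installed.
    for d in ceph cephadm ceph-mds ceph-mgr ceph-common; do  # abridged list
        sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
            -o Dpkg::Options::="--force-confdef" \
            -o Dpkg::Options::="--force-confold" \
            purge "$d" || true
    done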
2026-03-10T10:51:24.765 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-10T10:51:24.765 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-10T10:51:24.799 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-10T10:51:24.801 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:24.808 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-10T10:51:24.810 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:24.820 INFO:teuthology.orchestra.run.vm07.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:24.828 INFO:teuthology.orchestra.run.vm04.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:24.850 INFO:teuthology.orchestra.run.vm07.stdout:Looking for files to backup/remove ... 2026-03-10T10:51:24.851 INFO:teuthology.orchestra.run.vm07.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 2026-03-10T10:51:24.854 INFO:teuthology.orchestra.run.vm07.stdout:Removing user `cephadm' ... 2026-03-10T10:51:24.854 INFO:teuthology.orchestra.run.vm07.stdout:Warning: group `nogroup' has no more members. 2026-03-10T10:51:24.857 INFO:teuthology.orchestra.run.vm04.stdout:Looking for files to backup/remove ... 2026-03-10T10:51:24.859 INFO:teuthology.orchestra.run.vm04.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 2026-03-10T10:51:24.861 INFO:teuthology.orchestra.run.vm04.stdout:Removing user `cephadm' ... 2026-03-10T10:51:24.861 INFO:teuthology.orchestra.run.vm04.stdout:Warning: group `nogroup' has no more members. 2026-03-10T10:51:24.865 INFO:teuthology.orchestra.run.vm07.stdout:Done. 2026-03-10T10:51:24.869 INFO:teuthology.orchestra.run.vm04.stdout:Done. 2026-03-10T10:51:24.889 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T10:51:24.893 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T10:51:24.983 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 
20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-10T10:51:24.984 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:24.991 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-10T10:51:24.993 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:26.030 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:26.047 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:26.066 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T10:51:26.080 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:26.223 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T10:51:26.223 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T10:51:26.256 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-10T10:51:26.256 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:26.359 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:26.360 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T10:51:26.360 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-10T10:51:26.360 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 
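The "no longer required" lists grow from round to round because each purge strips another reverse dependency while the auto-installed helpers (kpartx, sg3-utils, the python3-* stack) stay behind; the teardown itself never runs autoremove. If one did want to follow apt's suggestion, the one-liner is:

    # Drop auto-installed leftovers and their config files (illustrative;
    # this teardown does not run it).
    sudo apt-get -y autoremove --purge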
2026-03-10T10:51:26.371 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-10T10:51:26.372 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds* 2026-03-10T10:51:26.403 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:26.403 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T10:51:26.404 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-10T10:51:26.404 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:26.411 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-10T10:51:26.412 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds* 2026-03-10T10:51:26.535 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T10:51:26.535 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 7437 kB disk space will be freed. 2026-03-10T10:51:26.570 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T10:51:26.570 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 7437 kB disk space will be freed. 2026-03-10T10:51:26.574 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-10T10:51:26.576 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:26.608 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-10T10:51:26.610 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:27.019 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T10:51:27.027 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T10:51:27.120 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 
45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-10T10:51:27.122 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:27.127 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-10T10:51:27.129 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:28.555 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:28.588 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:28.625 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:28.658 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T10:51:28.772 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-10T10:51:28.773 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:28.845 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T10:51:28.845 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 
2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev 2026-03-10T10:51:28.894 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-10T10:51:28.903 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-10T10:51:28.904 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local* 2026-03-10T10:51:28.904 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-k8sevents* 2026-03-10T10:51:28.985 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:28.985 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0 2026-03-10T10:51:28.985 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc 2026-03-10T10:51:28.985 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot 2026-03-10T10:51:28.985 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev 2026-03-10T10:51:28.986 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:28.995 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-10T10:51:28.995 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local* 2026-03-10T10:51:28.996 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-k8sevents* 2026-03-10T10:51:29.069 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded. 2026-03-10T10:51:29.069 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 165 MB disk space will be freed. 2026-03-10T10:51:29.104 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 
95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-10T10:51:29.106 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:29.117 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:29.136 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:29.163 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded. 2026-03-10T10:51:29.163 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 165 MB disk space will be freed. 2026-03-10T10:51:29.167 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:29.198 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-10T10:51:29.200 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:29.212 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:29.233 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:29.264 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:29.622 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117937 files and directories currently installed.) 2026-03-10T10:51:29.624 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:29.769 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 
117937 files and directories currently installed.) 2026-03-10T10:51:29.771 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:31.066 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:31.100 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:31.266 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:31.292 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-10T10:51:31.293 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:31.300 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T10:51:31.426 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:31.426 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:31.426 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T10:51:31.427 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-10T10:51:31.436 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-10T10:51:31.436 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw* 2026-03-10T10:51:31.496 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T10:51:31.496 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T10:51:31.600 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-10T10:51:31.600 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 472 MB disk space will be freed. 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T10:51:31.620 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:31.631 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-10T10:51:31.632 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw* 2026-03-10T10:51:31.643 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 
30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117937 files and directories currently installed.) 2026-03-10T10:51:31.645 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:31.701 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:31.800 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-10T10:51:31.800 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 472 MB disk space will be freed. 2026-03-10T10:51:31.835 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117937 files and directories currently installed.) 2026-03-10T10:51:31.838 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:31.896 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:32.119 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:32.347 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:32.544 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:32.799 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:32.969 INFO:teuthology.orchestra.run.vm04.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:33.219 INFO:teuthology.orchestra.run.vm07.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:33.419 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:33.439 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:33.666 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:33.700 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:33.850 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T10:51:33.882 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T10:51:33.955 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 
35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117455 files and directories currently installed.) 2026-03-10T10:51:33.957 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:34.159 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T10:51:34.191 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T10:51:34.256 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117455 files and directories currently installed.) 2026-03-10T10:51:34.258 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:34.561 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:34.873 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:34.985 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:35.283 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:35.395 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:35.707 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:35.783 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:36.157 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:37.286 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:37.319 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:37.502 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-10T10:51:37.503 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:37.612 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:37.647 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 
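The trailing '*' on entries like ceph-base* in "The following packages will be REMOVED" is apt's marker for a purge rather than a plain removal, which is why each round ends with "Purging configuration files for ..." records: they belong to the same transaction. A package removed without purge would instead linger in dpkg state 'rc' (removed, conffiles kept); a quick check, as a sketch:

    # List packages removed but not purged (state "rc").
    dpkg -l | awk '/^rc/ {print $2}'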
2026-03-10T10:51:37.666 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:37.666 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:37.666 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T10:51:37.667 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:37.677 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-10T10:51:37.678 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse*
2026-03-10T10:51:37.832 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:51:37.833 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:51:37.837 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T10:51:37.838 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-10T10:51:37.875 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-10T10:51:37.878 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T10:51:37.957 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:37.966 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T10:51:37.966 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse*
2026-03-10T10:51:38.127 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T10:51:38.127 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-10T10:51:38.163 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-10T10:51:38.165 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:38.289 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T10:51:38.387 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T10:51:38.389 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:38.622 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T10:51:38.712 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T10:51:38.714 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:39.888 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:39.921 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:51:40.094 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-10T10:51:40.095 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-10T10:51:40.141 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:40.174 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T10:51:40.218 INFO:teuthology.orchestra.run.vm04.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-10T10:51:40.218 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:40.218 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:40.218 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T10:51:40.219 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:40.239 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:51:40.239 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:40.271 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:51:40.362 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:51:40.363 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:51:40.462 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-10T10:51:40.463 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T10:51:40.479 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T10:51:40.480 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T10:51:40.480 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T10:51:40.480 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:40.498 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:51:40.498 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:40.529 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
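The "Package 'ceph-test' is not installed, so not removed" lines show the uninstall step driving a fixed package list through one apt-get purge per package, so already-absent packages are a harmless no-op (apt prints the notice and exits 0). A rough, self-contained sketch of that loop; the package list and helper names are illustrative, not teuthology's actual ones:

    import subprocess

    # Illustrative subset of the Debian packages purged in this log.
    PACKAGES = ['ceph-test', 'ceph-volume', 'radosgw', 'ceph-fuse',
                'python3-rgw', 'python3-cephfs', 'python3-rbd']

    def purge_one(host: str, pkg: str) -> None:
        # One apt-get invocation per package, mirroring the log; a package
        # that is not installed just prints "... not removed" and succeeds.
        subprocess.run(['ssh', host, 'sudo', 'apt-get', '-y', 'purge', pkg],
                       check=True)

    # for pkg in PACKAGES:
    #     purge_one('vm04.local', pkg)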
2026-03-10T10:51:40.598 INFO:teuthology.orchestra.run.vm04.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-10T10:51:40.598 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:40.598 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:40.598 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T10:51:40.599 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:40.616 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:51:40.616 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:40.647 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:51:40.718 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:51:40.719 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:51:40.840 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-10T10:51:40.841 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T10:51:40.842 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T10:51:40.843 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T10:51:40.843 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T10:51:40.843 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T10:51:40.843 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T10:51:40.843 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:40.863 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:51:40.863 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:40.895 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T10:51:40.997 INFO:teuthology.orchestra.run.vm04.stdout:Package 'radosgw' is not installed, so not removed
2026-03-10T10:51:40.997 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:40.997 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:40.997 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T10:51:40.998 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:41.015 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:51:41.015 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:41.048 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:51:41.102 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:51:41.102 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:51:41.254 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-10T10:51:41.255 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-10T10:51:41.264 INFO:teuthology.orchestra.run.vm07.stdout:Package 'radosgw' is not installed, so not removed
2026-03-10T10:51:41.264 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:41.264 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:41.264 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T10:51:41.265 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:41.286 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:51:41.286 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:41.318 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T10:51:41.448 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:41.448 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:41.448 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip
2026-03-10T10:51:41.449 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:41.460 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-10T10:51:41.460 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-10T10:51:41.529 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:51:41.529 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:51:41.633 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-10T10:51:41.633 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-10T10:51:41.675 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T10:51:41.678 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:41.692 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:41.703 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:41.709 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:41.709 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:41.709 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T10:51:41.709 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T10:51:41.710 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:51:41.711 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T10:51:41.711 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip
2026-03-10T10:51:41.711 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:41.723 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T10:51:41.724 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-10T10:51:41.894 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-10T10:51:41.894 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-10T10:51:41.935 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T10:51:41.938 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:41.949 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:41.960 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:42.911 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:42.945 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:51:43.097 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:43.130 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T10:51:43.153 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-10T10:51:43.154 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
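In apt's removal listing, the trailing asterisk (python3-cephfs*, python3-rados*, python3-rgw*) marks a purge rather than a plain remove: configuration files are deleted along with the package, which is why matching "Purging configuration files for ..." lines accompany the removals in this log. One way to verify the resulting dpkg state afterwards; this check is hypothetical, not part of the run:

    import subprocess

    def dpkg_state(host: str, pkg: str) -> str:
        # '${db:Status-Abbrev}' is e.g. 'ii' (installed) or 'rc' (removed,
        # config files remain); a purged package is simply unknown to dpkg.
        out = subprocess.run(
            ['ssh', host, 'dpkg-query', '-W', '-f=${db:Status-Abbrev}', pkg],
            capture_output=True, text=True)
        return out.stdout.strip() or 'purged/not-installed'

    # print(dpkg_state('vm07.local', 'python3-cephfs'))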
2026-03-10T10:51:43.312 INFO:teuthology.orchestra.run.vm04.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-10T10:51:43.312 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:43.312 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:43.313 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T10:51:43.313 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T10:51:43.313 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:51:43.313 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip
2026-03-10T10:51:43.314 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:43.338 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:51:43.338 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:43.340 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:51:43.341 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:51:43.372 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:51:43.542 INFO:teuthology.orchestra.run.vm07.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-10T10:51:43.543 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:43.543 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:43.543 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T10:51:43.543 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T10:51:43.543 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:51:43.543 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:51:43.543 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:51:43.543 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip
2026-03-10T10:51:43.544 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:43.563 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:51:43.563 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:43.584 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-10T10:51:43.584 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-10T10:51:43.595 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T10:51:43.737 INFO:teuthology.orchestra.run.vm04.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-10T10:51:43.737 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:43.737 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T10:51:43.738 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T10:51:43.739 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T10:51:43.739 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:51:43.739 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T10:51:43.739 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip
2026-03-10T10:51:43.739 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:43.759 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:51:43.759 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:43.790 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:51:43.800 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:51:43.801 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:51:43.935 INFO:teuthology.orchestra.run.vm07.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-10T10:51:43.935 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:43.935 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:43.935 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T10:51:43.935 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip
2026-03-10T10:51:43.936 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:43.953 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T10:51:43.953 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:43.983 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-10T10:51:43.984 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-10T10:51:43.985 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T10:51:44.108 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:44.108 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:44.108 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T10:51:44.108 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip
2026-03-10T10:51:44.109 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:44.119 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-10T10:51:44.119 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd*
2026-03-10T10:51:44.174 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:51:44.174 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:51:44.281 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T10:51:44.281 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-10T10:51:44.290 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:44.290 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:44.290 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T10:51:44.290 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip
2026-03-10T10:51:44.291 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:44.301 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T10:51:44.301 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd*
2026-03-10T10:51:44.320 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117410 files and directories currently installed.)
2026-03-10T10:51:44.322 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:44.464 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T10:51:44.464 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-10T10:51:44.501 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117410 files and directories currently installed.)
2026-03-10T10:51:44.503 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T10:51:45.339 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:45.372 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists...
2026-03-10T10:51:45.539 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T10:51:45.548 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree...
2026-03-10T10:51:45.548 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information...
2026-03-10T10:51:45.573 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T10:51:45.679 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T10:51:45.679 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T10:51:45.679 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip
2026-03-10T10:51:45.680 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T10:51:45.690 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED:
2026-03-10T10:51:45.690 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-dev* libcephfs2*
2026-03-10T10:51:45.763 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T10:51:45.763 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T10:51:45.851 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-10T10:51:45.851 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-10T10:51:45.883 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:45.883 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:45.883 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T10:51:45.883 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117402 files and directories currently installed.) 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip 2026-03-10T10:51:45.884 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:45.885 INFO:teuthology.orchestra.run.vm04.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T10:51:45.894 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-10T10:51:45.895 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-dev* libcephfs2* 2026-03-10T10:51:45.896 INFO:teuthology.orchestra.run.vm04.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:45.921 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T10:51:46.056 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-10T10:51:46.056 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-10T10:51:46.095 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117402 files and directories currently installed.) 2026-03-10T10:51:46.097 INFO:teuthology.orchestra.run.vm07.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:46.108 INFO:teuthology.orchestra.run.vm07.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:46.130 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T10:51:46.979 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:47.011 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:47.175 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:47.188 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-10T10:51:47.189 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:47.208 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 
2026-03-10T10:51:47.323 INFO:teuthology.orchestra.run.vm04.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-10T10:51:47.323 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:47.323 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:47.323 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T10:51:47.323 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip 2026-03-10T10:51:47.324 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:47.342 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T10:51:47.342 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:47.375 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:47.402 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T10:51:47.403 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 
2026-03-10T10:51:47.523 INFO:teuthology.orchestra.run.vm07.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-10T10:51:47.523 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:47.523 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:47.523 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T10:51:47.523 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip 2026-03-10T10:51:47.524 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:47.541 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T10:51:47.541 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:47.570 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-10T10:51:47.570 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:47.572 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 
2026-03-10T10:51:47.700 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:47.701 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T10:51:47.702 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T10:51:47.702 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T10:51:47.702 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:47.712 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-10T10:51:47.712 INFO:teuthology.orchestra.run.vm04.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-10T10:51:47.712 INFO:teuthology.orchestra.run.vm04.stdout: qemu-block-extra* rbd-fuse* 2026-03-10T10:51:47.762 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T10:51:47.763 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 
2026-03-10T10:51:47.876 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-10T10:51:47.876 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-10T10:51:47.882 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:47.882 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:47.882 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T10:51:47.882 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T10:51:47.882 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T10:51:47.883 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-10T10:51:47.894 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-10T10:51:47.894 INFO:teuthology.orchestra.run.vm07.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-10T10:51:47.894 INFO:teuthology.orchestra.run.vm07.stdout: qemu-block-extra* rbd-fuse* 2026-03-10T10:51:47.913 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117387 files and directories currently installed.) 2026-03-10T10:51:47.915 INFO:teuthology.orchestra.run.vm04.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:47.925 INFO:teuthology.orchestra.run.vm04.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:47.936 INFO:teuthology.orchestra.run.vm04.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:47.947 INFO:teuthology.orchestra.run.vm04.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T10:51:48.054 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-10T10:51:48.054 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-10T10:51:48.090 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117387 files and directories currently installed.) 2026-03-10T10:51:48.092 INFO:teuthology.orchestra.run.vm07.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:48.104 INFO:teuthology.orchestra.run.vm07.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:48.116 INFO:teuthology.orchestra.run.vm07.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:48.126 INFO:teuthology.orchestra.run.vm07.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T10:51:48.371 INFO:teuthology.orchestra.run.vm04.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:48.383 INFO:teuthology.orchestra.run.vm04.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:48.394 INFO:teuthology.orchestra.run.vm04.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:48.419 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T10:51:48.452 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 
2026-03-10T10:51:48.526 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117336 files and directories currently installed.) 2026-03-10T10:51:48.529 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T10:51:48.560 INFO:teuthology.orchestra.run.vm07.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:48.571 INFO:teuthology.orchestra.run.vm07.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:48.584 INFO:teuthology.orchestra.run.vm07.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:48.608 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T10:51:48.641 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T10:51:48.714 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117336 files and directories currently installed.) 2026-03-10T10:51:48.717 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T10:51:50.017 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:50.052 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:50.237 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-10T10:51:50.238 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:50.252 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:50.285 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 
2026-03-10T10:51:50.375 INFO:teuthology.orchestra.run.vm04.stdout:Package 'librbd1' is not installed, so not removed 2026-03-10T10:51:50.375 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:50.375 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:50.375 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T10:51:50.375 INFO:teuthology.orchestra.run.vm04.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T10:51:50.375 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T10:51:50.376 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:50.394 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T10:51:50.394 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:50.426 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:50.484 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 
2026-03-10T10:51:50.485 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T10:51:50.607 INFO:teuthology.orchestra.run.vm07.stdout:Package 'librbd1' is not installed, so not removed 2026-03-10T10:51:50.607 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:50.607 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:50.607 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T10:51:50.607 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T10:51:50.607 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T10:51:50.607 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T10:51:50.608 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:50.625 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T10:51:50.625 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:50.633 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 
2026-03-10T10:51:50.634 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:50.657 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T10:51:50.769 INFO:teuthology.orchestra.run.vm04.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-10T10:51:50.769 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T10:51:50.770 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:50.789 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T10:51:50.789 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T10:51:50.790 DEBUG:teuthology.orchestra.run.vm04:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-10T10:51:50.847 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-10T10:51:50.849 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T10:51:50.850 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T10:51:50.923 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:51:51.011 INFO:teuthology.orchestra.run.vm07.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-10T10:51:51.011 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T10:51:51.012 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:51.012 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T10:51:51.012 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T10:51:51.012 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T10:51:51.012 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip 
2026-03-10T10:51:51.013 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T10:51:51.036 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T10:51:51.036 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:51.037 DEBUG:teuthology.orchestra.run.vm07:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-10T10:51:51.091 DEBUG:teuthology.orchestra.run.vm07:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-10T10:51:51.132 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-10T10:51:51.133 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-10T10:51:51.169 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T10:51:51.299 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-10T10:51:51.299 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:51.299 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T10:51:51.299 INFO:teuthology.orchestra.run.vm04.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T10:51:51.299 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 
2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T10:51:51.300 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T10:51:51.371 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T10:51:51.372 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T10:51:51.468 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-10T10:51:51.468 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 107 MB disk space will be freed. 2026-03-10T10:51:51.507 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117336 files and directories currently installed.) 2026-03-10T10:51:51.510 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:51.525 INFO:teuthology.orchestra.run.vm04.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-10T10:51:51.534 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-10T10:51:51.534 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T10:51:51.534 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T10:51:51.534 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T10:51:51.534 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 
2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T10:51:51.535 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T10:51:51.536 INFO:teuthology.orchestra.run.vm04.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-10T10:51:51.547 INFO:teuthology.orchestra.run.vm04.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T10:51:51.559 INFO:teuthology.orchestra.run.vm04.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T10:51:51.570 INFO:teuthology.orchestra.run.vm04.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T10:51:51.580 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T10:51:51.591 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T10:51:51.603 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T10:51:51.621 INFO:teuthology.orchestra.run.vm04.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T10:51:51.633 INFO:teuthology.orchestra.run.vm04.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T10:51:51.644 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T10:51:51.656 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T10:51:51.666 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T10:51:51.677 INFO:teuthology.orchestra.run.vm04.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T10:51:51.688 INFO:teuthology.orchestra.run.vm04.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-10T10:51:51.699 INFO:teuthology.orchestra.run.vm04.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T10:51:51.700 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-10T10:51:51.700 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 107 MB disk space will be freed. 2026-03-10T10:51:51.710 INFO:teuthology.orchestra.run.vm04.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T10:51:51.722 INFO:teuthology.orchestra.run.vm04.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-10T10:51:51.738 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117336 files and directories currently installed.) 
2026-03-10T10:51:51.740 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:51.755 INFO:teuthology.orchestra.run.vm04.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T10:51:51.756 INFO:teuthology.orchestra.run.vm07.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-10T10:51:51.767 INFO:teuthology.orchestra.run.vm04.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-10T10:51:51.768 INFO:teuthology.orchestra.run.vm07.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-10T10:51:51.778 INFO:teuthology.orchestra.run.vm04.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T10:51:51.779 INFO:teuthology.orchestra.run.vm07.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T10:51:51.789 INFO:teuthology.orchestra.run.vm04.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T10:51:51.792 INFO:teuthology.orchestra.run.vm07.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T10:51:51.800 INFO:teuthology.orchestra.run.vm04.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T10:51:51.802 INFO:teuthology.orchestra.run.vm07.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T10:51:51.811 INFO:teuthology.orchestra.run.vm04.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-10T10:51:51.813 INFO:teuthology.orchestra.run.vm07.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T10:51:51.823 INFO:teuthology.orchestra.run.vm04.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T10:51:51.824 INFO:teuthology.orchestra.run.vm07.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T10:51:51.834 INFO:teuthology.orchestra.run.vm04.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T10:51:51.835 INFO:teuthology.orchestra.run.vm07.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T10:51:51.846 INFO:teuthology.orchestra.run.vm04.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-10T10:51:51.854 INFO:teuthology.orchestra.run.vm07.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T10:51:51.854 INFO:teuthology.orchestra.run.vm04.stdout:update-initramfs: deferring update (trigger activated) 2026-03-10T10:51:51.864 INFO:teuthology.orchestra.run.vm07.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T10:51:51.865 INFO:teuthology.orchestra.run.vm04.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-10T10:51:51.875 INFO:teuthology.orchestra.run.vm07.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T10:51:51.883 INFO:teuthology.orchestra.run.vm04.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-10T10:51:51.886 INFO:teuthology.orchestra.run.vm07.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T10:51:51.895 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-any (27ubuntu1) ... 2026-03-10T10:51:51.896 INFO:teuthology.orchestra.run.vm07.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T10:51:51.905 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 
2026-03-10T10:51:51.907 INFO:teuthology.orchestra.run.vm07.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T10:51:51.917 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T10:51:51.918 INFO:teuthology.orchestra.run.vm07.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-10T10:51:51.929 INFO:teuthology.orchestra.run.vm07.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T10:51:51.931 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-10T10:51:51.939 INFO:teuthology.orchestra.run.vm07.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T10:51:51.949 INFO:teuthology.orchestra.run.vm04.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T10:51:51.950 INFO:teuthology.orchestra.run.vm07.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-10T10:51:51.974 INFO:teuthology.orchestra.run.vm07.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T10:51:51.985 INFO:teuthology.orchestra.run.vm07.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-10T10:51:51.997 INFO:teuthology.orchestra.run.vm07.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T10:51:52.008 INFO:teuthology.orchestra.run.vm07.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T10:51:52.017 INFO:teuthology.orchestra.run.vm07.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T10:51:52.027 INFO:teuthology.orchestra.run.vm07.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-10T10:51:52.038 INFO:teuthology.orchestra.run.vm07.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T10:51:52.049 INFO:teuthology.orchestra.run.vm07.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T10:51:52.060 INFO:teuthology.orchestra.run.vm07.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-10T10:51:52.067 INFO:teuthology.orchestra.run.vm07.stdout:update-initramfs: deferring update (trigger activated) 2026-03-10T10:51:52.077 INFO:teuthology.orchestra.run.vm07.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-10T10:51:52.096 INFO:teuthology.orchestra.run.vm07.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-10T10:51:52.106 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua-any (27ubuntu1) ... 2026-03-10T10:51:52.116 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-10T10:51:52.127 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T10:51:52.140 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-10T10:51:52.156 INFO:teuthology.orchestra.run.vm07.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T10:51:52.355 INFO:teuthology.orchestra.run.vm04.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T10:51:52.388 INFO:teuthology.orchestra.run.vm04.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T10:51:52.412 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T10:51:52.469 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-10T10:51:52.517 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-10T10:51:52.553 INFO:teuthology.orchestra.run.vm07.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 
2026-03-10T10:51:52.568 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-10T10:51:52.584 INFO:teuthology.orchestra.run.vm07.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T10:51:52.608 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T10:51:52.614 INFO:teuthology.orchestra.run.vm04.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T10:51:52.625 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T10:51:52.662 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-10T10:51:52.679 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T10:51:52.708 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-10T10:51:52.755 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-10T10:51:52.803 INFO:teuthology.orchestra.run.vm07.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T10:51:52.813 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T10:51:52.866 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T10:51:52.936 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-10T10:51:52.986 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-10T10:51:53.030 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:53.074 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:53.109 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-10T10:51:53.124 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-10T10:51:53.158 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-10T10:51:53.182 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T10:51:53.204 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:53.233 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-10T10:51:53.249 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T10:51:53.284 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-10T10:51:53.300 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-10T10:51:53.331 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-10T10:51:53.357 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T10:51:53.376 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-10T10:51:53.406 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-10T10:51:53.422 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-10T10:51:53.452 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 
2026-03-10T10:51:53.469 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-10T10:51:53.497 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-10T10:51:53.516 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T10:51:53.543 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-10T10:51:53.591 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-10T10:51:53.633 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T10:51:53.637 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-10T10:51:53.687 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T10:51:53.691 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-10T10:51:53.738 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T10:51:53.789 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-10T10:51:53.805 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T10:51:53.839 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T10:51:53.867 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-10T10:51:53.898 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-10T10:51:53.914 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T10:51:53.947 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-10T10:51:53.965 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-10T10:51:53.999 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-10T10:51:54.014 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T10:51:54.048 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T10:51:54.075 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-10T10:51:54.102 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-10T10:51:54.123 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-10T10:51:54.152 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T10:51:54.172 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-10T10:51:54.202 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rsa (4.8-1) ... 2026-03-10T10:51:54.221 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T10:51:54.252 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-10T10:51:54.273 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-10T10:51:54.299 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-10T10:51:54.325 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 
2026-03-10T10:51:54.353 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-10T10:51:54.374 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rsa (4.8-1) ... 2026-03-10T10:51:54.401 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T10:51:54.426 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T10:51:54.430 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-10T10:51:54.475 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-10T10:51:54.479 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-10T10:51:54.526 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T10:51:54.531 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-10T10:51:54.575 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T10:51:54.579 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T10:51:54.602 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T10:51:54.621 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T10:51:54.653 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-10T10:51:54.669 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-10T10:51:54.704 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T10:51:54.719 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T10:51:54.755 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T10:51:54.771 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-10T10:51:54.801 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T10:51:54.820 INFO:teuthology.orchestra.run.vm04.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-10T10:51:54.841 INFO:teuthology.orchestra.run.vm04.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T10:51:54.849 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-10T10:51:54.909 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T10:51:54.959 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-10T10:51:55.005 INFO:teuthology.orchestra.run.vm07.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-10T10:51:55.027 INFO:teuthology.orchestra.run.vm07.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T10:51:55.286 INFO:teuthology.orchestra.run.vm04.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-10T10:51:55.298 INFO:teuthology.orchestra.run.vm04.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-10T10:51:55.318 INFO:teuthology.orchestra.run.vm04.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-10T10:51:55.335 INFO:teuthology.orchestra.run.vm04.stdout:Removing zip (3.0-12build2) ... 
2026-03-10T10:51:55.366 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T10:51:55.376 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T10:51:55.422 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T10:51:55.429 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-10T10:51:55.447 INFO:teuthology.orchestra.run.vm04.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-10T10:51:55.459 INFO:teuthology.orchestra.run.vm07.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-10T10:51:55.470 INFO:teuthology.orchestra.run.vm07.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-10T10:51:55.489 INFO:teuthology.orchestra.run.vm07.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-10T10:51:55.506 INFO:teuthology.orchestra.run.vm07.stdout:Removing zip (3.0-12build2) ... 2026-03-10T10:51:55.529 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T10:51:55.538 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T10:51:55.584 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T10:51:55.591 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-10T10:51:55.610 INFO:teuthology.orchestra.run.vm07.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-10T10:51:56.919 INFO:teuthology.orchestra.run.vm04.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-10T10:51:56.921 INFO:teuthology.orchestra.run.vm04.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-10T10:51:57.085 INFO:teuthology.orchestra.run.vm07.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-10T10:51:57.085 INFO:teuthology.orchestra.run.vm07.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-10T10:51:59.087 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T10:51:59.090 DEBUG:teuthology.parallel:result is None 2026-03-10T10:51:59.126 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
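The `--force-yes is deprecated` warnings that close the removal phase on both nodes come from apt itself: since apt 1.1 that blanket switch has been superseded by narrower `--allow-*` options, and teuthology's uninstall step still passes the old spelling. A minimal sketch of the modern equivalent, using package names taken from the removal list above and only the overrides a removal could plausibly need:

    # replace the deprecated blanket --force-yes with specific --allow-* overrides
    sudo apt-get remove -y \
        --allow-change-held-packages \
        --allow-remove-essential \
        ceph-mgr-modules-core jq

The narrower flags matter because `--force-yes` also implied `--allow-unauthenticated` and `--allow-downgrades`, neither of which a package removal needs.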
2026-03-10T10:51:59.129 DEBUG:teuthology.parallel:result is None 2026-03-10T10:51:59.129 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm04.local 2026-03-10T10:51:59.129 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm07.local 2026-03-10T10:51:59.129 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-10T10:51:59.129 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-10T10:51:59.138 DEBUG:teuthology.orchestra.run.vm07:> sudo apt-get update 2026-03-10T10:51:59.177 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-get update 2026-03-10T10:51:59.722 INFO:teuthology.orchestra.run.vm04.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-10T10:51:59.724 INFO:teuthology.orchestra.run.vm04.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-10T10:51:59.742 INFO:teuthology.orchestra.run.vm07.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-10T10:51:59.824 INFO:teuthology.orchestra.run.vm04.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-10T10:51:59.894 INFO:teuthology.orchestra.run.vm07.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-10T10:51:59.923 INFO:teuthology.orchestra.run.vm04.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-10T10:52:00.027 INFO:teuthology.orchestra.run.vm07.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-10T10:52:00.170 INFO:teuthology.orchestra.run.vm07.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-10T10:52:00.808 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-10T10:52:00.821 DEBUG:teuthology.parallel:result is None 2026-03-10T10:52:01.075 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T10:52:01.088 DEBUG:teuthology.parallel:result is None 2026-03-10T10:52:01.088 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-10T10:52:01.090 INFO:teuthology.task.clock:Checking final clock skew... 2026-03-10T10:52:01.090 DEBUG:teuthology.orchestra.run.vm04:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T10:52:01.091 DEBUG:teuthology.orchestra.run.vm07:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout: remote refid st t when poll reach delay offset jitter 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout:============================================================================== 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout: ntp.ubuntu.com .POOL. 
16 p - 64 0 0.000 +0.000 0.000 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout:+time2.sebhostin 127.65.222.189 2 u 78 128 377 28.924 -0.579 0.100 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout:-time.cloudflare 10.214.8.5 3 u 15 128 377 20.422 -0.144 0.259 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout:-time.ndless.net 192.53.103.108 2 u 89 128 377 28.977 +0.049 0.374 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout:-static.215.156. 35.73.197.144 2 u 73 128 377 23.567 -0.728 0.098 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout:*vps-ber1.orlean 127.65.222.189 2 u 93 128 377 28.832 -0.455 0.131 2026-03-10T10:52:01.298 INFO:teuthology.orchestra.run.vm04.stdout:+ntp2.wtnet.de 10.129.9.96 2 u 85 128 377 30.545 +0.105 0.099 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout: remote refid st t when poll reach delay offset jitter 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout:============================================================================== 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout:+time.cloudflare 10.184.8.5 3 u 70 256 377 20.417 +3.331 0.286 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout:-time.ndless.net 192.53.103.108 2 u 93 256 377 28.921 +2.676 0.565 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout:-static.215.156. 35.73.197.144 2 u 89 256 377 23.522 +2.859 0.983 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout:+time2.sebhostin 127.65.222.189 2 u 94 256 377 28.988 +2.884 0.312 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout:-server2.as2.ch 189.97.54.122 2 u 84 256 377 25.054 +1.786 1.199 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout:-ntp5.kernfusion 237.17.204.95 2 u 150 256 377 28.843 +2.466 0.449 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout:*vps-ber1.orlean 127.65.222.189 2 u 163 256 377 28.824 +3.142 0.507 2026-03-10T10:52:01.299 INFO:teuthology.orchestra.run.vm07.stdout:-ntp2.wtnet.de 10.129.9.96 2 u 79 256 377 30.605 +3.683 0.301 2026-03-10T10:52:01.299 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-10T10:52:01.301 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-10T10:52:01.301 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-10T10:52:01.303 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-10T10:52:01.305 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-10T10:52:01.307 INFO:teuthology.task.internal:Duration was 2901.305276 seconds 2026-03-10T10:52:01.307 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-10T10:52:01.309 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 
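Before unwinding, the clock task re-checks skew with `ntpq -p`, falling back to `chronyc sources` (and then `|| true`) so the check works on hosts running either time daemon, or none. In the peer tables above, the leading tally character encodes selection state: `*` is the system peer each VM is actually synced to, `+` marks good candidates, `-` marks outliers discarded by ntpd's clustering algorithm, and the stratum-16 `.POOL.` rows are just unresolved pool placeholders. A minimal sketch for pulling out the selected peer's offset, assuming the standard ten-column `ntpq -p` layout:

    # print the offset in milliseconds of the '*'-tagged system peer
    ntpq -pn | awk '$1 ~ /^\*/ { print $9 }'

Here the offsets are comfortably small: about -0.46 ms on vm04 and +3.1 ms on vm07, so the task unwinds without complaint.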
2026-03-10T10:52:01.309 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T10:52:01.310 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T10:52:01.334 INFO:teuthology.task.internal.syslog:Checking logs for errors... 2026-03-10T10:52:01.334 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm04.local 2026-03-10T10:52:01.334 DEBUG:teuthology.orchestra.run.vm04:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-10T10:52:01.383 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm07.local 2026-03-10T10:52:01.383 DEBUG:teuthology.orchestra.run.vm07:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-10T10:52:01.393 INFO:teuthology.task.internal.syslog:Gathering journalctl... 2026-03-10T10:52:01.393 DEBUG:teuthology.orchestra.run.vm04:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T10:52:01.426 DEBUG:teuthology.orchestra.run.vm07:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T10:52:01.617 INFO:teuthology.task.internal.syslog:Compressing syslogs...
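The error scan a few entries above is a single allowlist pipeline per node: grep kern.log for `BUG`, `INFO`, or `DEADLOCK` markers, strip every known-benign pattern with a chain of `grep -v` exclusions, and keep at most one survivor via `head -n 1`. An empty result is a pass; any surviving line is treated as a syslog error for the job. A condensed sketch of the same shape, with the exclusion list abridged and the path shortened:

    # report the first kernel message the allowlist cannot explain
    grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' kern.log \
      | grep -v CRON \
      | grep -v 'lockdep is turned off' \
      | head -n 1

`--binary-files=text` keeps grep from collapsing to a "binary file matches" message if the log picked up stray NUL bytes.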
2026-03-10T10:52:01.617 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T10:52:01.618 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T10:52:01.624 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-10T10:52:01.624 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T10:52:01.624 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T10:52:01.624 INFO:teuthology.orchestra.run.vm04.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-10T10:52:01.624 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-10T10:52:01.625 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-10T10:52:01.625 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T10:52:01.625 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T10:52:01.625 INFO:teuthology.orchestra.run.vm07.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-10T10:52:01.625 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-10T10:52:01.649 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 93.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-10T10:52:01.659 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 95.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-10T10:52:01.660 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo 2026-03-10T10:52:01.662 INFO:teuthology.task.internal:Restoring /etc/sudoers... 
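The compression step fans out with `find -print0 | xargs -0 --max-args=1 --max-procs=0`: one `gzip -5` process per log file, where GNU xargs reads `--max-procs=0` as "run as many in parallel as possible". That parallelism is why the `--verbose` stderr above interleaves, with vm04's ratio report for misc.log landing in the middle of the command echo for journalctl.log. The same pattern reduced to its core, with `DIR` standing in for the syslog archive path:

    # compress every .log under DIR in parallel, one gzip process per file
    find "$DIR" -name '*.log' -print0 \
      | xargs -0 --max-args=1 --max-procs=0 --no-run-if-empty gzip -5 --verbose

The 0.0% ratios on misc.log and kern.log just reflect near-empty inputs, while the journalctl dumps shrink by 93-95%.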
2026-03-10T10:52:01.662 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-10T10:52:01.709 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-10T10:52:01.716 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump 2026-03-10T10:52:01.718 DEBUG:teuthology.orchestra.run.vm04:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-10T10:52:01.750 DEBUG:teuthology.orchestra.run.vm07:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-10T10:52:01.756 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = core 2026-03-10T10:52:01.765 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = core 2026-03-10T10:52:01.773 DEBUG:teuthology.orchestra.run.vm04:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-10T10:52:01.808 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T10:52:01.808 DEBUG:teuthology.orchestra.run.vm07:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-10T10:52:01.817 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T10:52:01.817 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive 2026-03-10T10:52:01.819 INFO:teuthology.task.internal:Transferring archived files... 2026-03-10T10:52:01.819 DEBUG:teuthology.misc:Transferring archived files from vm04:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/995/remote/vm04 2026-03-10T10:52:01.819 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-10T10:52:01.860 DEBUG:teuthology.misc:Transferring archived files from vm07:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/995/remote/vm07 2026-03-10T10:52:01.860 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-10T10:52:01.867 INFO:teuthology.task.internal:Removing archive directory... 2026-03-10T10:52:01.867 DEBUG:teuthology.orchestra.run.vm04:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-10T10:52:01.902 DEBUG:teuthology.orchestra.run.vm07:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-10T10:52:01.913 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload 2026-03-10T10:52:01.916 INFO:teuthology.task.internal:Not uploading archives. 2026-03-10T10:52:01.916 DEBUG:teuthology.run_tasks:Unwinding manager internal.base 2026-03-10T10:52:01.918 INFO:teuthology.task.internal:Tidying up after the test... 
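Two teardown idioms are visible above. First, the coredump check succeeds by failing: `rmdir --ignore-fail-on-non-empty` removes the coredump directory only when nothing was captured, so the follow-up `test -e` exiting nonzero (the "got remote process result: 1" lines) is the no-cores happy path. Second, the archive pull streams a tarball over the SSH channel instead of copying files one by one: `tar c -f -` writes the archive to stdout, which teuthology unpacks under the job's archive path. Roughly the same transfer done by hand, with a destination directory chosen to mirror the log:

    # stream a remote directory into a local one over ssh
    mkdir -p ./remote/vm04
    ssh ubuntu@vm04.local 'sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .' \
      | tar x -C ./remote/vm04

Streaming avoids staging a temporary tarball on the node and keeps the modes and timestamps that tar records.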
2026-03-10T10:52:01.918 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-10T10:52:01.947 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-10T10:52:01.949 INFO:teuthology.orchestra.run.vm04.stdout: 258077 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 10:52 /home/ubuntu/cephtest 2026-03-10T10:52:01.957 INFO:teuthology.orchestra.run.vm07.stdout: 258076 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 10:52 /home/ubuntu/cephtest 2026-03-10T10:52:01.958 DEBUG:teuthology.run_tasks:Unwinding manager console_log 2026-03-10T10:52:01.964 INFO:teuthology.run:Summary data: description: orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} duration: 2901.3052756786346 flavor: default owner: kyr success: true 2026-03-10T10:52:01.964 DEBUG:teuthology.report:Pushing job info to http://localhost:8080 2026-03-10T10:52:01.982 INFO:teuthology.run:pass
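The last remote command pairs `find /home/ubuntu/cephtest -ls` with a bare `rmdir --` as a leak check: the listing shows exactly one entry per VM (the now-empty test directory itself), so the rmdir succeeds and the workspace is confirmed clean. The summary then records success: true with a duration of 2901.3 seconds, about 48 minutes, and the job info is pushed to the reporting endpoint at http://localhost:8080. Assuming teuthology's usual summary.yaml in the job's archive directory, a quick scripted check for the same verdict might look like:

    # exit 0 (and print PASS) only if the job summary recorded success
    grep -q '^success: *true' summary.yaml && echo PASS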